OpenAI showcases GPT-4, the next-gen AI language model that makes ChatGPT look like a relic

OpenAI’s new AI language model, GPT-4, will address the shortcomings of the current GPT-3.5-powered ChatGPT. The new GPT-4 supports multimodal input and faster computing.

Come to think of it, ChatGPT, the popular AI text generator, is barely six months old. The ChatGPT we know today is based on GPT-3.5, a fairly complex AI language model that took the world by storm thanks to its abilities. One can only imagine what GPT-4, the next generation of AI language models, is capable of.

OpenAI, the startup that developed GPT and ChatGPT, has finally revealed the much-anticipated GPT-4, its most capable and aligned AI language model yet and the latest in the line of models that power applications like ChatGPT and the new Bing.

How powerful is GPT-4?

Putting all rumours about GPT-4 to rest, Sam Altman and OpenAI have finally demoed GPT-4, almost a week earlier than they initially planned to.

According to OpenAI, the model is “more creative and collaborative than ever before” and “can handle challenging issues more accurately.” Although it can parse both text and image data, it can only reply in text, putting to rest the rumours that it can generate videos out of textual prompts.

OpenAI also warns that the systems have many of the same flaws as previous language models, such as the ability to make up information (or “hallucinate”) and produce violent and harmful text.

Huge Demand for GPT-4

Seeing how things went with the GPT-3.5-powered ChatGPT, a number of developers and businesses are already willing to invest in GPT-4-enabled services, even before they get to try out the new language model.

OpenAI has already collaborated with a number of businesses to integrate GPT-4 into their products. These include Duolingo, PayPal, and Khan Academy.

The updated model is now accessible to the general public through ChatGPT Plus, OpenAI’s $20 monthly ChatGPT membership, and can also be found powering Microsoft’s Bing chatbot. It will also be available as an API for coders to use. There already is a pretty long waitlist, and OpenAI claims it will begin accepting new members from Monday.
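For developers waiting on API access, the shape of a request is simple. As a minimal sketch (the exact endpoint and client-library interface are assumptions and may differ by version), a GPT-4 call boils down to a JSON payload naming the model and a list of chat messages:

```python
import json

# A hypothetical GPT-4 request body for a chat-style completions API.
# The "gpt-4" model name comes from OpenAI's announcement; the message
# format mirrors the ChatGPT-era chat schema.
payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise what is new in GPT-4."},
    ],
    "temperature": 0.7,  # sampling temperature; lower is more deterministic
}

# Serialised, this is roughly what would be POSTed to the API endpoint.
print(json.dumps(payload, indent=2))
```

Because input and output are plain text messages, upgrading an application from GPT-3.5 to GPT-4 is, in principle, little more than a model-name change.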

How is GPT-4 different from ChatGPT’s GPT-3.5?

OpenAI stated in a research blog post that the difference between GPT-4 and its predecessor GPT-3.5 is “subtle” in casual conversation (GPT-3.5 is the model that powers ChatGPT). GPT-4, according to OpenAI CEO Sam Altman, “is still flawed, still restricted,” but it also “seems more remarkable on first use than it does after you spend more time with it.”

OpenAI claims GPT-4’s advances are apparent in the system’s performance on a number of tests and benchmarks, including the Uniform Bar Exam, the LSAT, SAT Maths, and the SAT Evidence-Based Reading & Writing examinations. The new AI language model scored in the 88th percentile or higher on these tests, among others.

GPT-4 and its powers have been the subject of much speculation over the last year, with many predicting a significant improvement over earlier systems. However, based on OpenAI’s announcement, the increase will be more iterative, as the firm had previously cautioned.

“People are begging to be disappointed and they will be,” Altman said about GPT-4 in a January interview.

GPT-4’s multimodality

The rumour mill was fueled further last week when a Microsoft official revealed in an interview with the German press that the system would debut this week.

The executive also indicated that the system would be multi-modal, capable of producing not only text but also other forms. Many AI experts think that multi-modal systems that combine text, audio, and video are the best way to develop more capable AI systems.

GPT-4 is indeed multimodal, but in fewer media than some predicted. OpenAI says the system can accept both text and image inputs and emit text outputs. The company says the model’s ability to parse text and images simultaneously allows it to interpret more complex input.

GPT-4’s shortcomings

AI language models have caused their share of problems. The education system is still adjusting to the existence of software that writes fairly articulate college essays, and online sites such as Stack Overflow have had to close submissions due to an influx of AI-generated content. However, some experts contend that the negative impacts have been less severe than expected.

OpenAI emphasised in its introduction of GPT-4 that the system had undergone six months of safety training and that in internal tests, it was “82 per cent less likely to respond to requests for prohibited material and 40 per cent more likely to generate accurate answers than GPT-3.5.”

However, this does not mean the system can no longer make errors or produce harmful material. For example, Microsoft disclosed that its Bing chatbot has been driven by GPT-4 all along, and many users were able to circumvent Bing’s safeguards in a variety of creative ways, causing the bot to give dangerous advice, threaten users, and make up information.

GPT-4 is also still unaware of events that occurred after its training data cut-off in September 2021.
