GPT-4, the newest version of OpenAI’s AI language model, has been released.

GPT-4, the latest in OpenAI’s line of AI language models that power tools like ChatGPT and the new Bing, has been officially released after months of rumors and speculation.

The company says the model “can tackle challenging issues with better accuracy” and is “more creative and collaborative than ever before.” It can interpret both text and image input, but it responds only in text. OpenAI also warns that the system still has many of the same problems as earlier language models, including a tendency to “hallucinate” and the capacity to produce offensive and violent material.

OpenAI says it has already partnered with several businesses to integrate GPT-4 into their products, including Duolingo, Stripe, and Khan Academy. The new model powers Microsoft’s Bing chatbot and is accessible to the general public via ChatGPT Plus, OpenAI’s $20-per-month ChatGPT subscription. Developers will also be able to access it as an API to build on. (There is a waitlist; OpenAI says it will begin admitting users today.)


OpenAI points to the system’s performance on various assessments and benchmarks, including the Uniform Bar Exam, LSAT, SAT Math, and SAT Evidence-Based Reading & Writing exams, as evidence of GPT-4’s advances. On those tests, GPT-4 scored in the 88th percentile or higher; a complete list of exams and the system’s results is available on OpenAI’s website.

Over the past year, there has been a great deal of speculation about GPT-4 and its capabilities, with many predicting a dramatic leap over current systems. But, as the company had previously cautioned, OpenAI’s release suggests the improvement is more iterative.

Speaking about GPT-4 in a January interview, OpenAI CEO Sam Altman remarked: “People are begging to be disappointed and they will be. The hype is basically saying that we don’t actually have an AI, even if that is sort of what is expected of us.”

The rumor mill gained additional momentum last week after a Microsoft executive revealed in an interview with the German press that the system would debut this week.


The road to GPT-4 has been a long one; OpenAI and AI language models in general gained popularity gradually over several years before exploding into the mainstream in recent months.

The original research paper introducing GPT was published in 2018; GPT-2 and GPT-3 followed in 2019 and 2020, respectively. These models are trained on vast datasets of text, much of it scraped from the web, which is mined for statistical patterns. The models then use those patterns to predict the word most likely to come next. Although the method is fairly straightforward to describe, the result is a flexible system that can generate, summarize, and rephrase text, as well as carry out other text-based tasks like translation or writing code.
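The core idea of predicting the next word from statistical patterns can be illustrated with a toy sketch. The snippet below uses a simple bigram count over a tiny hypothetical corpus — a drastic simplification of the transformer models GPT actually uses, but the same underlying task: given what came before, guess what word comes next.

```python
from collections import Counter, defaultdict

# Tiny hypothetical corpus; real models train on web-scale text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a bigram model -- a toy
# stand-in for the learned patterns in a large language model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- it follows "the" most often here
```

A real model replaces these raw counts with billions of learned parameters and conditions on the entire preceding context rather than a single word, but generation works the same way: repeatedly predict the next token and append it.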


OpenAI initially limited the release of its GPT models out of concern that they would be misused to spread misinformation and spam. In late 2022, however, the company released ChatGPT, a freely accessible conversational chatbot built on GPT-3.5. The launch of ChatGPT set off a frenzy in the tech industry, with Google racing to catch up and Microsoft quickly following with its own AI chatbot, Bing (part of the Bing search engine).

As expected, the increased accessibility of these AI language models has created problems and challenges. The education system is still adapting to software that can produce passable undergraduate essays. Online platforms like Stack Overflow and the science fiction magazine Clarkesworld have had to curb submissions because of an influx of AI-generated content. Yet some experts contend that the harmful effects have been less severe than initially feared.

According to OpenAI, the system underwent six months of safety training, and in internal tests it was “80% less likely to reply to requests for banned content and 40% more likely to deliver factual responses than GPT-3.5.”

That doesn’t mean the system never makes errors or produces harmful material, though. For instance, Microsoft revealed that its Bing chatbot had been powered by GPT-4 all along, and many users were able to bypass the chatbot’s guardrails in a variety of inventive ways, prompting it to give dangerous advice, threaten individuals, and fabricate information. GPT-4 also remains unaware of events “that have taken place after the great majority of its data cutoff” in September 2021.

