OpenAI Launches GPT-4, a Multimodal AI System with Improved Creativity and Reduced Bias

OpenAI, the renowned AI research lab, has released GPT-4, the latest version of the language model system that powers ChatGPT. The upgraded system is designed to be more creative, less biased, and less prone to fabricating facts than its predecessor. The new model is “multimodal”, meaning it can accept images as well as text as inputs, allowing users to ask questions about pictures.

According to OpenAI co-founder Sam Altman, GPT-4 is the company’s most capable and aligned model yet. It can handle massive text inputs, remembering and acting on more than 25,000 words at once, enough to take an entire novella as a single prompt. The latest version is available to users of ChatGPT Plus, the paid-for version of the ChatGPT chatbot, which provided some of the training data for the latest release.
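To make the long-context claim concrete, the sketch below shows one way a developer might pass a book-length text to a chat completion endpoint. It is a minimal illustration, assuming the official openai Python SDK (v1 or later) and an API key in the environment; the model name, file name, and prompt wording are placeholders, not details from the announcement.

```python
# Minimal sketch: using a long document as a single prompt.
# Assumes the official `openai` Python SDK (v1+); the model name and
# file path below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load a book-length text; a large-context model can attend to all of it.
with open("novella.txt", encoding="utf-8") as f:
    novella = f.read()

response = client.chat.completions.create(
    model="gpt-4",  # substitute whichever large-context variant is available
    messages=[
        {"role": "system", "content": "You are a careful literary analyst."},
        {
            "role": "user",
            "content": "Summarize the main plot arcs of this novella:\n\n" + novella,
        },
    ],
)

print(response.choices[0].message.content)
```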

OpenAI has also partnered with commercial companies to offer GPT-4-powered services. For instance, a new subscription tier of the language learning app Duolingo, Duolingo Max, will now offer English-speaking users AI-powered conversations in French or Spanish, and can use GPT-4 to explain the mistakes language learners have made. Stripe, a payment processing company, is using GPT-4 to answer support questions from corporate users and to help flag potential scammers in the company’s support forums.

Duolingo’s principal product manager, Edwin Bodge, noted that artificial intelligence has always been a massive part of the company’s strategy, using it for personalizing lessons and running Duolingo English tests. However, the company wanted to fill gaps in the learner’s journey, such as conversation practice and contextual feedback on mistakes. The company’s experiments with GPT-4 convinced it that the technology was capable of providing those features, with “95%” of the prototype created within a day.

During a demo of GPT-4 on Tuesday, OpenAI president and co-founder Greg Brockman also gave users a sneak peek at the image-recognition capabilities of the newest version of the system. The function allows GPT-4 to analyze images submitted alongside prompts and to answer questions or perform tasks based on those images. Brockman noted that GPT-4 is not just a language model; it is also a vision model that can flexibly accept inputs that intersperse images and text arbitrarily, much like a document.
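In API terms, an interspersed image-and-text input looks roughly like the sketch below, which uses the content-parts message format of the openai Python SDK. This is an assumption-laden illustration rather than the demo’s actual code: the model name and image URL are placeholders, and image input requires a vision-capable GPT-4 variant.

```python
# Sketch: a single user message that intersperses text and an image,
# using the content-parts format of the `openai` Python SDK (v1+).
# The model name and image URL are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo",  # assumed vision-capable GPT-4 variant
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is unusual about this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/chart.png"},
                },
                {"type": "text", "text": "Answer in one short paragraph."},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```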

OpenAI claims that GPT-4 addresses many of the criticisms users had of the previous version of the system. However, it still “hallucinates” facts, meaning it may make up information when it does not know the exact answer. OpenAI therefore warns users to take great care with language model outputs, especially in high-stakes contexts, since the model can still be coaxed into upsetting or abusive responses by the wrong prompts.

GPT-4 is a significant improvement over its predecessor: more capable, less biased, and less likely to make up facts. New features such as multimodal inputs and image recognition make it more versatile and powerful. Even so, users must remain vigilant and treat its outputs with caution, especially in high-stakes contexts.
