ChatGPT is an AI chatbot developed by OpenAI, trained on massive amounts of text data to generate human-like responses to prompts. The current version runs on GPT-3.5, and it has been used to pass exams, write software, deliver sermons, and give relationship advice. GPT-4, the upcoming version, is rumoured to introduce ‘multimodal models’ that can generate content in multiple formats, such as audio clips, images, and video clips, from a text prompt. Microsoft Germany CTO Andreas Braun revealed that GPT-4 will be released this week, and the announcement has sparked speculation about the features it will possess, including the ability to handle longer text prompts, output text, images, sounds, and videos, and generate computer code.
GPT-4 and Multimodal Models
GPT-4 is expected to launch this week with a new feature: ‘multimodal models.’ This is an upgrade over the current version, which is limited to providing responses as text. Multimodal models would allow ChatGPT to generate content in various formats, such as audio clips, images, and video clips, from a text prompt, opening up new possibilities for users, including the generation of unique videos.
The multimodal models in GPT-4 are rumoured to have significantly more parameters than the current version, giving the model more options for the next word or next sentence in a given context and making its output more human-like. These new capabilities have the potential to transform the way people use AI chatbots, as users would no longer be limited to text responses.
The Impact of GPT-4 on the AI Chatbot Market
The release of GPT-4 is expected to have a significant impact on the AI chatbot market. ChatGPT’s success and OpenAI’s collaboration with Microsoft have pushed other tech giants to develop their own AI chatbots. Google was reportedly “rushed” into releasing its chatbot, Bard, and when Bard got a question wrong in a promotional video, the mistake wiped £100 billion from the company’s value.
The release of GPT-4 also has the potential to affect other industries, such as video creation. Rival tech giant Meta already has its own text-to-video AI system, Make-A-Video, though its output has notable limitations. If the rumoured features materialise, ChatGPT could become the first AI chatbot to generate high-resolution, high-quality video.
In short, GPT-4 is expected to launch this week with multimodal models that allow it to generate content in multiple formats from a text prompt. These capabilities could transform the AI chatbot market and ripple into other industries, and the release is eagerly awaited by users and tech enthusiasts alike.
The concept of generating videos from text prompts is not entirely new. Last September, Meta unveiled its own AI system that generates videos from text prompts. Meta’s system, Make-A-Video, was trained on captioned images, to help it learn about the world and how it is described, and on unlabelled videos, to determine how the world moves. However, the resulting clips, while impressive, tend to be blurry and lack sound.
ChatGPT-4 would be OpenAI’s first foray into video generation, but the company has already developed a text-to-image AI, DALL-E. In 2020, it also announced Jukebox, a tool that creates music from a prompt and can mimic the style of different artists.
According to Mr Braun, the new ChatGPT will “make the models comprehensive.” At the ‘AI in Focus’ event, which was broadcast to Microsoft partners and potential customers, he did not reveal whether GPT-4 would be released by itself or as part of a product. Microsoft does have an event planned for Thursday, due to showcase “the future of AI,” which may provide more information.
Rumours about what this update will look like have been swirling since 2021, with Wired speculating that it would use 100 trillion parameters. These would give it far more “next word” or “next sentence” options in a given context than it has currently, making it more human-like. However, OpenAI CEO Sam Altman has dismissed these rumours, calling them “total bulls**t.”
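The “more next-word options” idea can be illustrated with a toy sketch. This is purely illustrative and not OpenAI’s implementation: the words and probabilities below are hypothetical, but the mechanism (a language model assigns probabilities to candidate next words and one is sampled) is the standard one.

```python
import random

# Hypothetical probabilities a model might assign to the next word
# after the prompt "the sky is". A larger model can rank many more
# plausible continuations, which is what makes its output richer.
next_word_probs = {
    "blue": 0.6,
    "clear": 0.25,
    "falling": 0.1,
    "azure": 0.05,
}

def sample_next_word(probs, seed=None):
    """Pick the next word in proportion to its probability."""
    rng = random.Random(seed)
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Sampling (rather than always picking the top word) is why the same
# prompt can yield different, human-like responses on different runs.
print(sample_next_word(next_word_probs))
```

More parameters do not add slots to a table like this one; rather, they let the model compute sharper, more context-sensitive probability distributions over its entire vocabulary.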
Others have said GPT-4 will be better at generating computer code, will handle longer text prompts, and will be able to output text, images, sounds, and videos. Mr Altman told the podcast “AI for the Next Era”: “I think we’ll get multimodal models in not that much longer, and that’ll open up new things.”
The new ChatGPT-4 update is a significant development in the field of AI-assisted language processing, particularly in the area of generating multimodal content. Its ability to produce text, images, sounds, and videos from a single text prompt will have implications in a range of industries, from entertainment and advertising to education and healthcare. However, as with any technological advancement, there are concerns about the potential misuse of AI-generated content, particularly in the realm of disinformation and fake news. As such, it is crucial that developers of AI systems like ChatGPT-4 prioritize ethical considerations in their design and implementation.