We really are thundering into the AI era. It feels like only yesterday that ChatGPT redefined the power of an AI language model, and now its successor has arrived in the form of GPT-4.
Right now there is a lot of buzz around this latest iteration of the now-famous AI, and naturally some large exaggerations are being thrown about, so I thought it would be both informative and comforting to take a proper look at what is so special about GPT-4.
What are the differences?
Whilst on the surface there might not appear to be much in it, the key differences between ChatGPT and GPT-4 are significant. GPT-4 is a large multimodal model that can accept both image and text inputs and emit text outputs. In simpler terms, it's like teaching a computer to learn from more than one type of input: a multimodal model might, for example, be trained to recognise a cat in a picture based on both the visual appearance of the cat and a text description of the image. By combining information from multiple sources, these models can be more accurate and flexible in understanding the world around them. This should help GPT-4 avoid some of the criticisms that were levelled at ChatGPT.
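To make the "cat in a picture" idea concrete, here is a deliberately tiny Python sketch of one common multimodal approach, late fusion: features from each input type are combined and scored together. Everything here (the function name, the feature values, the weights) is invented purely for illustration; GPT-4's actual architecture is far more complex and has not been made public.

```python
# Toy illustration of multimodal "late fusion": features extracted from an
# image and from a text caption are concatenated, then scored by a single
# linear classifier. All numbers below are made up for illustration.

def classify_cat(image_features, text_features, weights):
    """Score a combined (image + text) example with a simple linear model."""
    combined = image_features + text_features  # join the two "views"
    score = sum(w * f for w, f in zip(weights, combined))
    return score > 0  # True means "this looks like a cat"

# Hypothetical features: [pointy ears, whiskers] from the image,
# [caption mentions "cat", caption mentions "dog"] from the text.
image_features = [0.9, 0.8]
text_features = [1.0, 0.0]
weights = [1.0, 1.0, 1.5, -1.5]

print(classify_cat(image_features, text_features, weights))  # True
```

The point of the sketch is simply that the decision draws on both inputs at once: weak visual evidence can be rescued by a clear caption, and vice versa, which is one reason multimodal models tend to be more robust than single-input ones.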
While GPT-4 is not as capable as humans in many real-world scenarios, it exhibits near human-level performance on various professional and academic benchmarks. Something that has impressed me is that it is able to pass a simulated bar exam with a score around the top 10% of test takers; ChatGPT's score, by contrast, was around the bottom 10%. This is an incredibly impressive feat, and something that should not be shrugged off. An AI now exists with legal knowledge that rivals the top 10% of exam candidates. These aren't just facts being regurgitated; the knowledge is contextualised and applied.
GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than ChatGPT. OpenAI spent six months iteratively aligning GPT-4 using lessons from their adversarial testing program as well as ChatGPT, resulting in their best-ever results on factuality, steerability, and refusing to go outside of guardrails. Additionally, GPT-4 was trained on a supercomputer co-designed by OpenAI and Azure, resulting in unprecedented stability during training.
Some cool things GPT-4 can do
- Duolingo's new "Max" subscription uses GPT-4 to help users practise conversational skills with a chatbot in 26 languages.
- GPT-4's image recognition and deciphering feature can recommend recipes based on pictures users upload, including those of the inside of their fridge.
- Google Chrome extensions can be quickly created without coding experience.
- GPT-4's chatbot can help plan holidays, collating information and making suggestions based on preferences.
- An OpenAI employee has shown that GPT-4 can accurately file taxes, but it should not be solely relied upon.
- GPT-4 performs better on technical questions, including maths problems, and can detect trick questions more consistently.
- GPT-4's multimodal function can describe images to blind people and explain why a picture is funny or sad.
- It can produce a fully functional web page from a sketch using its image input function.
GPT-4 is a significant step forward in the AI era. Its ability to learn from multiple types of input and exhibit near human-level performance on various benchmarks is impressive. GPT-4's reliability, creativity, and ability to handle nuanced instructions make it a valuable tool in a range of applications, from language learning to web development to tax filing. While GPT-4 is not perfect and should not be solely relied upon in certain scenarios, it represents a giant leap forward for mankind in the field of AI.
As we continue to explore the capabilities of GPT-4 and other AI technologies, it's important to remember the potential benefits they can bring to society while also being mindful of their limitations and potential risks.
Read more about our thoughts on AI here.
Dan is part of Crowd's Copywriting team. He has a passion for content marketing and all things words.