OpenAI Releases GPT-4, A Multimodal AI

According to OpenAI, GPT-4 was developed over six months, drawing on lessons from the company's internal adversarial testing program as well as from ChatGPT.

The company claims that the new model produces the “best-ever results” in terms of factuality, steerability, and staying within guardrails.

Like previous GPT models, GPT-4 was trained using publicly available data and data that OpenAI licensed. OpenAI collaborated with Microsoft to develop a supercomputer in the Azure cloud for training GPT-4.

One of the most notable improvements in GPT-4 is that it accepts images as well as text as input, and it can interpret relatively complex images — for example, identifying a Lightning Cable adapter from a photo of a plugged-in iPhone.
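Image input was not part of the public API at launch, but as an illustration of the capability, here is a minimal sketch of passing an image alongside a text question, assuming the `image_url` content format that OpenAI's Chat Completions API later exposed; the model name, API key, and image URL below are placeholders:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Minimal sketch: send an image plus a text question to a vision-capable
# chat model. Assumes the `image_url` content format OpenAI later shipped;
# the model name and URL are placeholders, not the launch-day API.
response = openai.ChatCompletion.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is unusual about this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

print(response["choices"][0]["message"]["content"])
```

The model returns an ordinary text completion that describes or reasons about the image.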

For now, image understanding is being tested with a single partner, Be My Eyes, where GPT-4 powers the Virtual Volunteer feature.

A more meaningful improvement, potentially, is GPT-4's steerability tooling, which lets developers prescribe both style and task by giving specific directions in "system" messages, as in the sketch below.
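Here is a minimal sketch of steering GPT-4 with a system message via OpenAI's Chat Completions API; the Socratic-tutor persona echoes OpenAI's own demo, while the exact prompt wording is made up for the example:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The "system" message prescribes style and task up front; the model then
# keeps its subsequent answers within that persona and scope.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a Socratic tutor. Never give the answer directly; "
                "respond only with guiding questions."
            ),
        },
        {"role": "user", "content": "How do I solve 3x + 5 = 20?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Changing only the system message changes the model's behavior without any retraining — that is the steerability OpenAI is describing.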

GPT-4 still makes reasoning errors and "hallucinates" facts, and it lacks knowledge of events after September 2021. On the safety side, however, OpenAI notes that GPT-4 is more likely to refuse requests for instructions on synthesizing dangerous chemicals, and overall it is 82% less likely to respond to requests for "disallowed" content than GPT-3.5.

Despite its imperfections, OpenAI is confident that GPT-4 will prove a valuable tool, improving people's lives by powering a wide range of applications.
