Did OpenAI Nerf GPT-4? Rumors Swirl Amid FTC Investigation
Rumors have been swirling around the internet that OpenAI has nerfed the performance of GPT-4, its largest and most capable model available to the public. Users on Twitter and the OpenAI developer forum have been calling the model “lazier” and “dumber,” saying it now gives faster but less accurate answers than the slower, more precise responses it produced at launch.
An Insider report says industry insiders are questioning whether OpenAI has redesigned its GPT-4 model. Some have suggested the company could be running a group of smaller GPT-4 models that act as one model and are less expensive to operate. This approach is called a Mixture of Experts, or MoE, where smaller expert models are trained on specific tasks and subject areas. When asked a question, GPT-4 would know which expert to query, might send the query to more than one of them, and would merge the results. OpenAI did not respond to Insider’s request for comment on this matter.
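OpenAI has not confirmed any of this, so purely as an illustration, here is a toy sketch of how a Mixture-of-Experts router could work in principle. Everything in it, including the expert names, the keyword-based gating, and the top-k merging, is hypothetical and not based on anything OpenAI has disclosed about GPT-4.

```python
# Toy Mixture-of-Experts sketch: a gating function scores the experts,
# the top-k experts answer, and their outputs are merged.
# All names and logic here are illustrative, not OpenAI's implementation.
from typing import Callable

# Each "expert" is just a function specialized for one kind of prompt.
EXPERTS: dict[str, Callable[[str], str]] = {
    "code":    lambda q: f"[code expert] answer to: {q}",
    "math":    lambda q: f"[math expert] answer to: {q}",
    "general": lambda q: f"[general expert] answer to: {q}",
}

def gate(query: str) -> dict[str, float]:
    """Toy gating network: score each expert by keyword overlap.
    A real MoE learns these routing weights jointly with the experts."""
    keywords = {
        "code": {"python", "bug", "function", "compile"},
        "math": {"integral", "proof", "equation", "sum"},
    }
    words = set(query.lower().split())
    scores = {name: len(words & kw) + 0.1 for name, kw in keywords.items()}
    scores["general"] = 0.5  # fallback expert always gets some weight
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

def moe_answer(query: str, top_k: int = 2) -> str:
    """Route the query to the top-k experts and merge their outputs."""
    weights = gate(query)
    chosen = sorted(weights, key=weights.get, reverse=True)[:top_k]
    parts = [f"{EXPERTS[name](query)} (weight={weights[name]:.2f})" for name in chosen]
    return " | ".join(parts)

if __name__ == "__main__":
    print(moe_answer("How do I fix this Python function bug?"))
```

The appeal of this design is that only the selected experts run for a given query, so serving cost scales with the number of experts consulted rather than with one giant monolithic model, which is what would make it cheaper to run.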
Whether or not GPT-4 is actually “dumber,” OpenAI is also in the news this week due to a new investigation opened by the Federal Trade Commission.
The FTC is looking into whether ChatGPT has harmed consumers through its collection of data and publication of false information on individuals. The agency sent a 20-page letter to OpenAI this week with dozens of questions about how the startup trains its models and how it governs personal data.
The letter detailed how the FTC is examining whether OpenAI “engaged in unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers.”
===
What is your take on this issue? At OpenAI’s DevDay, Sam Altman said the company had “cut” prices on the new GPT-4 while delivering higher performance. But what about the actual user experience? Are users seeing slowdowns or improvements?
OpenAI’s promise:
GPT-4 and GPT-4 Turbo
GPT-4 is a large multimodal model (accepting text or image inputs and outputting text) that can solve difficult problems with greater accuracy than any of our previous models, thanks to its broader general knowledge and advanced reasoning capabilities. GPT-4 is available in the OpenAI API to paying customers. Like gpt-3.5-turbo
, GPT-4 is optimized for chat but works well for traditional completions tasks using the Chat Completions API. Learn how to use GPT-4 in our GPT guide.
| MODEL | DESCRIPTION | CONTEXT WINDOW | TRAINING DATA |
|---|---|---|---|
| gpt-4-1106-preview | GPT-4 Turbo (new): the latest GPT-4 model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens. This preview model is not yet suited for production traffic. | 128,000 tokens | Up to Apr 2023 |
| gpt-4-vision-preview | GPT-4 Turbo with vision (new): the ability to understand images, in addition to all other GPT-4 Turbo capabilities. Returns a maximum of 4,096 output tokens. This is a preview model version and not yet suited for production traffic. | 128,000 tokens | Up to Apr 2023 |
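Since the vision row above advertises image inputs, here is a minimal sketch of how an image can be passed alongside text through the Chat Completions API. It assumes the official openai Python package (v1.x) and an OPENAI_API_KEY set in the environment; the image URL and prompt are placeholders.

```python
# Minimal sketch: sending text plus an image to the GPT-4 Turbo vision preview.
# Assumes the official `openai` package (v1.x) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            # The content list mixes a text part and an image part.
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/sample-photo.jpg"}},  # placeholder URL
            ],
        }
    ],
    max_tokens=300,  # the preview models cap output at 4,096 tokens
)

print(response.choices[0].message.content)
```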
In my own testing, you need to have spent at least $250 to get access to this new feature, which is a bummer, while ChatGPT’s GPT-4 is still stuck at a combined input/output context length of 4k tokens.
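For anyone who wants to check the text model from the table for themselves, a minimal Chat Completions call against gpt-4-1106-preview looks roughly like this, again assuming the openai v1.x Python package and an API key in the environment; the prompt and max_tokens value are just placeholders, and whether the model shows up for you depends on your account’s usage tier.

```python
# Minimal sketch: a text-only call to the GPT-4 Turbo preview (128k context window
# per the table above). Assumes the `openai` v1.x package and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the FTC's questions to OpenAI in two sentences."},
    ],
    max_tokens=512,  # output is capped at 4,096 tokens even with the 128k context
)

print(response.choices[0].message.content)
```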