Is GPT-4 dumbed down (nerfed)? Is GPT-4 vision only available to a specific group of people as of now (Nov 2023)?

Did OpenAI Nerf GPT-4? Rumors Swirl Amid FTC Investigation

Rumors have been swirling around the internet that OpenAI has nerfed the performance of GPT-4, its largest and most capable model available to the public. Users on Twitter and the OpenAI developer forum were calling the model “lazier” and “dumber” after it appeared to be giving faster but less accurate answers compared to the slower but more precise responses it initially gave.

An Insider report says industry insiders are questioning whether OpenAI has redesigned its GPT-4 model. Some have said the company could be creating a group of smaller GPT-4 models that act as one model and are less expensive to run. This approach is called a Mixture of Experts, or MoE, where smaller expert models are trained on specific tasks and subject areas. When asked a question, GPT-4 would know which model to query and might send the query to more than one of these expert models and mash up the results. OpenAI did not respond to Insider’s request for comment on this matter.
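The routing idea behind MoE can be shown with a minimal sketch. Everything here (the expert names, the keyword-based gate, the top-2 weighting) is purely illustrative and is not OpenAI’s actual architecture; it just mirrors the “query the right experts and mash up the results” description above.

```python
# Minimal, purely illustrative sketch of Mixture-of-Experts routing.
# The experts, gating heuristic, and weights are hypothetical; a real MoE
# learns its routing weights rather than using keyword matching.
from typing import Callable, Dict, List, Tuple

Expert = Callable[[str], str]

def make_expert(domain: str) -> Expert:
    """A stand-in 'expert model' specialized for one subject area."""
    return lambda prompt: f"[{domain} expert] answer to: {prompt}"

EXPERTS: Dict[str, Expert] = {
    "code": make_expert("code"),
    "math": make_expert("math"),
    "general": make_expert("general"),
}

def gate(prompt: str) -> List[Tuple[str, float]]:
    """Toy gating network: score each expert, keep the top two."""
    scores = {
        "code": 1.0 if "def " in prompt or "bug" in prompt else 0.1,
        "math": 1.0 if any(c.isdigit() for c in prompt) else 0.1,
        "general": 0.5,
    }
    total = sum(scores.values())
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(name, s / total) for name, s in ranked[:2]]  # top-2 routing

def moe_answer(prompt: str) -> str:
    """Query the top-weighted experts and combine ('mash up') their outputs."""
    parts = [f"{w:.2f} * {EXPERTS[name](prompt)}" for name, w in gate(prompt)]
    return " + ".join(parts)

print(moe_answer("Why does this bug happen in my def parse() function?"))
```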

Whether or not GPT-4 is actually “dumber,” OpenAI is also in the news this week due to a new investigation opened by the Federal Trade Commission.

The FTC is looking into whether ChatGPT has harmed consumers through its collection of data and publication of false information on individuals. The agency sent a 20-page letter to OpenAI this week with dozens of questions about how the startup trains its models and how it governs personal data.

The letter detailed how the FTC is examining whether OpenAI “engaged in unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers.”

===
What is your take on this issue? At DevDay, Sam Altman said that the company had “cut” prices on the new GPT-4 while delivering higher performance. But what about the user experience? Are people actually seeing slowdowns or improvements?


OpenAI’s promise, quoted from the “GPT-4 and GPT-4 Turbo” section of the OpenAI Platform model docs:

GPT-4 is a large multimodal model (accepting text or image inputs and outputting text) that can solve difficult problems with greater accuracy than any of our previous models, thanks to its broader general knowledge and advanced reasoning capabilities. GPT-4 is available in the OpenAI API to paying customers. Like gpt-3.5-turbo, GPT-4 is optimized for chat but works well for traditional completions tasks using the Chat Completions API. Learn how to use GPT-4 in our GPT guide.

| MODEL | DESCRIPTION | CONTEXT WINDOW | TRAINING DATA |
| --- | --- | --- | --- |
| gpt-4-1106-preview | **GPT-4 Turbo** (New) — The latest GPT-4 model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens. This preview model is not yet suited for production traffic. | 128,000 tokens | Up to Apr 2023 |
| gpt-4-vision-preview | **GPT-4 Turbo with vision** (New) — Ability to understand images, in addition to all other GPT-4 Turbo capabilities. Returns a maximum of 4,096 output tokens. This is a preview model version and not yet suited for production traffic. | 128,000 tokens | Up to Apr 2023 |
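For reference, a minimal sketch of calling these two preview models through the Chat Completions API with the openai Python library (v1.x). The prompt, image URL, and token limit are illustrative placeholders, and the vision call only works if your account has been granted access to gpt-4-vision-preview.

```python
# Minimal sketch: Chat Completions calls to the DevDay preview models.
# Requires the openai Python package (>= 1.x) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Text-only call to the GPT-4 Turbo preview
text_resp = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Summarize the FTC letter to OpenAI in one sentence."}],
    max_tokens=256,
)
print(text_resp.choices[0].message.content)

# Vision call to GPT-4 Turbo with vision (only if your tier has access)
vision_resp = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
        ],
    }],
    max_tokens=256,
)
print(vision_resp.choices[0].message.content)
```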


In my testing, you need to have paid at least $250 to access this new feature, which is a bummer, while ChatGPT-4 is still stuck at a combined input/output context length of 4k tokens.
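As a side note, a quick way to see how much of a given context window a prompt actually consumes is to count tokens locally with the tiktoken library. A minimal sketch, where the sample text and the 4k/128k budgets simply mirror the limits discussed in this thread:

```python
# Minimal sketch: count tokens to compare a prompt against a model's context window.
import tiktoken

def count_tokens(text: str, model: str = "gpt-4") -> int:
    enc = tiktoken.encoding_for_model(model)
    return len(enc.encode(text))

prompt = "Explain the Mixture of Experts idea in two paragraphs. " * 50  # arbitrary sample text
used = count_tokens(prompt)
for budget, label in [(4_096, "ChatGPT ~4k window"), (128_000, "gpt-4-1106-preview 128k window")]:
    print(f"{label}: {used}/{budget} tokens ({used / budget:.1%} used)")
```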

I keep seeing this $250 fee mentioned again and again. I am a plus user with no additional credits that I am aware of, and I received API (not Chat) and Playground access to gpt-4-1106-preview the day it was announced. I do not understand what this $250 barrier to API access is.

To be honest, it does feel like I get different models depending on the day, or even the time of day. Right now, it’s really fast (so fast that my last session was the first time I ever hit the 50-output limit, whereas I rarely used to go past 25 in a sitting), but the outputs seem less detailed and creative. Yesterday, I was consistently getting good outputs using a similar prompt (with only the specifics changed).

There’s also no question in my mind that there was a major change in the model’s functioning sometime last week just before the new UI was implemented, because I had to make a few adjustments to my prompt template to get anything close to what I was getting before. It’s actually pretty irritating. It’s at least the second time it’s happened, and the changes always seem worse when it does.

Edit: It seems better now. Not as good as last week, but better than what I got last session. Hope it doesn’t degrade again.

The new models and modes introduced at DevDay are not widely deployed or production-ready.

gpt-4-1106 models state “preview” right in the name, and are more limited in daily use than ChatGPT Plus is for a consumer.

The move away from the prior state of “all this stuff is super-secret” was immediate, but the move to “everyone can use these to the full degree planned” is happening slowly.

If the new developments are “nerfed”, unsatisfactory, error-prone, not a suitable replacement, or raise safety concerns, then this is exactly the roll-out one would hope for. If the disingenuous “better instruction following” CEO talk is not true, now is the time to say that your company cannot use the replacement slated to become the main model on December 11.

Tier-4 users appear to be the next group to trial gpt-4-vision-preview, after it had been exclusive to $13 billion investor Microsoft and a small selection of partners. Paying OpenAI $250+ in total on your account, provided you already meet the tier 3 requirements, puts you into that tier.
