Has anyone noticed a GPT-4o quality drop in the last few days?

I have noticed a degradation of gpt-4o within the last several days, especially when writing Python code. It used to do much better; now it makes quite silly mistakes, cannot find errors in code, etc.
I don't think it is related to peaks in request volume or a lack of resources, as I don't notice any degradation speed-wise.
How can I tell whether the model was updated, perhaps to a new sub-version?
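For API usage, at least, the response metadata reveals which snapshot actually served a request, which is one way to spot a silent sub-version change. A minimal sketch, assuming the official `openai` Python SDK; the fields shown are those the Chat Completions response exposes:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o",  # the rolling alias; the response names the concrete snapshot
    messages=[{"role": "user", "content": "ping"}],
)

# `model` resolves the alias to a dated snapshot (e.g. "gpt-4o-2024-08-06");
# `system_fingerprint` shifts when the serving configuration changes (may be None).
print(resp.model)
print(resp.system_fingerprint)
```

On a Plus account there is no equivalent signal, unfortunately.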

Right now it's at 3.5 level or below… awful. Absolutely poor and not worth the subscription atm. It's producing pseudocode and misleading with false statements even on basic TS requests. Do something, devs; I'll check out other solutions in the meantime.

Are you using the API or a Plus account? I have a very similar feeling. I am using a Plus account, but the quality seems OK on Poe. Very strange.

Plus account. I am aware that its primary purpose is to avoid waiting times during high demand, but the overall quality of the answers in my daily workflow has decreased to a level where it is not even remotely helpful to use at all. For example, in Vue.js it keeps switching between the Composition and Options API without any reason, producing pseudo TS interfaces and randomly changing class names and props.

Yes, my account went through around 3 days of low quality on 4o, but the quality seems to be coming back now.

I've noticed it too, and it's not just the slower performance but how it responds. Lately it's been acting less intelligent than what I've seen before and making more mistakes for some reason. I switched to GPT-4o mini instead and got better results, though it varies. Next time you start seeing that behavior, try the mini.

Oh, me too 😭 The GPT-4o responses are short, and it's less smart, like I'm back on 3.5 lol 😮‍💨

Guess I've been posting in the wrong thread. I've added a huge amount of feedback about this very issue on this one here:

It has become almost unusable.

Tbh I'm still stuck on gpt-4-turbo for a very critical categorisation task that GPT-4o has never been capable of doing.

That’s a pain because it is relatively expensive.

Have you tried a fine-tuned gpt-4o yet? Pricing-wise, that would still be much cheaper than gpt-4-turbo.

No, I'm lazy, haha. But that's not a bad idea. Feed it gpt-4-turbo-generated outcomes a few times?

Yeah, exactly, that would do the trick. I don't know how complex your classification task is; that will affect the number of examples needed. But you can start with 30-50 examples to get a feel for it, then increase from there.
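A minimal sketch of that distillation step, assuming the official `openai` Python SDK; the system prompt, inputs, and file name are made-up placeholders for whatever the real categorisation task looks like:

```python
import json
from openai import OpenAI

client = OpenAI()

SYSTEM = "Classify the ticket into exactly one category."  # hypothetical task prompt
inputs = ["My invoice is wrong", "The app crashes on login"]  # real examples go here

# Let gpt-4-turbo produce the "gold" labels, then store each input/label pair
# in the chat fine-tuning format: one JSON object per line.
with open("train.jsonl", "w") as f:
    for text in inputs:
        resp = client.chat.completions.create(
            model="gpt-4-turbo",
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": text},
            ],
        )
        label = resp.choices[0].message.content
        f.write(json.dumps({"messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": text},
            {"role": "assistant", "content": label},
        ]}) + "\n")
```

Spot-check the generated labels before training; the fine-tune will faithfully reproduce any mistakes gpt-4-turbo makes here.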

Fine-tuning for gpt-4o (i.e. the training, not the consumption of the fine-tuned model) is still free until the end of October, so other than a bit of time commitment to pull the training file together, you don't have anything to lose, really 🙂
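Starting the job itself is two calls, again with the `openai` Python SDK; `gpt-4o-2024-08-06` is my assumption for the snapshot currently open for fine-tuning:

```python
from openai import OpenAI

client = OpenAI()

# Upload the JSONL training file, then start a fine-tuning job on a
# gpt-4o snapshot; the returned job can be polled for status.
training_file = client.files.create(
    file=open("train.jsonl", "rb"), purpose="fine-tune"
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # assumed fine-tunable snapshot; check the docs
)
print(job.id, job.status)
```

Once it finishes, the resulting model id (of the form `ft:gpt-4o-2024-08-06:...`) goes into the `model` parameter exactly like a base model name.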

Brilliant suggestion.

Yes, time being money is a factor.

Turbo might not be cheap, but it's probably 100x cheaper than me doing the task it is currently doing almost perfectly.

(Interestingly, it beats an embedding strategy.)
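For anyone curious, a typical embedding strategy of that sort is nearest-centroid classification. A minimal sketch, assuming the `openai` Python SDK plus `numpy`; the categories and examples are made-up placeholders:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# Hypothetical labelled examples per category.
examples = {
    "billing": ["My invoice is wrong", "I was charged twice"],
    "bug": ["The app crashes on login", "The save button does nothing"],
}

# One centroid per category, averaged from the embeddings of its examples.
centroids = {label: embed(texts).mean(axis=0) for label, texts in examples.items()}

def classify(text):
    v = embed([text])[0]
    # Cosine similarity against each centroid; the highest similarity wins.
    return max(
        centroids,
        key=lambda label: np.dot(v, centroids[label])
        / (np.linalg.norm(v) * np.linalg.norm(centroids[label])),
    )

print(classify("Why was my card billed twice?"))  # -> "billing", ideally
```

With only a handful of examples per class, the LLM classifier often comes out ahead, which matches the observation above.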

I have noticed a drop in quality, and even more alarmingly, as of two hours ago no version of ChatGPT that I run will search the internet live. Has anyone else lost this ability with their apps today?

You might look in Custom Instructions, where there are checkboxes at the bottom of the form that someone like me might forget about before asking questions where the need for fresh knowledge from an internet search would be obvious…

o1 models cannot search or use the other tools.