GPT-4 Quality Fluctuation

I’ve noticed that there has been a massive dip in quality with the GPT-4-1106 Preview model. It used to produce very high-quality writing; now it seems really bad. Will it ever go back to normal? I notice this tends to happen every time they release a new model: it’s good for a little while, and then bad.

3 Likes

Have you tried the new gpt-4-turbo (aka gpt-4-turbo-2024-04-09)? It should have the best quality.
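
For example, a minimal sketch using the openai Python SDK (the system message and prompt are just placeholders) showing how to pin that dated snapshot instead of a moving alias:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-turbo-2024-04-09",  # pin the dated snapshot rather than the "gpt-4-turbo" alias
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},  # placeholder
        {"role": "user", "content": "Draft a short paragraph about autumn."},   # placeholder
    ],
)

print(response.choices[0].message.content)
```

Pinning a dated snapshot at least keeps the model fixed while you compare output quality across versions.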

Hi @ladylucile, I’ve experienced that many times, and not only with OpenAI but with other providers as well. I’ve never seen it come back. As I understand it, they improve inference, add guardrails, make changes to the architecture, etc. And honestly, I don’t think even OpenAI has full control over how those improvements change model performance. From my experience, the only thing you can do is try to adapt your prompts or switch to other models.

3 Likes

I have noticed it too. What I have seen GPT write lately seriously lags behind the output of GPT-3. I have yet to try GPT-4 Turbo. Currently it gives me the impression of being somewhat ‘disabled’ and censored, a little like watching a child who excels at something but for some reason holds back: you know it can do better, but it doesn’t want to. Google Bard, later Gemini, is the same, though Gemini has big differences in its output that put me off.

1 Like

Is it, though? Or just the crappy model made faster?