I’ve noticed a massive dip in quality with the GPT-4-1106-preview model. It used to produce very high-quality writing; now it seems really bad. Will it ever go back to normal? I notice this tends to happen every time they release a new model: it’s good for a little while, and then bad.
Did you try the new gpt-4-turbo (aka gpt-4-turbo-2024-04-09)? It should have the best quality.
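If you’re calling the API directly, you can pin that dated snapshot explicitly instead of relying on a floating alias, so a later silent upgrade can’t change the model under you. A minimal sketch with the official `openai` Python client; the prompt is just a placeholder:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Pin the dated snapshot rather than the floating "gpt-4-turbo" alias,
# so future silent upgrades don't change results under you.
response = client.chat.completions.create(
    model="gpt-4-turbo-2024-04-09",
    messages=[{"role": "user", "content": "Write a short paragraph about autumn."}],
)
print(response.choices[0].message.content)
```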
Hi @ladylucile , I’ve experienced that many times, and not only with OpenAI but with other providers too. I never saw the quality come back. As I understand it, they improve inference, add guardrails, make changes to the architecture, etc. And honestly, I don’t think even OpenAI has full control over how those changes affect model performance. In my experience, the only thing you can do is adapt your prompts or switch to other models.
I have noticed it too. What I have seen GPT-4 write lately seriously lags behind the output of GPT-3. I have yet to try GPT-4 Turbo. Currently it gives me the impression of being somewhat ‘disabled’ and censored. A little like watching a child learning or pursuing something he or she excels at, yet for some reason holding back: you know it can do better, but it doesn’t want to. Google Bard, later Gemini, is the same, though Gemini’s output has big differences that put me off.
Is it, though? Or just the same crappy model made faster?
Interesting thought. Even on my end, with my slow connection, the text came back a lot faster (I’m aware that time of day and overall usage are factors too); the response was almost instant, and the quality somewhat degraded. I have a chat history from GPT-3 to fall back on, so I can repeat some of those tasks as a comparison (I hadn’t thought of this until now); that would be an interesting experiment. I’d say some of its output has gone from a moderate adult reading level to a young-teenage level. In the limited time I have spent with GPT-4 and Turbo so far, I have seen a noticeable overall improvement.
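If you do run that comparison, a small script can replay the same prompts against two model snapshots side by side. A minimal sketch with the `openai` Python client; the model names and prompts here are just examples, substitute the ones from your own history:

```python
from openai import OpenAI

client = OpenAI()

# Prompts recovered from an old chat history (placeholders here).
prompts = [
    "Summarise the plot of Hamlet in one paragraph.",
    "Write a product description for a ceramic coffee mug.",
]

# Two dated snapshots to compare; pinning versions keeps the run reproducible.
models = ["gpt-3.5-turbo-0125", "gpt-4-turbo-2024-04-09"]

for prompt in prompts:
    print(f"### {prompt}")
    for model in models:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # reduce sampling noise so outputs are comparable
        )
        print(f"--- {model} ---")
        print(reply.choices[0].message.content)
```

Setting `temperature=0` doesn’t make the outputs fully deterministic, but it cuts down sampling noise enough that differences you see are more likely to reflect the model than the dice.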