ChatGPT 4o insists on delivering very long answers

Recently, the 4o model has generated excessively long responses during sessions.

I tried modifying the custom instructions, asking for short, objective answers, not to repeat entire answers already produced (regardless of the subject), and not to regenerate the same programming code, but it seems to obey this only during the first prompts of the session. If the conversation lasts more than a few interactions, it starts to get chatty. Examples:

a) I ask for an opinion on a coding approach, and it generates a lot of code on top of writing a whole article.
b) I ask about one line of a function, and it rewrites the entire function again.
c) I reprimand it, instructing it not to repeat itself, and it rewrites the entire answer, summarizing everything again, with no need.

But this goes beyond programming. The same thing happens with other subjects: it tends to write in the form of articles, with lists, bullet points, an introduction, final considerations, etc.

And if the session starts to get long, it stops obeying me altogether. Then I have to keep getting its attention with every answer, as if it were a stubborn child, with things like “stop regenerating this, just answer the last point” :joy:, so it regenerates shorter answers instead of needlessly rewriting everything. Or I have to start a brand new chat, losing all context. :weary:

Does this model only work well with short sessions? :fearful:

PS: I didn’t have these problems early in GPT-4’s life or right after the launch of 4o. Do these models tend to get worse over time?


Yes. In my experience, it does not do well at following instructions or keeping track of context, and it loves to repeat itself.


In the last few days I’ve noticed ChatGPT starting every new session with the 3.5 model selected, so I have to manually switch to 4o. Service usage doesn’t seem to have increased in recent weeks, according to specialized stats portals…

I remember noticing a drop in the quality of GPT-4 in the weeks before the release of 4o.

I also noticed a drop in the quality of MS Bing Chat (now Copilot) and its DALL-E generations before major updates around that same time.

Which raises a big question for me: do these models get worse with usage? Or are companies making things worse on purpose to build hype for adoption of the next update?
:man_shrugging:


Why would you say that? Now it’s giving very short responses. It used to reach 2,000 words in its responses, and now it’s trying so hard to shorten them!!

That was back on June 26, and a lot has changed since then. Nowadays, my experience with 4o is like this:

In a new chat, I ask something and the first response is always short, no matter how much context I provide or how insistently I ask for a long response. So I have to wait for that first response, which I already expect to be short, and then in a second iteration I ask for it to be redone, insisting on a longer version. Only then does it deliver what I need.

After that last post in June, in subsequent updates, the model began to respect the default setting of generating short responses.

However, today it’s the opposite of when I started this discussion: it insists on giving a short response to the first question, and only afterward does it follow the instructions to generate a longer one.

At least it’s better now, since I mostly need short responses anyway. It doesn’t bother me to ask for a response to be redone longer when that’s necessary.
