GPT-4o is stuck in a loop and unusable

I had to come up with my own name for this, “stuck in a loop”, which you’ll discern below: as a ChatGPT conversation gets longer, the (post-DevDay cheapening) AI quickly degrades, and the model starts referring back incorrectly, with narrow vision, to past messages instead of seeing the forest and the progression leading up to what the latest input actually needs.

ChatGPT no longer has the “forgets what you said after 3 turns” style of chat management - but maybe it SHOULD have that option.

Fortunately, there’s still GPT-4 on the API (until June 2025?), which hasn’t been hit as badly.
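If you manage history yourself on the API, you can also give yourself the “only remember the last few turns” option ChatGPT dropped. A rough sketch of what I mean, assuming the Python openai SDK; the MAX_TURNS cutoff and the chat() helper are just names I made up for illustration:

```python
from openai import OpenAI

client = OpenAI()

MAX_TURNS = 3  # how many recent user/assistant exchanges to keep sending


def chat(history, user_input, model="gpt-4"):
    """Send only a truncated view of the conversation, so old messages
    can't drag the model back to earlier context."""
    history.append({"role": "user", "content": user_input})
    system = [m for m in history if m["role"] == "system"]
    recent = [m for m in history if m["role"] != "system"][-(MAX_TURNS * 2):]
    response = client.chat.completions.create(model=model, messages=system + recent)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```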

The o1 models are proving far worse at this; you might as well paste into a new session for every question and iteration. An o1-xxx “regenerate” in ChatGPT is generally better than what another model answered at that point, but you only get one go at good quality.
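And “a new session for every question” is easy enough to script against the API too; another rough sketch (the o1-preview model name is just what I had access to, swap in whatever you use):

```python
from openai import OpenAI

client = OpenAI()


def ask_fresh(question, model="o1-preview"):
    """Ask each question in its own single-message conversation,
    so no earlier turns can drag the answer off course."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```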

Someone needs to make a “long conversation” benchmark, instead of the “can answer one question” benchmarks that AI companies compete on and tout…
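The rough shape of what I have in mind: run a scripted sequence of related prompts through ONE ongoing conversation and score every turn, not just the first. The `turns` list and the `grade()` scorer below are placeholders, obviously:

```python
from openai import OpenAI

client = OpenAI()


def long_conversation_benchmark(model, turns, grade):
    """Feed a scripted sequence of related prompts into one ongoing
    conversation and score every answer, not just the first one.
    `turns` is a list of (prompt, reference) pairs and `grade` is whatever
    scoring function you trust (exact match, a judge model, ...) --
    both are placeholders here."""
    history = []
    scores = []
    for prompt, reference in turns:
        history.append({"role": "user", "content": prompt})
        response = client.chat.completions.create(model=model, messages=history)
        answer = response.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        scores.append(grade(answer, reference))
    return scores  # plot score vs. turn number to see where quality falls off
```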