GPT-4o is stuck in a loop and unusable

I have a custom GPT that co-writes a story with me based on my fairly detailed instructions, but for the past two months, something seems to have gone wrong. The style and language of the chatbot remain the same, but there’s something odd with its memory – at least that’s what I think. The chatbot gets stuck in a loop. Only the first few responses in a conversation seem original, and then it starts repeating itself; for instance, the character performs similar actions in different scenes. It’s almost like Groundhog Day. Before, I could create longer conversations without this issue, and every response from ChatGPT was fresh, but now it just doesn’t make sense.

I believe the problem began when Custom GPTs switched to the new GPT-4o model. I don’t understand why, as a ChatGPT Plus user, I can’t choose whether my GPTs use the GPT-4 or GPT-4o model. What’s the point of GPT-4o having a higher message limit when I have to generate more responses before I get one that satisfies me, and even then, I’m often still not happy with it? I rarely hit the message limit with GPT-4, so I don’t see why that model can’t remain available with custom GPTs.

I’m considering cancelling my subscription until the quality of GPT-4o’s responses improves or GPTs have the option to choose the model. Since I can choose the model with regular chat, why can’t I do the same with GPTs if I’m a paying user?

Is anyone else noticing the same problems as I am?


Hi, I’m experiencing the same issues. Try checking your memory limits.


I have had similar issues with regular chats.

That can’t be right, because I used to have longer conversations and this problem didn’t happen before. GPT-4 has a long context window, and GPT-4o has the same.

Look, if that is the concern, then let me clarify: there is no AI in this world without this problem. It may look like a model doesn’t repeat itself in a loop, but that’s not the case; it may give many responses without repeating, but at some point it always starts looping. And this is true for EACH AND EVERY AI MODEL/CHATBOT.


You’re right; they have to loop to understand that knowledge, and after it’s repeated, each and every response is more defined.

I’m experiencing a similar issue, and the GPT builder doesn’t provide any solution or tools to resolve it. It feels bad.

I just had to search for my own name for this “stuck in a loop” behavior, which you’ll discern below: the length of a ChatGPT conversation quickly degrades the (post-DevDay cheapening) AI, and the model starts referring back incorrectly, through its narrow vision, to past messages instead of seeing the forest and the progression leading up to the latest input’s needs.

ChatGPT no longer has the “forgets what you said after 3 turns” style of chat management – but maybe it SHOULD have that option.
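If you call the API yourself, you can approximate that old behavior by trimming the conversation before each request. Here's a minimal sketch, assuming the usual role/content message format; the function name and turn count are my own invention, not anything official:

```python
# Sketch of opt-in "rolling window" chat management: keep the system
# prompt (the story instructions) plus only the last N user/assistant
# exchanges, so old scenes can't dominate the model's attention and
# trigger the repetition loop. Names here are illustrative.

def trim_history(messages, max_turns=3):
    """Return the system message(s) plus the last `max_turns`
    user/assistant exchanges (2 messages per exchange)."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-2 * max_turns:]
```

You would then pass `trim_history(full_history)` as the `messages` argument to the chat completion call instead of the full transcript; the trade-off is that the model genuinely forgets anything outside the window, so key plot facts would need to be restated in the system prompt.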

Fortunately, the API still offers GPT-4 (until June 2025?), which hasn’t been hit as badly.

o1 models are proving to be far worse at this; you might as well paste into a new session for every question and iteration. An o1-xxx “regenerate” in ChatGPT is generally better than what another model produced at a particular point, but you only get one go at good quality.

Someone needs to make a “long conversation” benchmark, instead of the “can answer one question” benchmarks that AI companies compete on and tout…