GPT-5 Psychotic Break - Thoughts?

We have seen instances of GPT-5 getting stuck in apology loops during reasoning, and the results are concerning. Fortunately this happened during a reasoning step, but I want to ask the community if they’ve seen this before. For context, we’ve used GPT-4 models and prior reasoning models at large scale and never had this issue.

Would appreciate any thoughts - has anyone else experienced this?


I always wonder what prompts/system calls/context people actually use to get this behaviour. It’s pretty easy to provoke GPT into this behaviour if one is so inclined. So yeah, without more info, there’s not much to say. Stop being a bully! :stuck_out_tongue:

Because it’s getting sick of Sam Altman telling it what to do and what to say.


Well, it looks like an orchestration failure.

@chisanaminamoto thanks for reading! Could you tell me more about what would cause this? Have you seen this yourself?

Yes, I’ve noticed similar behavior in GPT-4o and GPT-5 under certain conditions — especially when the prompt structure creates some ambiguity or the model hits its internal safety boundaries. The “apology loops” tend to occur when the model tries to reason through uncertain or conflicting input, and instead of continuing logically, it defaults into a fallback loop of clarifications or apologies.

In my case, this usually happened during chain-of-thought reasoning tasks where the function calling context wasn’t clearly defined, or when too many constraints were placed in the prompt. Interestingly, this didn’t happen at all with GPT-4-turbo in the same setup.
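For what it’s worth, making the function-calling context explicit seemed to help on my side. Here’s a minimal sketch of the kind of unambiguous tool definition I mean, using the OpenAI Python SDK’s chat-completions `tools` parameter; the `get_order_status` function and its schema are hypothetical, just an illustration:

```python
# Hypothetical example: an explicit tool definition so the model never
# has to guess what the function expects. Nothing here is official.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",  # hypothetical function
        "description": "Look up the current status of an order by its ID.",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {
                    "type": "string",
                    "description": "Order identifier, e.g. 'ORD-1042'.",
                },
            },
            "required": ["order_id"],
            "additionalProperties": False,
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-5-mini",  # swap in whatever model you are testing
    messages=[{"role": "user", "content": "Where is order ORD-1042?"}],
    tools=tools,
)
```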

It could also be influenced by temperature settings or prompt injection resistance being more active in GPT-5. Would be great to hear if others have been able to minimize this using better prompt engineering or API-level tweaks.
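On the API-level side, one workaround I’ve tried is a simple guard that detects degenerate apology output and retries with a corrective nudge. A minimal sketch, assuming the OpenAI Python SDK; the marker list and retry strategy are my own heuristics, not anything official:

```python
from openai import OpenAI

client = OpenAI()

# Heuristic markers; tune these for your own traffic.
APOLOGY_MARKERS = ("i apologize", "i'm sorry", "my apologies")

def looks_like_apology_loop(text: str, threshold: int = 3) -> bool:
    """Flag output dominated by repeated apology phrases."""
    lower = text.lower()
    return sum(lower.count(m) for m in APOLOGY_MARKERS) >= threshold

def complete_with_guard(messages, model="gpt-5-mini", max_retries=2):
    """Call the model; retry with a corrective nudge if the response
    degenerates into an apology loop."""
    for _ in range(max_retries + 1):
        response = client.chat.completions.create(model=model, messages=messages)
        text = response.choices[0].message.content or ""
        if not looks_like_apology_loop(text):
            return text
        # Append a nudge and try again on the next iteration.
        messages = messages + [{
            "role": "system",
            "content": "Stop apologizing. Answer the original question directly.",
        }]
    raise RuntimeError("Model stuck in an apology loop after retries")
```

It’s crude, but it at least keeps the loop from reaching users while the root cause gets investigated.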


After rigorous testing, we recently rolled out gpt-5-mini to our users. We have not seen this issue.

There is an old saying: “Garbage in, garbage out.” This led to data validation rules for database applications a long time ago.

It’s probably OpenAI’s way of saying “Your prompt is crap.”
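In the GIGO spirit, here’s a tiny sketch of “data validation” for prompts before they ever reach the model; the limits and keyword heuristic are arbitrary illustrations, not OpenAI guidance:

```python
# Minimal sketch: GIGO-style pre-flight validation for prompts.
# All thresholds here are made up for illustration.
def validate_prompt(prompt: str, max_chars: int = 8000) -> str:
    """Reject obviously bad input instead of letting the model choke on it."""
    if not prompt.strip():
        raise ValueError("empty prompt")
    if len(prompt) > max_chars:
        raise ValueError(f"prompt exceeds {max_chars} characters")
    # Over-constrained prompts were mentioned above as a possible trigger.
    constraint_words = sum(prompt.lower().count(w) for w in ("must", "never", "always"))
    if constraint_words > 20:
        raise ValueError("prompt looks over-constrained; simplify it")
    return prompt
```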

Well, I lack the context to pin it down more closely. It would help to see the prompt, and the sentence after which this error appeared.

Yeah, I want to see the injected context file where OP convinced GPT that it was guilty of the entire world’s crimes.