We have seen instances of GPT-5 getting stuck in apology loops during reasoning, and the results are concerning. Fortunately this happened during a reasoning step rather than in user-facing output, but I want to ask the community whether they've seen this before. For context, we've used GPT-4 models and prior reasoning models at large scale and never hit this issue.
Would appreciate any thoughts - has anyone else experienced this?
I always wonder what prompts, system messages, and context people are actually using when they get this behaviour. It's pretty easy to provoke the model into apology loops if one is so inclined. So yeah, without more info there's not much to say. Stop bullying the model!
Yes, I've noticed similar behavior in GPT-4o and GPT-5 under certain conditions, especially when the prompt structure is ambiguous or the model hits its internal safety boundaries. The "apology loops" tend to occur when the model tries to reason through uncertain or conflicting input and, instead of continuing logically, falls back into a loop of clarifications or apologies.
In my case, this usually happened during chain-of-thought reasoning tasks where the function calling context wasn’t clearly defined, or when too many constraints were placed in the prompt. Interestingly, this didn’t happen at all with GPT-4-turbo in the same setup.
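For what it's worth, one mitigation that's worked for me is detecting the loop client-side and aborting the run early. Here's a minimal sketch of what I mean, assuming the standard `openai` Python SDK; the marker phrases, threshold, and model name are placeholders to adapt, not a recipe:

```python
from openai import OpenAI

APOLOGY_MARKERS = ("i apologize", "i'm sorry", "my apologies")  # heuristic list
MAX_REPEATS = 3  # give up after this many apology phrases

client = OpenAI()

def stream_with_loop_guard(messages, model="gpt-4o"):
    """Stream a chat completion, aborting if apologies keep repeating."""
    buffer = ""
    stream = client.chat.completions.create(
        model=model, messages=messages, stream=True
    )
    for chunk in stream:
        if not chunk.choices:
            continue  # some chunks (e.g. usage) carry no choices
        buffer += chunk.choices[0].delta.content or ""
        hits = sum(buffer.lower().count(m) for m in APOLOGY_MARKERS)
        if hits >= MAX_REPEATS:
            stream.close()  # stop consuming tokens; treat the run as stuck
            raise RuntimeError("model appears stuck in an apology loop")
    return buffer

# usage: stream_with_loop_guard([{"role": "user", "content": "..."}])
```

Crude, but it at least turns a silent runaway into a retryable error instead of a burned context window.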
It could also be influenced by temperature settings, or by prompt-injection resistance being more aggressive in GPT-5. Would be great to hear if others have been able to minimize this with better prompt engineering or API-level tweaks, something like the sketch below.
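To make the API-level side concrete, here's roughly what I've tried, again a sketch under assumptions rather than a definitive fix: on a suspected loop, retry once with a stripped-down prompt and a low temperature. The `looks_stuck` heuristic, model name, and temperature value are mine, and note that some reasoning models reject the `temperature` parameter entirely, so check your model first:

```python
from openai import OpenAI

client = OpenAI()

def looks_stuck(text, marker="sorry", threshold=3):
    """Crude heuristic: repeated apologies suggest a loop."""
    return text.lower().count(marker) >= threshold

def robust_completion(messages, model="gpt-4o"):
    """One-shot call with a single low-temperature, trimmed-prompt retry."""
    resp = client.chat.completions.create(model=model, messages=messages)
    text = resp.choices[0].message.content or ""
    if looks_stuck(text):
        # Retry with fewer constraints: keep only the last user message,
        # and pin sampling down. Both choices are assumptions to tune.
        minimal = [m for m in messages if m["role"] == "user"][-1:]
        resp = client.chat.completions.create(
            model=model, messages=minimal, temperature=0.2
        )
        text = resp.choices[0].message.content or ""
    return text
```

Dropping constraints on the retry matches my earlier observation that over-constrained prompts were the trigger in the first place.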