Since yesterday I have suddenly been experiencing quite a few issues with GPT-5:

- The quality has degraded noticeably. Things that ran fine for weeks suddenly contain errors, requiring me to tune my prompts to mitigate them.
- It makes quite stupid mistakes, for example claiming that if you move your desk to the front wall, the distance from the back wall decreases.
- I am seeing cases where the API returns an error because the previous response ID is not known, even though I received that ID from the API earlier. A retry usually fixes it (see the sketch below).
- I have seen quite a few cases where I am suddenly returned an empty response. If I ask what went wrong, the model just replies: “Sorry, that was a glitch on my side.”
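
For what it's worth, a blunt retry wrapper has papered over both the unknown-response-ID error and the empty replies for me. This is only a minimal sketch against the official openai Python SDK's Responses API; the model name, backoff values, and the broad APIError catch are placeholders you would tune to the exact errors you see:

```python
import time

from openai import OpenAI, APIError

client = OpenAI()

def create_with_retry(prompt, previous_response_id=None, max_attempts=3):
    """Call the Responses API, retrying the two failure modes above:
    a rejected previous_response_id and an empty reply."""
    kwargs = {"model": "gpt-5", "input": prompt}  # model name is a placeholder
    if previous_response_id is not None:
        kwargs["previous_response_id"] = previous_response_id

    for attempt in range(1, max_attempts + 1):
        try:
            response = client.responses.create(**kwargs)
            # Guard against the empty responses described above.
            if response.output_text and response.output_text.strip():
                return response
        except APIError:
            # Covers the intermittent "previous response not known" error;
            # consider narrowing this to the exact error class you observe.
            if attempt == max_attempts:
                raise
        if attempt < max_attempts:
            time.sleep(2 ** attempt)  # simple exponential backoff

    raise RuntimeError(f"empty response from the model after {max_attempts} attempts")
```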
Yeah, I’ve noticed a bit of that too over the last day or so. Some of my previously stable prompts started giving weird or inconsistent results — small logical slips and occasional empty responses.
It feels like something might’ve changed on the backend (maybe a silent update or model refresh). The API error about “unknown response id” also popped up for me a few times, so it’s probably not on your end.
I’d suggest keeping an eye on the OpenAI status page and the API changelog; they sometimes roll out updates that cause temporary instability. A quick retry usually fixes it, but yeah, you’re not the only one seeing odd behavior.
Ditto. We have been experiencing the exact same issues: sometimes getting no response, which has never happened before. Also, instruction prompts are sometimes being ignored.
We have been using gpt-5-mini with the web search tool in the Responses API. We are also getting replacement characters, i.e., the Unicode replacement character U+FFFD (a black diamond containing a white question mark), in the HTML response.
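
In case it helps while this gets sorted out, here is a tiny sketch of how we detect and strip the mangled characters. It assumes the HTML arrives as a Python str and only hides the symptom, since the text was evidently mis-decoded upstream before reaching us:

```python
# U+FFFD is the Unicode replacement character: the black diamond with a
# white question mark that renders when bytes could not be decoded.
REPLACEMENT_CHAR = "\ufffd"

def scrub_replacement_chars(html: str) -> str:
    """Strip U+FFFD characters from an HTML string."""
    if REPLACEMENT_CHAR in html:
        # Flag the response here if you want to track how often it happens.
        html = html.replace(REPLACEMENT_CHAR, "")
    return html
```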
Pretty much the same for me. It used to be reliable and good enough, but it has become unreliable, and sometimes it generates the same topic over and over again.