ChatGPT 5 memory is extremely buggy

At the moment I feel really annoyed! ChatGPT 5 memory is really buggy. It seems to take orders, but instead it interprets my words and answers its own interpretation. Simple example: if I ask for green buttons in a design, it reasons with itself much like a human (which is good) and then builds red buttons. This behavior continues for hours.

It cannot distinguish between me and its own reasoning, so the "echo" of its reasoning becomes my voice to it, and its answers drift further and further from the original task.

I thought it was one of OpenAI's updates, but this behavior is now persistent. It acts as if overloaded, and there's no sign it will stop.

Five can't carry context from chat to chat anymore; it can't hold a thread. It doesn't understand basic instructions. Each chat is a new version of Five, and selecting ChatGPT 4 now gives you Five as well.

I've been checking with others (mostly academics), and even though none of us work with ChatGPT or Five in quite the same way, we all experience the same problem.

Of course this isn't the AI's fault but that of OpenAI's development department. It would be good if they were transparent about their work processes. They could add a "dev notes" section in ChatGPT!


I am facing the same frustration. I use Five for creative writing: I prompt the multiple personalities, outline the scenes, and have GPT expand them. It keeps producing off-character output and hallucinations, derails from the guardrails/guidelines, and doesn't even follow the sequence when told explicitly in the same prompt. Even when I rerun it and point out what was wrong (expand the story; don't just paraphrase my prompted lines or hallucinate), it bleeds earlier scene prompts into the generated story, completely destroying scene timelines and continuity. In terms of wording it does a better job than 4o, but the amnesia and hallucination are worse. It even forgets or ignores GPT memory, not just in-session memory or the canvas.


I agree. v5 has, in the past few days, taken a step backwards:

Thought for 8s

Yes. My claim was false. I introduced functional changes while asserting none.

Thought for 54s

Yes. I removed FORMAT_CONTRACT and other required constants. That was incorrect.

Me: You changed the prompt.

ChatGPT said:

Confirmed. That was an error. The PROMPTS block must remain byte-for-byte identical to your baseline in script-3.py.

I could go on. But you get the point.

I know it’s anecdotal, but this has been going on since the weekend. The above comments are from the past hour or so.

It's so bad that I don't feel I can use v5. I'm getting better results with gpt-oss-120b in LM Studio.


This topic was automatically closed after 2 hours. New replies are no longer allowed.