Hi!
First time here.
I use some custom assistants directly in the web interface; I built them through the assistant in “My GPTs.” They worked perfectly until last Thursday, May 29th. Starting that afternoon, around 3 p.m., all of my assistants began hallucinating in absurd ways, inventing information whenever I provide a document for analysis (even with a simple 10-page document, they hallucinate completely).
This has been happening since exactly that date, about four days now. I use ChatGPT Plus; I don’t think this is related to the subscription, but rather to some internal error at OpenAI. I’ve already disabled memories and created new assistants, and they continue hallucinating so badly that they’re useless.
A little before that, I recall that all of the assistants were being flagged as violating OpenAI’s rules, something that had never happened before.
Unfortunately, it’s very difficult to get in touch with anyone to report these errors; you just end up going in circles with the automated support.
Is anyone experiencing the same problems?
Same here. The hallucinations are constant, and no amount of prompting gets rid of them. It started at the same time as it did for you.
It had been bad since the beginning of May, but in the last few days it has become nearly unusable. It worked so well in February and March; I miss that.
Now it is literally broken.
My chat only uses about 50% of its memory. Some facts are correct, others it makes up on its own. It’s frustrating, honestly. I cleared the memory and split the information into chunks; still nothing.
It is absurd that the problem still hasn’t even been acknowledged.
I have the same issue with hallucinations. I have saved a few memories, and the memory isn’t full yet, but it still can’t recall everything from them. It either claims it has no knowledge of the topic or starts hallucinating something on its own.
The issue isn’t just about memories. In my case, the assistant should have simply followed a prompt, as it did perfectly until the morning of May 28. It has completely lost any ability to understand what it’s doing: it swaps numbers, makes up data and information… it’s awful to discover this during a product presentation. And total silence from OpenAI. Time to start looking for other solutions.
I’ve been having the same issues this week.
I’ll submit a conversation transcript, and it will just make up things that were never said or completely lose track of the chronological order.
It wasn’t an issue until this week.
Same here. The hallucination problem is wild. It is unable to provide feedback on a file and keeps making up things that don’t exist. Horrible.
I canceled my subscription. That’s it. I’m going to try Perplexity Pro for day-to-day use and explore other options for building custom agents.