Since the March 18 update, none of my custom GPTs seem to be using the knowledge files I’ve uploaded. Where they previously gave accurate, source-based answers clearly grounded in the uploaded content, they now hallucinate, responding with information that doesn’t match what’s actually in the files.
The issue is easy to reproduce: ask a GPT to directly quote or summarize a specific section or page from one of its knowledge files. Instead of returning the actual content, the GPT produces made-up or inaccurate responses that don’t reflect the file at all. It seems the model is either unable to read the knowledge files correctly, or is simply ignoring them.
This is a major problem for anyone who relies on custom GPTs for documentation, training materials, internal guides, or any other file-based expertise.
Is anyone else experiencing this? And has anyone found a workaround or explanation?
All my knowledge files are in PDF format.