ChatGPT file upload and reading tools still broken

Here we go again! It seems I'm just going to have to keep repeating this message: GPT-4 file reading is broken, and OpenAI is pretending it isn't.

Let’s be absolutely clear up front:
I’m a long-time ChatGPT Plus user and professional writer using GPT-4 to co-author a novel. I’ve worked with GPT-4 and GPT-4o extensively across multiple sessions, using document uploads to review, quote, and analyze book-length manuscripts—until recently, when that functionality broke without warning.

Here’s what’s happening now:
• PDF and DOCX files upload just fine.
• The system says they’re accepted and indexed.
• But at first GPT-4 could only read the first 5–6 pages of the document; now it can't read anything accurately at all.
• Beyond that point, any attempt to summarize or quote produces hallucinated content. GPT-4 claims to "see" chapters or text it cannot actually verify or extract.
• It fabricates scenes, plots, even dialogue—presenting lies as truth.

This is not a formatting problem. Not a size issue. Not user error.

This is a file parsing failure on OpenAI’s backend that’s being completely ignored.

I reported this to support and got nothing but corporate stonewalling:
• No acknowledgment of a tool regression
• No confirmation of an internal issue
• No ticket, no ETA, no fix
• Just vague “we’ll log your feedback” responses
• And a message saying, “There is no known outage,” while the model lies about seeing content in files it never parsed

Let me repeat that:
GPT-4 is giving false confirmations about document content. That’s a data integrity failure, not a UI bug.

This is a major problem for real users:
• Authors like me can no longer use GPT-4 for book editing
• Teachers, lawyers, researchers, and anyone needing accurate doc analysis are being misled
• There has been no communication from OpenAI about any change, limitation, or downgrade
• Users have no visibility into what’s broken, what’s being worked on, or what’s been silently disabled

My asks to OpenAI:

  1. Publicly acknowledge that file reading is currently degraded or broken
  2. Stop the model from pretending it sees full content when it doesn’t
  3. Reinstate full document parsing for ChatGPT Plus users ASAP
  4. Restore trust by communicating with your creative and professional user base

If you’ve experienced this—say something.
We’ve tested it across models, formats, and sessions. It’s not a fluke. It’s a broken promise.

I’m standing by to help confirm or replicate anyone else’s failures. But we need OpenAI to stop acting like nothing is wrong. This affects paying customers relying on GPT-4 to do real work.

Raise your voice. Don’t let this get swept under the rug.

I will keep generating messages like this until there is some kind of accountability.

Same, this is driving me nuts! None of the models will do the translation I'm asking for. Instead, they come back with odd questions like this:

It looks like you’ve shared a “lab test whitelist” PDF listing common abnormal test combinations (e.g., high TG + low HDL → ApoB/Lp(a) testing) along with follow-up recommendations and sources. What would you like me to do with this? For example, I can:

  • Summarize the key test-trigger ⇒ follow-up recommendations
  • Convert the table into structured JSON or a spreadsheet
  • Translate it into English
  • Integrate it into a decision-support format

Let me know which would be most helpful!

WTF??!?!?!?!?!?!