I’m experiencing consistent issues when uploading Excel files (.xlsx) to ChatGPT Pro. Every attempt to upload even a very small Excel file results in the following errors:
OpenBLAS blas_thread_init: pthread_create failed for thread X of 32: Resource temporarily unavailable
MemoryError
Issue details:
The problem occurs consistently with every Excel file (.xlsx).
It happens regardless of file size or complexity.
ChatGPT previously handled similar files without problems.
Steps I’ve tried:
Re-uploading files multiple times.
Using smaller or simpler Excel files.
Restarting browser, clearing cache, using different devices.
None of these solved the issue.
Context:
According to my recent research, a few other Pro subscribers encountered similar errors during late April – early May 2025. It appears to be a recurring technical issue specific to OpenAI's servers.
Getting the same issue. I have a GPT with just a couple of files, each under 5 MB, and it was working amazingly until 5/4/25. Even a long-standing GPT with a small Excel file started throwing the same errors. I also converted the files to CSV and am having the same issue.
Also seeing this from the API, and it's breaking some core processes for our app. Is there any way to escalate this to OpenAI? I'm not seeing it reflected on their status page (which has often seemed inaccurate in the moment).
It’s virtually unusable. Several of my queries have returned badly and randomly pruned results. The problem is not just Excel; it affects CSV as well.
Yeah, I’ve tried all of the Custom GPTs under my Team license that include data files, and none of them are working. “There was a memory issue while processing the full dataset due to resource limits.”
I get this message from ChatGPT 4o for a trivial test CSV file: The system has completely run out of resources — even for a tiny CSV — and was forced to shut down the session. This confirms that file loading of any kind is currently impossible in this environment.
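Until this is fixed server-side, one prompt-level workaround that sometimes reduces peak memory pressure is to ask ChatGPT to stream the CSV row by row with the standard-library csv module instead of loading the whole file through pandas. A minimal sketch (the helper name and chunk size are my own, not anything OpenAI documents):

```python
# Hypothetical sketch: stream a CSV in fixed-size chunks using only the
# standard library, so no single large pandas/NumPy allocation is needed.
import csv
import io
from itertools import islice

def process_in_chunks(f, chunk_size=1000):
    """Yield (header, rows) pairs with at most chunk_size rows per pair."""
    reader = csv.reader(f)
    header = next(reader)
    while True:
        chunk = list(islice(reader, chunk_size))
        if not chunk:
            break
        yield header, chunk

# Tiny in-memory example standing in for a real uploaded file
data = io.StringIO("a,b\n1,2\n3,4\n5,6\n")
for header, rows in process_in_chunks(data, chunk_size=2):
    print(header, rows)
```

This only helps if the sandbox is short on memory rather than on threads; it does nothing about the pthread_create failures themselves.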
Seems a bit suspicious that it is trying to create 32 threads and fails on thread number 16. Did someone halve the number of threads available without telling ChatGPT not to try for 32?
Does anyone have a workaround that would make OpenBLAS behave while someone fixes this more thoroughly? I have tried prompting it to process smaller chunks of data, but to no effect.
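One thing worth trying: OpenBLAS honors the OPENBLAS_NUM_THREADS environment variable, but only if it is set before the library initializes, i.e. before the first import of numpy or pandas. Asking ChatGPT to run something like the following at the very start of the session might keep it from attempting 32 threads (this is a sketch of the standard OpenBLAS knob, not a confirmed fix for the sandbox):

```python
# Cap OpenBLAS's thread pool BEFORE numpy/pandas are imported; once the
# BLAS library has initialized, these variables have no effect.
import os

os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"  # same cap for OpenMP-based builds

import numpy as np

# Sanity check: a BLAS-backed matmul still works single-threaded.
a = np.ones((100, 100))
print(float((a @ a)[0, 0]))  # each entry is a sum of 100 ones -> 100.0
```

If the session has already imported numpy behind the scenes, this won't help, which may be why chunking alone had no effect either.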