Sustained stability and reliability issues with GPT-4.5 and the code interpreter — official response required
Dear OpenAI Community,
As paid Pro users who rely on ChatGPT’s Python code interpreter and GPT-4.5 for mission-critical tasks (advanced data workflows, GPT-assisted code generation, long-running analysis), we have recently experienced significant performance issues. Specifically:
- Frequent "MemoryError" and "pthread_create failed for thread X of 32" errors in the code interpreter (the interim workaround we have been using is sketched after this list).
- A noticeable decline in the overall reliability of GPT-4.5, resulting in reduced context retention, increased hallucinations, and repeated file parsing failures.
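For anyone hitting the same errors, the interim workaround we have been using is to cap native thread pools and stream large files in chunks instead of loading them whole. The sketch below is only illustrative: the file name, column names, chunk size, and environment-variable caps are our own assumptions, not anything documented or recommended by OpenAI.

```python
# Hedged workaround sketch, not an official fix. File name, column names,
# chunk size, and thread caps below are illustrative assumptions.
import os

# Cap native thread pools before importing numpy/pandas, since the sandbox
# appears to reject new threads ("pthread_create failed for thread X of 32").
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"

import pandas as pd

totals = None
# Stream the file in chunks rather than loading it all at once, to stay
# under the sandbox memory limit that otherwise raises MemoryError.
for chunk in pd.read_csv("large_input.csv", chunksize=100_000):
    part = chunk.groupby("category")["value"].sum()
    totals = part if totals is None else totals.add(part, fill_value=0)

print(totals)
```

This is only a mitigation on our side; it does not address the underlying regressions.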
While we understand that the Python code interpreter is provided as a flexible tool with no explicit fault tolerance, these issues exceed what is acceptable for a professional-grade service.
We have contacted OpenAI support directly and received a general explanation regarding temporary resource constraints, but several critical issues remain unresolved:
- Lack of prior notification regarding changes to runtime behavior or resource limits.
- Sudden performance degradation without official acknowledgment.
- Ambiguity regarding the scope of official support for the code interpreter and the operational expectations placed on users.
To ensure the continued reliability and effective use of these services, we request that OpenAI take the following actions:
- Clearly define the scope and limitations of official support for the code interpreter.
- Provide a detailed public explanation of the root causes, the corrective measures already implemented, and the preventive measures planned.
- Revert the MyGPT-compatible model to GPT-4 Turbo, or promptly improve the stability of GPT-4.5.
Have other users experienced similar reliability issues recently? We believe that a joint request for transparency and clear guidelines would benefit the entire community.
Thank you for your attention. We would appreciate hearing your thoughts and experiences.
Best regards,