Hello OpenAI Community,
I’m encountering consistent “Request timed out” errors when using the gpt-4o-mini
model via the API. Here’s some context:
- Tier: Tier 4
- Model: gpt-4o-mini
- Usage: Approximately 400 RPM, well within the limits of 10,000 RPM and 10,000,000 TPM
- Purpose: Processing large text payloads to identify brands; reducing payload size isn’t feasible due to the nature of the task
- Response Handling: Not using streaming responses, as the application aggregates data for statistical analysis
I’ve ensured that the request payloads are within token limits and have adjusted client-side timeout settings to accommodate longer processing times. Despite these measures, the timeouts persist.
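For reference, the timeout adjustment looks roughly like this (a sketch assuming the official `openai` Python SDK v1; the values are illustrative, not a recommendation):

```python
from openai import OpenAI

# Raise the per-request timeout above the SDK default (10 minutes total)
# and let the SDK retry transient connection errors/timeouts on its own.
client = OpenAI(
    timeout=120.0,   # seconds per request (illustrative value)
    max_retries=3,   # SDK-level automatic retries
)

# A per-call override is also possible:
# client.with_options(timeout=300.0).chat.completions.create(...)
```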
Has anyone else faced similar issues with gpt-4o-mini under these conditions? Any insights or suggestions would be greatly appreciated.
Thank you!
I’m facing a similar issue and can’t figure out why it’s happening. Were you able to resolve it?
Are you getting the response from GPT in JSON format?
I’ve also been getting a lot of timeouts, starting on June 3rd. This is really disrupting service.
Put your large text data dumps in a .txt file and upload that instead of adding them to the message directly; that should help.
Any update on this?
I have only managed to get 4o-mini to work a few times through the API. The “Request timed out” errors have been so frequent that I’ve ultimately given up on using 4o-mini via the API altogether.
I have implemented an internal retry mechanism; that’s how I’m handling it. OpenAI support suggested this to me.
Other than that, I think the Batch API is also a good option if your use case allows for it.
Also, someone suggested uploading the large message as a .txt file and then sending it in the API request. I will try it and update here later.
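For anyone wanting to do the same, here is a minimal sketch of that kind of retry wrapper (plain Python with exponential backoff and jitter; in real code you would catch the SDK’s `openai.APITimeoutError` rather than the built-in `TimeoutError`, which is used here only to keep the sketch self-contained):

```python
import random
import time

def retry_with_backoff(fn, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Call fn(); on a timeout, wait (exponential backoff + jitter) and retry.

    TimeoutError is a stand-in for the SDK's timeout exception
    (openai.APITimeoutError) so this sketch has no external dependencies.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except TimeoutError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, 0.1 * delay))

# Usage (illustrative, assuming an existing `client` and `msgs`):
# result = retry_with_backoff(
#     lambda: client.chat.completions.create(model="gpt-4o-mini", messages=msgs)
# )
```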