Experiencing "Request Timed Out" Errors with gpt-4o-mini Despite Low Usage

Hello OpenAI Community,

I’m encountering consistent “Request timed out” errors when using the gpt-4o-mini model via the API. Here’s some context:

  • Tier: Tier 4
  • Model: gpt-4o-mini
  • Usage: Approximately 400 RPM, well within the limits of 10,000 RPM and 10,000,000 TPM
  • Purpose: Processing large text payloads to identify brands; reducing payload size isn’t feasible due to the nature of the task
  • Response Handling: Not using streaming responses, as the application aggregates data for statistical analysis

I’ve ensured that the request payloads are within token limits and have adjusted client-side timeout settings to accommodate longer processing times. Despite these measures, the timeouts persist.
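For context, this is roughly the client-side mitigation I have in place: a longer timeout plus retries with exponential backoff. The sketch below is generic Python; `flaky_call` is a stub standing in for the actual chat-completions request (with the official `openai` SDK you can get similar behavior by passing `timeout=` and `max_retries=` to the `OpenAI(...)` constructor, so treat this as illustrative only).

```python
import random
import time

def with_retries(fn, max_retries=3, base_delay=1.0):
    """Call fn(), retrying on TimeoutError with exponential backoff plus jitter."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except TimeoutError:
            if attempt == max_retries:
                raise  # out of retries, surface the timeout to the caller
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)

# Demo with a stub that times out twice, then succeeds;
# in practice fn would wrap client.chat.completions.create(...).
attempts = {"n": 0}

def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("request timed out")
    return "ok"

result = with_retries(flaky_call, base_delay=0.01)
print(result)  # "ok" after two retried timeouts
```

Even with this wrapper in place, a fair share of requests exhaust all retries, which is why I suspect the issue is server-side rather than in my client configuration.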

Has anyone else faced similar issues with gpt-4o-mini under these conditions? Any insights or suggestions would be greatly appreciated.

Thank you!

I’m facing a similar issue and can’t figure out why it’s happening. Were you able to resolve it?

Are you getting the response from GPT in JSON format?

I’ve also been getting a lot of timeouts, starting on June 3rd. This is really disrupting service.

Put your large text data dumps in a .txt file and upload that instead of pasting them directly into the message; that should help.
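A minimal sketch of that approach, assuming the `openai` Python SDK and its Files API (`client.files.create`): write the payload to a .txt file first, then upload it rather than inlining it in the message. The `dump_payload` helper and the `payload.txt` filename are my own illustrative names, and the upload step is guarded so the snippet runs without credentials.

```python
import os
from pathlib import Path

def dump_payload(text: str, path: str = "payload.txt") -> Path:
    """Write a large text payload to a .txt file instead of inlining it."""
    p = Path(path)
    p.write_text(text, encoding="utf-8")
    return p

payload_file = dump_payload("large brand-detection corpus goes here")

# Uploading requires network access and credentials; only attempt it
# when an API key is configured in the environment.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    uploaded = client.files.create(
        file=payload_file.open("rb"),
        purpose="assistants",  # the file is then referenced by ID, not re-sent
    )
    print(uploaded.id)
```

Whether this actually avoids the timeouts depends on where the latency comes from; it mainly helps when the bottleneck is transmitting the raw payload with every request.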