Hello OpenAI Community,
I’m encountering consistent “Request timed out” errors when using the gpt-4o-mini
model via the API. Here’s some context:
- Tier: Tier 4
- Model: gpt-4o-mini
- Usage: Approximately 400 RPM, well within the limits of 10,000 RPM and 10,000,000 TPM
- Purpose: Processing large text payloads to identify brands; reducing payload size isn’t feasible due to the nature of the task
- Response Handling: Not using streaming responses, as the application aggregates data for statistical analysis
I’ve ensured that the request payloads are within token limits and have adjusted client-side timeout settings to accommodate longer processing times. Despite these measures, the timeouts persist.
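In case it helps others compare setups, here is a minimal retry-with-backoff sketch of the kind of client-side handling described above. It is generic Python, not the OpenAI SDK itself: `make_request` is a placeholder for whatever zero-argument callable wraps your API call, and the exception type and delay values are assumptions you would adapt to your client.

```python
import random
import time


def call_with_retries(make_request, max_retries=3, base_delay=2.0):
    """Retry a request on timeout with exponential backoff plus jitter.

    `make_request` is any zero-argument callable that raises TimeoutError
    (or your SDK's timeout exception) when a request times out.
    """
    for attempt in range(max_retries + 1):
        try:
            return make_request()
        except TimeoutError:
            if attempt == max_retries:
                raise  # give up after the final attempt
            # Exponential backoff (2s, 4s, 8s, ...) with random jitter
            # so parallel workers don't retry in lockstep.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 1)
            time.sleep(delay)
```

If you are on the official Python SDK, I believe the client constructor also accepts `timeout` and `max_retries` options directly, which may be simpler than hand-rolling this, though it wasn't enough in my case.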
Has anyone else faced similar issues with gpt-4o-mini under these conditions? Any insights or suggestions would be greatly appreciated.
Thank you!