We’re in usage tier 5 and got this error with the Batch API:
“Enqueued token limit reached for gpt-4o-mini in organization X. Limit: 1,000,000 enqueued tokens. Please try again once some in_progress batches have been completed.”
I’m getting the same error even though we are on Tier 5. I see several reports of this issue already. Is anyone from OpenAI monitoring this, and could they offer a suggestion?
I’m experiencing a similar issue. My organization is on Tier 5, yet occasionally, even a single batch fails due to exceeding the 1,000,000 enqueued token limit. Interestingly, when I retrigger the same batch a few hours later, it processes without any problems and doesn’t encounter any rate-limiting constraints. This error consistently seems to occur around midnight. Does anyone know if OpenAI imposes any time-based system constraints that might explain this behavior?
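Since retriggering the same batch a few hours later succeeds, a simple workaround is to wrap batch creation in a retry loop that backs off when the enqueued-token-limit error appears. Below is a minimal sketch; `retry_until_enqueued` and the `RuntimeError` catch are my own illustrative names — in real code, `create_batch` would be a call like `client.batches.create(...)` from the official `openai` package, and you would catch the SDK's rate-limit exception instead.

```python
import time

def retry_until_enqueued(create_batch, max_attempts=6, wait_seconds=1800):
    """Call `create_batch` until it stops failing with the enqueued-token-limit
    error, sleeping between attempts so in_progress batches can drain.

    `create_batch` is any zero-argument callable; in production it would wrap
    client.batches.create(input_file_id=..., endpoint=..., completion_window="24h").
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return create_batch()
        except RuntimeError as err:  # real code: catch the openai SDK's error type
            # Re-raise anything that isn't the enqueued-token-limit error,
            # or if we've exhausted our attempts.
            if "Enqueued token limit" not in str(err) or attempt == max_attempts:
                raise
            time.sleep(wait_seconds)  # wait for other batches to complete
```

This doesn't fix the underlying limit accounting, but it avoids hand-retriggering failed batches at odd hours.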