Enqueued token limit reached

We’re on usage tier 5 and got this error from the Batch API:
Enqueued token limit reached for gpt-4o-mini in organization X. Limit: 1,000,000 enqueued tokens. Please try again once some in_progress batches have been completed.

Is this a bug or do we misunderstand the limits?
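The error text suggests capacity frees up as in_progress batches finish, so one workaround is to poll for batches that are still pending before submitting a new one. A minimal sketch of the filtering step (the set of "busy" statuses is an assumption, not something confirmed in this thread, and the `batches` input is a hypothetical list of dicts shaped like the Batch API's list response):

```python
# Statuses assumed to still hold enqueued-token capacity (an assumption;
# OpenAI has not documented exactly which statuses count against the limit).
BUSY_STATUSES = {"validating", "in_progress", "finalizing"}

def pending_batches(batches):
    """Return the batches whose tokens presumably still count against
    the organization's enqueued-token limit."""
    return [b for b in batches if b["status"] in BUSY_STATUSES]
```

With the official Python SDK you would pass something like `client.batches.list(limit=100).data` here, then retry submission once the result is empty.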

Getting the same error even though we’re on tier 5. I’ve seen several issues reported on this topic… is anyone from OpenAI monitoring this, and could they offer a suggestion?

Does anyone care to answer this question? I’m on tier 5 and can still only run one batch task at a time.

I’m experiencing a similar issue. My organization is on Tier 5, yet occasionally even a single batch fails for exceeding the 1,000,000 enqueued-token limit. Interestingly, when I retrigger the same batch a few hours later, it processes without hitting any rate-limiting constraints. The error consistently seems to occur around midnight. Does anyone know whether OpenAI imposes time-based system constraints that might explain this behavior?
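Since the batch objects returned by the API don’t appear to report enqueued token counts, one workaround is to track your own per-batch token estimates and check headroom before submitting. A minimal sketch under that assumption (the function name and the counting approach are hypothetical; the 1,000,000 figure is taken from the error message quoted at the top of this thread):

```python
# Limit quoted in the error message for gpt-4o-mini on this organization.
ENQUEUED_TOKEN_LIMIT = 1_000_000

def can_enqueue(new_batch_tokens, in_progress_token_counts,
                limit=ENQUEUED_TOKEN_LIMIT):
    """Return True if a new batch of `new_batch_tokens` estimated tokens
    would fit under the enqueued-token limit, given your own estimates
    for the batches that are still in progress."""
    return sum(in_progress_token_counts) + new_batch_tokens <= limit
```

If `can_enqueue` returns False, sleep and re-check rather than submitting, which avoids burning a failed batch on the "Enqueued token limit reached" error.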