We’re in usage tier 5 and got this error with the Batch API:
“Enqueued token limit reached for gpt-4o-mini in organization X. Limit: 1,000,000 enqueued tokens. Please try again once some in_progress batches have been completed.”
Getting the same error even though we are in Tier 5. I see several issues reported on this topic… is anyone from OpenAI monitoring this, and could they provide some suggestions?
I’m experiencing a similar issue. My organization is on Tier 5, yet occasionally, even a single batch fails due to exceeding the 1,000,000 enqueued token limit. Interestingly, when I retrigger the same batch a few hours later, it processes without any problems and doesn’t encounter any rate-limiting constraints. This error consistently seems to occur around midnight. Does anyone know if OpenAI imposes any time-based system constraints that might explain this behavior?
I’m having this problem as of May 2025. I’m waiting for any request to complete before sending the next one. Even the smallest request gets rejected. I created a new API key and it’s working for now. We’ll see how long that lasts. Worst case I can create multiple API keys.
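For context, this is roughly the loop I use to wait for one batch before sending the next, with the official Python SDK. The file name is just a placeholder and the polling interval is arbitrary:

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the JSONL batch input file.
batch_file = client.files.create(file=open("input.jsonl", "rb"), purpose="batch")

# Create the batch job.
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)

# Poll until the batch reaches a terminal state before submitting the next one.
while batch.status not in ("completed", "failed", "expired", "cancelled"):
    time.sleep(60)
    batch = client.batches.retrieve(batch.id)

print(batch.id, batch.status)
```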
Nope. Weird. I believe it’s a bug. Changing API Keys worked for a very short time before stopping. I have no batches in progress and can’t submit any without errors. I’ll wait 24 hours before giving up.
I have the same issue. The API rejects every request, even though I gradually reduced it to a smaller number of samples that is definitely not beyond the limit.
Update: I tried creating different keys but it still doesn’t work. When I tried another account, everything worked smoothly. My guess at this point is that the enqueued token limit also counts the tokens of failed batches.
I’m fairly convinced there is a bug in how the enqueued tokens are calculated for the Batch API.
Background: I sliced my data equally into slices of 152 samples each and submitted the batches one by one. The average token usage (input + output) from finished batches is ~1050 tokens/sample. I tried this with a Tier 1 account using GPT-4.1-mini.
The first time it failed, I resubmitted that slice and it worked. But when I submitted the next one, it always failed. It is very frustrating that you don’t know what is happening, and you have to wait 24 hours to continue.
This is not the first time it has happened, and I’m trying to understand how to avoid it.
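In the meantime, this is the retry loop I’m experimenting with to avoid the 24-hour wait. It assumes the limit error surfaces as a failed batch status rather than an exception from `batches.create` (which is what I’ve seen), and `submit_and_wait` is just my own helper name, not anything from the SDK:

```python
import time
from openai import OpenAI

client = OpenAI()

TERMINAL = ("completed", "failed", "expired", "cancelled")

def submit_and_wait(input_file_id, max_retries=6):
    """Submit one slice; if it ends up failed (e.g. on the enqueued token
    limit), back off and resubmit instead of giving up for 24 hours."""
    delay = 300  # start with a 5-minute wait between retries
    for attempt in range(max_retries):
        batch = client.batches.create(
            input_file_id=input_file_id,
            endpoint="/v1/chat/completions",
            completion_window="24h",
        )
        # Poll until the batch leaves the queue.
        while batch.status not in TERMINAL:
            time.sleep(60)
            batch = client.batches.retrieve(batch.id)
        if batch.status == "completed":
            return batch
        print(f"Attempt {attempt + 1}: status {batch.status}, retrying in {delay}s")
        time.sleep(delay)
        delay = min(delay * 2, 3600)  # exponential backoff, capped at one hour
    raise RuntimeError("Slice kept failing after all retries")
```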
I got the same error. I have tried all day and changed API keys, but all batches fail. I don’t have any batches pending, so I don’t understand what is going on. I am in Tier 1 at the moment and evaluating whether to upgrade to a higher tier with my organization, but I see people in Tier 5 having the same issue. Can someone help here?
I had the same problem.
In my case, my batch file submission was using too many tokens (tokens as in NLP tokenization). Cutting it shorter solved it for me.
You can use the following OpenAI page to check your token counts:
https://platform.openai.com/tokenizer
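If you’d rather count locally than paste everything into the web tokenizer, a rough sketch with tiktoken can get you close. It assumes your batch file targets /v1/chat/completions with plain-text message content, and it only counts the input side:

```python
import json
import tiktoken

# gpt-4o-mini / gpt-4.1-mini use the o200k_base encoding; fall back if the
# installed tiktoken version doesn't recognise the model name.
try:
    enc = tiktoken.encoding_for_model("gpt-4o-mini")
except KeyError:
    enc = tiktoken.get_encoding("o200k_base")

total = 0
with open("batch_input.jsonl") as f:
    for line in f:
        request = json.loads(line)
        for message in request["body"]["messages"]:
            total += len(enc.encode(message["content"]))

print(f"~{total} input tokens in this file (rough count; output tokens not included)")
```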
I’ve had this exact problem since around December last year, and it still persists.
Currently we are in Tier 2 with a batch queue limit of 20,000,000 tokens. Right now I’m running two batch jobs with 200,000 tokens each (I track this information for every job), and on creating a third job, also with 200,000 tokens, I get the error:
Enqueued token limit reached for gpt-4o-mini in organization xxx. Limit: 20,000,000 enqueued tokens. Please try again once some in_progress batches have been completed.
I’m fully convinced there is a bug in their calculation for batch jobs but it is impossible to reach OpenAI’s team.
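For what it’s worth, this is roughly how I check what should still be enqueued on my side; which statuses actually count against the queue is my assumption, and the per-job token figures come from my own tracking when I build the input files:

```python
from openai import OpenAI

client = OpenAI()

# Statuses I assume still count against the enqueued token limit.
ACTIVE = {"validating", "in_progress", "finalizing"}

active = [b for b in client.batches.list(limit=100).data if b.status in ACTIVE]
for b in active:
    print(b.id, b.status, b.request_counts)
print(f"{len(active)} batch(es) should currently be counting against the queue")
```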
I have the same issue as well.
It says I’ve used all the tokens. When I query the API for the number of enqueued tokens, I get “0”. However, when I submit the next batch, I get an error that the enqueued token limit has been exceeded. Clearly it hasn’t been, but even after waiting a while, I can’t submit more jobs.
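The only workaround I haven’t fully ruled out is explicitly cancelling anything that isn’t in a terminal state, in case some old job is still being counted against the queue. Rough sketch; the set of states to skip is my guess:

```python
from openai import OpenAI

client = OpenAI()

# Statuses I assume no longer hold queue capacity; everything else gets cancelled.
SKIP = {"completed", "failed", "expired", "cancelled", "cancelling"}

# Iterating the list auto-paginates through the full batch history.
for batch in client.batches.list(limit=100):
    if batch.status not in SKIP:
        print(f"Cancelling {batch.id} (status: {batch.status})")
        client.batches.cancel(batch.id)
```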