Enqueued token limit reached

We’re in usage tier 5 and got this error with the Batch API:
Enqueued token limit reached for gpt-4o-mini in organization X. Limit: 1,000,000 enqueued tokens. Please try again once some in_progress batches have been completed.

Is this a bug or do we misunderstand the limits?


Getting the same error even though we are in Tier 5. I see several issues reported on this topic… Is anyone from OpenAI monitoring this, and could they provide some suggestions?

Does anyone care to answer this question? I’m on Tier 5 and can still only run one batch task at a time.

I’m experiencing a similar issue. My organization is on Tier 5, yet occasionally, even a single batch fails due to exceeding the 1,000,000 enqueued token limit. Interestingly, when I retrigger the same batch a few hours later, it processes without any problems and doesn’t encounter any rate-limiting constraints. This error consistently seems to occur around midnight. Does anyone know if OpenAI imposes any time-based system constraints that might explain this behavior?
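In the meantime, here is a minimal retry sketch of the “retrigger a few hours later” workaround, assuming the failure shows up as a batch with status failed and details in batch.errors (the helper name, wait time, and endpoint are illustrative):

import time
from openai import OpenAI

client = OpenAI()

def create_batch_with_retry(file_id, max_attempts=5, wait_seconds=3600):
    # Retry creation, since the same batch often succeeds a few hours later
    for attempt in range(max_attempts):
        batch = client.batches.create(
            input_file_id=file_id,
            endpoint="/v1/chat/completions",
            completion_window="24h",
        )
        # Poll until the batch leaves the initial validating state
        while batch.status == "validating":
            time.sleep(30)
            batch = client.batches.retrieve(batch.id)
        if batch.status != "failed":
            return batch  # accepted and running (or already finished)
        print(f"Attempt {attempt + 1} failed: {batch.errors}")
        time.sleep(wait_seconds)
    raise RuntimeError("Batch kept hitting the enqueued token limit")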


Same here. Did you ever find a solution to this?

I’m having this problem as of April 2025.

I have no other batch jobs running and yet I’m told:

Please try again once some in_progress batches have been completed.

Has this been confirmed to be a bug by anyone anywhere? Is there a solution?

I’m having this problem as of May 2025. I’m waiting for each request to complete before sending the next one, and even the smallest request gets rejected. I created a new API key and it’s working for now; we’ll see how long that lasts. Worst case, I can create multiple API keys.

Nope. Weird. I believe it’s a bug. Changing API keys worked for a very short time before it stopped working again. I have no batches in progress and can’t submit any without errors. I’ll wait 24 hours before giving up.

I have the same issue. The API always rejects my requests, even though I gradually reduced the number of samples to one that is definitely not beyond the limit.

Update: I tried creating different keys, but it still doesn’t work. When I tried another account, everything worked smoothly. My guess at this point is that the enqueued token limit also counts the tokens of failed batches.


I’m inclined to believe there is a bug in how the enqueued tokens are calculated for the Batch API.

Background: I sliced my data into equal slices of 152 samples each and submitted the batches one by one. The average token usage (input + output) across finished batches is ~1,050 tokens per sample. I tried this with a Tier 1 account using GPT-4.1-mini.

The first time a slice failed, I resubmitted it and it worked. But when I submitted the next one, it always failed. It is very frustrating not to know what is happening, and to have to wait 24 hours to continue.

This is not the first time it has happened, and I’m trying to understand how to avoid it. For reference, my submission loop is sketched below.
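A minimal version of it, waiting for each slice to finish before enqueueing the next (file names and the polling interval are illustrative):

import time
from openai import OpenAI

client = OpenAI()

def submit_slices(path, slice_size=152):
    # Split one big batch .jsonl into slices and submit them strictly one at a time
    with open(path) as f:
        lines = f.readlines()
    for i in range(0, len(lines), slice_size):
        slice_path = f"slice_{i // slice_size}.jsonl"
        with open(slice_path, "w") as out:
            out.writelines(lines[i:i + slice_size])
        uploaded = client.files.create(file=open(slice_path, "rb"), purpose="batch")
        batch = client.batches.create(
            input_file_id=uploaded.id,
            endpoint="/v1/chat/completions",
            completion_window="24h",
        )
        # Block until this slice is done before enqueueing the next one
        while batch.status in ("validating", "in_progress", "finalizing"):
            time.sleep(60)
            batch = client.batches.retrieve(batch.id)
        print(slice_path, batch.status)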


I ran into this before. I believe it’s assuming all of your max output tokens will be used.
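If that’s the case, the enqueued estimate can sit far above real usage. A back-of-the-envelope check with the numbers from the slicing post above (max_tokens and the input share are hypothetical):

requests_per_slice = 152
avg_actual_tokens = 1050  # measured input + output per sample, from the post above
avg_input_tokens = 500    # hypothetical input tokens per request
max_tokens = 8000         # hypothetical max_tokens set on each request

actual_usage = requests_per_slice * avg_actual_tokens              # ~160k tokens
worst_case = requests_per_slice * (avg_input_tokens + max_tokens)  # ~1.29M tokens

# If the queue is charged the worst case, a single slice can exceed a
# 1,000,000-token enqueued limit even though real usage is far below it.
print(actual_usage, worst_case)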

I got the same error. I have tried all day and changed my API key, but all batches fail. I don’t have any batches pending, so I don’t understand what is going on. I am in Tier 1 at the moment and evaluating whether to upgrade to a higher tier with my organization, but I see people in Tier 5 having the same issue. Can someone help here?

I had the same problem.
In my case, my batch file submission was using too many tokens (tokens as in NLP tokenization). Cutting it down solved it for me.
You can use the following OpenAI site to check your token counts:
https://platform.openai.com/tokenizer
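For the batch input file itself, here is a small sketch that approximates the input token count per request with tiktoken (gpt-4o-family models use the o200k_base encoding; chat-format overhead tokens are ignored, and the file name is illustrative):

import json
import tiktoken

enc = tiktoken.get_encoding("o200k_base")  # encoding used by gpt-4o / gpt-4o-mini

total = 0
with open("batch_input.jsonl") as f:
    for line in f:
        body = json.loads(line)["body"]
        # Rough count: message contents only, ignoring per-message overhead
        total += sum(len(enc.encode(m["content"])) for m in body["messages"])

print(f"Approximate input tokens in this batch file: {total}")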

I’ve had this exact problem since around December last year, and it still persists.

Currently we are in Tier 2 with a batch queue limit of 20,000,000 tokens. Right now I’m running two batch jobs with 200,000 tokens each (I track this information for every job), and when I create a third job, also with 200,000 tokens, I get the error:

Enqueued token limit reached for gpt-4o-mini in organization xxx. Limit: 20,000,000 enqueued tokens. Please try again once some in_progress batches have been completed.

I’m fully convinced there is a bug in their calculation for batch jobs, but it is impossible to reach OpenAI’s team.

I have the same issue as well.
It says I’ve used all the tokens. When I query the API for the number of enqueued tokens, I get “0”. However, when I submit the next batch, I get an error that the enqueued token limit has been exceeded. Clearly it hasn’t been, but even after waiting a while, I can’t seem to submit more jobs.
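For context, the API doesn’t expose the enqueued-token total directly; here is a sketch of the kind of check I mean, approximating “enqueued” by listing batches that are still in flight:

from openai import OpenAI

client = OpenAI()

# Statuses that should still count against the enqueued token limit
IN_FLIGHT = {"validating", "in_progress", "finalizing", "cancelling"}

in_flight = [b for b in client.batches.list(limit=100) if b.status in IN_FLIGHT]
for b in in_flight:
    print(b.id, b.status, b.request_counts)

if not in_flight:
    print("Nothing in flight, yet the enqueued token limit error still appears.")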

Hello! We're currently investigating this issue and will share updates as soon as we have more information.


Hello! Could you please share the batch IDs so we can investigate this further?

batch_688a362161f08190b6aa3f9d4086f1a5
batch_688a361048f8819097294b27f12b3ce0
batch_688a35ff3970819095ef6944565968c2
batch_688a36ed5c1c8190ad49444728038544
batch_688a35dca62881908a9b7be7d3015441
batch_688a33c2752c8190a510f21050f828c6

and many more

then, in between all those fails, I’ll have successes:
batch_688a36326ea88190858b95e1d70b6aee
batch_688a33d37fb48190a0795991603dcad6

Hope that helps.

Thanks!


I tried changing my API key and it did not work. When I changed (added) job metadata, the exception stopped appearing.

from openai import OpenAI

client = OpenAI()

# Adding extra metadata to the batch made the error stop appearing
job = client.batches.create(
    input_file_id=file_id,  # ID of the previously uploaded .jsonl batch file
    endpoint="/v1/chat/completions",
    completion_window="24h",
    metadata={"key": "value", "added_key": "added_value"},
)

@erickkrueger Did you recently upgrade to tier 2? If so, when?

Can other users provide batch IDs as well? We would like to understand whether this is specific to certain accounts or a more general issue.