Hi,
I am using the Batch API (https://platform.openai.com/batches/) with gpt-4o-mini and I get a token limit error:
Enqueued token limit reached for gpt-4o-mini in organization org-****. Limit: 2,000,000 enqueued tokens. Please try again once some in_progress batches have been completed.
What I don't understand is that the same batch went through a couple of hours ago.
I thought the limit applied to 'enqueued tokens' (as explained in the error message), meaning that if nothing else is running it should be fine.
As I understood it, this limit should not be a per-day quota.
Can someone explain to me what's wrong?
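For what it's worth, my reading of the error message is that the limit is shared across all batches that are still pending (validating or in_progress) in the organization, not a per-day quota. A minimal sketch of that interpretation, with a hypothetical helper (the function name and the way pending tokens are tallied are my assumptions, not anything from the API docs):

```python
# Assumption: the "enqueued token" limit counts the tokens of every batch
# that has not yet reached a terminal state. With the openai SDK you would
# sum over batches whose status is still pre-terminal; here the pending
# token counts are just passed in as a list.
ENQUEUED_LIMIT = 2_000_000  # the limit quoted in the error for gpt-4o-mini

def fits_enqueued_limit(pending_token_counts, new_batch_tokens, limit=ENQUEUED_LIMIT):
    """Return True if a new batch would fit under the shared enqueued limit."""
    return sum(pending_token_counts) + new_batch_tokens <= limit

# With 1.5M tokens still enqueued, an 800k-token batch is rejected...
print(fits_enqueued_limit([1_500_000], 800_000))  # False
# ...but once those batches complete, the very same batch fits:
print(fits_enqueued_limit([], 800_000))           # True
```

That would explain why the same batch passed a couple of hours ago: at that moment nothing else was enqueued.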
I get the same error message for gpt-4.1-nano. It's confusing: the limits page shows 2M tokens per batch, but it seems the limit is actually 2M tokens per day (TPD).
I have the same issue. My batches fail immediately; they don't even validate. I am submitting them from Python code. Some of my failing batches are:
batch_68c4611cae3c8190b99e872838e0715c
batch_68c45fb62564819094252ef20182546a
Lastly, I have some batches older than 30 days that are still listed, e.g.:
batch_685a3f4a716081908a915e628b46a6a5
UPDATE:
I corrected any errors I found in my request files, and I reduced the size of my files to account for output tokens as well.
However, to no avail. Whether I submit requests via code or the dashboard, my batches seem to be randomly accepted or rejected.
For example, this 16 MB batch, batch_68c5140f503c8190bf6384299dbccddc, was rejected right away.
Any help would be much appreciated.
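On accounting for output tokens: my understanding is that each request's reserved output (its max_tokens) counts toward the enqueued total, not just the prompt. A rough estimator can be sketched like this; note the ~4 characters per token figure is a crude heuristic (not the real tokenizer), and the default output reservation of 1000 tokens is my assumption:

```python
import json

def estimate_request_tokens(request, default_max_tokens=1000):
    """Rough enqueued-token cost of one Batch API JSONL request line.

    Heuristic: prompt characters / 4 for input, plus the request's
    max_tokens (or an assumed default) reserved for output.
    """
    body = request["body"]
    prompt_chars = sum(len(m.get("content", "")) for m in body.get("messages", []))
    input_tokens = prompt_chars // 4
    output_tokens = body.get("max_tokens", default_max_tokens)
    return input_tokens + output_tokens

def estimate_batch_tokens(jsonl_lines):
    """Sum the estimate over every request line in a batch file."""
    return sum(estimate_request_tokens(json.loads(line)) for line in jsonl_lines)

# Example with two in-memory JSONL request lines:
lines = [
    json.dumps({"custom_id": "req-1", "method": "POST", "url": "/v1/chat/completions",
                "body": {"model": "gpt-4o-mini", "max_tokens": 200,
                         "messages": [{"role": "user", "content": "x" * 400}]}}),
    json.dumps({"custom_id": "req-2", "method": "POST", "url": "/v1/chat/completions",
                "body": {"model": "gpt-4o-mini",
                         "messages": [{"role": "user", "content": "y" * 800}]}}),
]
# 400/4 + 200 for req-1, plus 800/4 + 1000 (default) for req-2
print(estimate_batch_tokens(lines))  # 1500
```

Running this over a request file before submitting at least tells you whether the file alone could exceed the 2,000,000-token limit.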