How to handle the Batch API token limit?

BatchError(code='token_limit_exceeded', line=None, message='Enqueued token limit reached for gpt-3.5-turbo-0125 in organization …

One solution is to split the .jsonl file that holds the prompts into multiple files, each staying under the model's enqueued token limit (what the error above is reporting), and submit them as batches one by one.
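
Here's a minimal sketch of that splitting approach in Python, assuming each line is a /v1/chat/completions request and that counting only the message contents with tiktoken approximates the enqueued-token accounting (the exact formula isn't documented); `batch_input.jsonl` and `TOKEN_BUDGET` are placeholder names:

```python
# Sketch: split a batch input file into chunks that stay under a token budget.
# Assumes each line looks like {"custom_id": ..., "method": ..., "url": ..., "body": {...}}.
import json
import tiktoken

TOKEN_BUDGET = 1_000_000  # placeholder: set this below your model's enqueued token limit
enc = tiktoken.encoding_for_model("gpt-3.5-turbo-0125")

def request_tokens(line: str) -> int:
    # Rough estimate: count only the message contents.
    body = json.loads(line)["body"]
    return sum(len(enc.encode(m["content"])) for m in body["messages"])

chunk, chunk_tokens, part = [], 0, 0
with open("batch_input.jsonl") as f:
    for line in f:
        n = request_tokens(line)
        if chunk and chunk_tokens + n > TOKEN_BUDGET:
            with open(f"batch_part_{part}.jsonl", "w") as out:
                out.writelines(chunk)
            chunk, chunk_tokens, part = [], 0, part + 1
        chunk.append(line)
        chunk_tokens += n
if chunk:  # write the final partial chunk
    with open(f"batch_part_{part}.jsonl", "w") as out:
        out.writelines(chunk)
```

You'd then create one batch per part and wait for each to drain from the queue before submitting the next, since the limit applies to everything enqueued at once.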

Hi,
I'm getting the same error. Does the token limit count only the content of the messages, or do custom_id, model, and the other request fields count toward it as well?

I have a 1,350,000 enqueued-token limit for gpt-4-turbo, but every file I upload fails with "enqueued token limit reached".

I am very confident that this is a bug on OpenAI’s side. It undermines the main selling point of “batch processing”.


There's also a new limit in the docs: 50,000 lines (requests) per file.
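
So on top of the token budget, each file needs to stay under 50,000 lines. A quick sketch for enforcing the line cap alone (file names are placeholders):

```python
# Sketch: split a batch input file into parts of at most 50,000 lines each.
from itertools import islice

MAX_LINES = 50_000
with open("batch_input.jsonl") as f:
    for part, lines in enumerate(iter(lambda: list(islice(f, MAX_LINES)), [])):
        with open(f"batch_part_{part}.jsonl", "w") as out:
            out.writelines(lines)
```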