Batch API Jobs Expiring Prematurely with GPT-4o

Hi, I’m encountering issues with the Batch API when using GPT-4o.

  1. Batch Input Details:
    Batch Input ID: batch_679d3995e49881908c7b1ffb2dd205fa
    Model: GPT-4o
    Batch Queue Limit: 200,000,000 TPD (well within limits)
    Tasks per Batch: 100 (well below the limits)
  2. Issue Description:
  • The batch expires within about 10 minutes, even though its token and task counts are far below the limits.
  • Other batches with similar configurations run for over 16 hours.
  • The code runs on AWS, but some batches only seem to process after I refresh the OpenAI dashboard, which suggests they aren't being picked up automatically from our server.
  3. Questions:
  • Why are some batches expiring quickly while others run unusually long?
  • Is there an issue with the Batch API’s queue processing?
  • Could this behavior be related to API rate limits or server-side handling on OpenAI's end?
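
For what it's worth, here is a minimal sketch of how I could poll batch status programmatically instead of relying on the dashboard. The `fetch_status` callable is a stand-in for `client.batches.retrieve(batch_id).status` from the official `openai` Python SDK; the terminal status names are assumptions based on the Batch API docs.

```python
import time

# Statuses at which a batch stops changing (assumed set, per Batch API docs).
TERMINAL_STATUSES = {"completed", "failed", "expired", "cancelled"}

def wait_for_batch(fetch_status, interval_s=60, max_checks=1000):
    """Poll until the batch reaches a terminal status; return that status.

    fetch_status: zero-argument callable returning the current status string,
    e.g. lambda: client.batches.retrieve("batch_...").status
    """
    for _ in range(max_checks):
        status = fetch_status()
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(interval_s)  # back off between status checks
    return "timed_out"
```

With the SDK this would be wired up as `wait_for_batch(lambda: client.batches.retrieve("batch_679d...").status)`, which would at least tell me exactly when the batch flips to `expired` rather than discovering it on a dashboard refresh.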

Any insights or suggestions would be appreciated!