I am facing a weird issue when using the OpenAI API for batch processing. The batch has been stuck in ‘finalizing’ status for days. I turned to the Help Center, and here is what they replied:
Make a POST request to the following endpoint: https://api.openai.com/v1/batches/{batch_id}/cancel
Replace {batch_id} with the ID of the batch you want to cancel.
Once the cancellation request is made, the batch’s status will change to cancelling. It may take up to 10 minutes for the batch to fully cancel, after which the status will change to cancelled.
However, I am not sure how to make a POST request, though I have tried:
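For reference, here is a minimal sketch of that cancel call using only Python's standard library; the batch ID and API key shown are placeholders, and the endpoint is the one quoted above:

```python
import json
import urllib.request

API_BASE = "https://api.openai.com/v1"

def build_cancel_request(batch_id: str, api_key: str) -> urllib.request.Request:
    """Build the POST request for the batch-cancel endpoint quoted above."""
    return urllib.request.Request(
        f"{API_BASE}/batches/{batch_id}/cancel",
        headers={"Authorization": f"Bearer {api_key}"},
        method="POST",
    )

def cancel_batch(batch_id: str, api_key: str) -> dict:
    """Send the cancel request and return the parsed JSON response.

    On success the response body's "status" field should read "cancelling".
    """
    with urllib.request.urlopen(build_cancel_request(batch_id, api_key)) as resp:
        return json.load(resp)

# Hypothetical usage (placeholder values):
# cancel_batch("batch_abc123", "sk-...")
```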
The error message shows that the advice you previously received was in error.
You must inform support@openai.com that this issue cannot be resolved by a bot trained to give hypothetical nonsense in hopes that someone gives up and shuts up.
Instead, resolving this will require OpenAI staff action on your organization: releasing the call results and/or refunding the charges, and fixing the batch endpoint so that such an API call always performs correctly in the future.
It’s so hard to reach human staff through the Help Center. I have talked with two staff members, and they gave me different solutions. The first one told me to send a cancel request, just as I posted above. The second one asked me to resubmit the batch processing request, which looks unreasonable to me, as the old batches will block the new request.
The batch IDs are ‘batch_68536b39454c8190b4582ddab168efdc’ and ‘batch_685317a731648190aae4e04b127ea2d7’. They have been in ‘finalizing’ for a couple of days.
Hi, we’ve identified the issue - the output file sizes were above our internal file size limits. We have since raised them so this issue should not happen again. If you re-submit these batches they should complete successfully. Sorry for the trouble!
Could you please confirm whether they have been killed or are still being processed?
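You can also check a batch’s current status yourself with a GET request to the batch retrieval endpoint; a small sketch using the standard library (API key is a placeholder):

```python
import json
import urllib.request

API_BASE = "https://api.openai.com/v1"

def build_status_request(batch_id: str, api_key: str) -> urllib.request.Request:
    """Build the GET request that retrieves a batch object."""
    return urllib.request.Request(
        f"{API_BASE}/batches/{batch_id}",
        headers={"Authorization": f"Bearer {api_key}"},
        method="GET",
    )

def get_batch_status(batch_id: str, api_key: str) -> str:
    """Return the batch's "status" field, e.g. "finalizing" or "cancelled"."""
    with urllib.request.urlopen(build_status_request(batch_id, api_key)) as resp:
        return json.load(resp)["status"]

# Hypothetical usage with one of the IDs above (placeholder key):
# get_batch_status("batch_68536b39454c8190b4582ddab168efdc", "sk-...")
```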
Also, I’m a bit puzzled: I’m using the same code and input data (only about 1,200 sentences), with the only difference being the model type. Why is it that only the gpt-4.1-mini model encounters this issue?