I just started using the Batch API and I'm getting a 403 error saying my project does not have access to the model. Reading other reports of this problem, I already tried different models (4o-mini and 4.1-mini), created a new project with a new key, and retested both models. I'm nowhere near any limits, and even after removing them I still get the error. Any ideas?
Do your normal requests (without batch) work normally?
Yes, normal requests work. The error occurs only with the Batch API.
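For reference, the input file I'm generating looks roughly like this (a simplified sketch; the real run has 1000 requests, and the model name is the one from my tests):

```python
import json

# Simplified sketch of the .jsonl batch input: one request per line,
# all against the /v1/chat/completions endpoint with the same model.
tasks = [
    {
        "custom_id": f"task-{i}",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": f"Request {i}"}],
        },
    }
    for i in range(3)
]

with open("batch_input.jsonl", "w") as f:
    f.write("\n".join(json.dumps(t) for t in tasks))
```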
Have you tried the examples in the cookbook?
Yes, I just tried the cookbook examples. After the batch job completes I get this:
Batch(id='batch_xxxxxxx', completion_window='24h', created_at=x, endpoint='/v1/chat/completions', input_file_id='file-xxx', object='batch', status='completed', cancelled_at=None, cancelling_at=None, completed_at=xxx, error_file_id='file-xxxx', errors=None, expired_at=None, expires_at=xxx, failed_at=None, finalizing_at=xxx, in_progress_at=xxx, metadata=None, output_file_id=None, request_counts=BatchRequestCounts(completed=0, failed=1000, total=1000))
The error file still shows the 403 error for each request:
{"id": "batch_req_xxx", "custom_id": "task-0", "response": {"status_code": 403, "request_id": "xxx", "body": {"error": {"message": "Project xxx does not have access to model gpt-4o-mini-2024-07-18-batch", "type": "invalid_request_error", "param": null, "code": "model_not_found"}}}, "error": null}
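In case it's useful, this is roughly how I'm inspecting the error file (a sketch; in the real script the lines come from downloading the error file by its id, but here one record is inlined with the message shortened):

```python
import json

# One record from the batch error file, inlined and shortened for
# illustration; every one of the 1000 lines looks like this.
line = (
    '{"id": "batch_req_xxx", "custom_id": "task-0", '
    '"response": {"status_code": 403, "body": {"error": '
    '{"code": "model_not_found", "message": '
    '"Project does not have access to model gpt-4o-mini-2024-07-18-batch"}}}, '
    '"error": null}'
)

rec = json.loads(line)
err = rec["response"]["body"]["error"]
print(rec["custom_id"], rec["response"]["status_code"], err["code"])
```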
I checked again, and all models used in the cookbook are on the allowed-models list (for my normal project and also for the new project I created because of the error); at least client.models.list() shows 'gpt-4o-mini'. As far as I understand, the -batch version is only an alias the API appends for billing.
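To double-check that the internal name maps back to an allowed model, I compared the id from the error against the public id (a quick sketch; the string handling reflects my assumption about the naming scheme, i.e. public id + date + "-batch"):

```python
# The error names an internal id ending in "-batch"; the allowed-models
# list shows the public id. Strip the suffix and the date to compare.
error_model = "gpt-4o-mini-2024-07-18-batch"
base = error_model.removesuffix("-batch")   # drop the billing alias suffix
family = base.rsplit("-", 3)[0]             # drop the YYYY-MM-DD date stamp
print(family)
```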
Strange indeed.
One more thing you could try before reaching support: go to your organization settings > project (down below, not the first one) > limits, and check the model under "Configure which models can be used in this project."
You may need to create a new project and API key if the issue persists. The typical symptom calling for such a "reset" is that the internal model name in the returned error message ends with an appended -batch.
You can also try the full versioned model name, such as gpt-4.1-2025-04-14.