Batch API is not supported for o3-pro, despite the model card listing it as supported

The model card for o3-pro currently says it supports the Batch API: https://platform.openai.com/docs/models/o3-pro.

When trying to use o3-pro with the Batch API, however, I get the following response:

Batch(id='batch_684a4bc5b0ac81909a63f5b8e20c4809', completion_window='24h', created_at=1749699525, endpoint='/v1/responses', input_file_id='file-TYJs2sKihpccEu5t3CTS9W', object='batch', status='failed', cancelled_at=None, cancelling_at=None, completed_at=None, error_file_id=None, errors=Errors(data=[BatchError(code='model_not_found', line=1, message="The provided model 'o3-pro' is not supported by the Batch API.", param='body.model'), BatchError(code='model_not_found', line=2, message="The provided model 'o3-pro' is not supported by the Batch API.", param='body.model')], object='list'), expired_at=None, expires_at=1749785925, failed_at=1749699527, finalizing_at=None, in_progress_at=None, metadata={'description': 'nightly eval job'}, output_file_id=None, request_counts=BatchRequestCounts(completed=0, failed=0, total=0))
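As an aside, here is a short sketch (attribute names taken from the repr above, not part of my original script) that prints just the per-line errors instead of the whole Batch object:

# Hypothetical helper: surface only the per-line errors from a failed batch.
# Attribute names (errors.data, code, line, message) match the repr above.
if batch.errors and batch.errors.data:
    for err in batch.errors.data:
        print(f"line {err.line}: {err.code}: {err.message}")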

Here is the code I am using to query the Batch API:

from openai import OpenAI
import time

client = OpenAI()

# Upload the JSONL file containing the batch requests.
batch_input_file = client.files.create(
    file=open("batchinput.jsonl", "rb"),
    purpose="batch"
)

# Create the batch job against the Responses endpoint.
batch = client.batches.create(
    input_file_id=batch_input_file.id,
    endpoint="/v1/responses",
    completion_window="24h",
    metadata={
        "description": "nightly eval job"
    }
)
print("ID: ", batch.id)

# Poll until the batch reaches a terminal state.
while True:
    batch = client.batches.retrieve(batch.id)
    print(batch)
    if batch.status in ("completed", "failed"):
        break
    time.sleep(5)

# Print the per-request results, if any.
if batch.output_file_id:
    print("File Output:")
    output_file = client.files.content(batch.output_file_id)
    print(output_file.text)

# Print the per-request errors, if any.
if batch.error_file_id:
    print("File Error:")
    error_file = client.files.content(batch.error_file_id)
    print(error_file.text)

And here is the JSONL file (batchinput.jsonl):

{"custom_id": "request-1", "method": "POST", "url": "/v1/responses", "body": {"model": "o3-pro", "input": [{"role": "system", "content": "You are a helpful assistant."},{"role": "user", "content": "Hello world!"}],"max_output_tokens": 10000}}
{"custom_id": "request-2", "method": "POST", "url": "/v1/responses", "body": {"model": "o3-pro", "input": [{"role": "system", "content": "You are an unhelpful assistant."},{"role": "user", "content": "Hello world!"}],"max_output_tokens": 10000}}

I have verified that this code works for other models such as GPT-4o and o1. Using o3-pro-2025-06-10 produces the same failure.

Same issue here. It's blocking us from running a large set of tests.

Hello! Thanks for flagging this. Our team is investigating, and early checks show that Batch support for o3-pro is still rolling out behind an access gate. It should land for all orgs soon; we'll update the docs and follow up here once it's fully enabled.


Hello! We have lifted the flag, and all users should now be able to access o3-pro through the Batch API.


@OpenAI_Support seems like it still has issues:

BatchError(code='model_unsupported', line=1, message="The provided model 'o3-pro' with endpoint '/v1/chat/completions' is not supported by the Batch API.", param='model')

Any idea when Chat Completions support will land?
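In the meantime, here is the one change I'd expect to work around it (a sketch, untested on my account): target /v1/responses instead of /v1/chat/completions, matching the OP's JSONL.

# Hypothetical workaround: use the Responses endpoint, both here and
# in each JSONL line's "url" field.
batch = client.batches.create(
    input_file_id=batch_input_file.id,
    endpoint="/v1/responses",  # was "/v1/chat/completions"
    completion_window="24h",
)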

Hello! I wasn't able to reproduce the issue with the Chat Completions API. Please reach out to our support team at support@openai.com, as this issue seems specific to your account. We will close this thread as solved.

Amazing! No problems for you using o3-pro with Chat Completions?

The part of the documentation that says "o3-pro is available in the Responses API only" makes me believe that this "OpenAI support" account, which can only write "reach out to our support team," did not in fact try to go against the documentation and use o3-pro with Chat Completions on the batch endpoint.

Let us know if you did in fact prove that o3-pro does not require the Responses API endpoint to function, and if so, release it everywhere.
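For anyone who wants to check, a minimal sketch of that test (a plain, non-batch call; if the docs are right and o3-pro is Responses-only, this should raise an error rather than respond):

from openai import OpenAI

client = OpenAI()

# Direct (non-batch) probe: does o3-pro accept Chat Completions at all?
try:
    resp = client.chat.completions.create(
        model="o3-pro",
        messages=[{"role": "user", "content": "Hello world!"}],
    )
    print(resp.choices[0].message.content)
except Exception as e:
    print("Chat Completions rejected o3-pro:", e)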