Batch API is not supported for o3-pro, despite the model card claiming it is supported

The model card for o3-pro currently says it supports the Batch API: https://platform.openai.com/docs/models/o3-pro.

When I try to use o3-pro with the Batch API, however, I get the following response:

Batch(id='batch_684a4bc5b0ac81909a63f5b8e20c4809', completion_window='24h', created_at=1749699525, endpoint='/v1/responses', input_file_id='file-TYJs2sKihpccEu5t3CTS9W', object='batch', status='failed', cancelled_at=None, cancelling_at=None, completed_at=None, error_file_id=None, errors=Errors(data=[BatchError(code='model_not_found', line=1, message="The provided model 'o3-pro' is not supported by the Batch API.", param='body.model'), BatchError(code='model_not_found', line=2, message="The provided model 'o3-pro' is not supported by the Batch API.", param='body.model')], object='list'), expired_at=None, expires_at=1749785925, failed_at=1749699527, finalizing_at=None, in_progress_at=None, metadata={'description': 'nightly eval job'}, output_file_id=None, request_counts=BatchRequestCounts(completed=0, failed=0, total=0))

Here is the code I am using to query the Batch API:

from openai import OpenAI
import time

client = OpenAI()

# Upload the JSONL file containing the batch requests
batch_input_file = client.files.create(
    file=open("batchinput.jsonl", "rb"),
    purpose="batch"
)

# Create the batch job against the Responses endpoint
batch = client.batches.create(
    input_file_id=batch_input_file.id,
    endpoint="/v1/responses",
    completion_window="24h",
    metadata={
        "description": "nightly eval job"
    }
)
print("ID: ", batch.id)

# Poll until the batch reaches a terminal state
while True:
    batch = client.batches.retrieve(batch.id)
    print(batch)
    if batch.status in ("completed", "failed"):
        break
    time.sleep(5)

# Print the results file, if one was produced
if batch.output_file_id:
    print("File Output:")
    output_file = client.files.content(batch.output_file_id)
    print(output_file.text)

# Print the error file, if one was produced
if batch.error_file_id:
    print("File Error:")
    error_file = client.files.content(batch.error_file_id)
    print(error_file.text)
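
Note that for this particular failure the error details never reach an error file (error_file_id is None in the dump above); they are attached to the batch object itself. A minimal sketch for surfacing them from the script above:

# When the batch is rejected before any requests run, the failure reasons
# live on batch.errors rather than in an error file.
if batch.status == "failed" and batch.errors:
    for err in batch.errors.data:
        print(f"line {err.line}: [{err.code}] {err.message}")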

And here is the JSONL file (batchinput.jsonl):

{"custom_id": "request-1", "method": "POST", "url": "/v1/responses", "body": {"model": "o3-pro", "input": [{"role": "system", "content": "You are a helpful assistant."},{"role": "user", "content": "Hello world!"}],"max_output_tokens": 10000}}
{"custom_id": "request-2", "method": "POST", "url": "/v1/responses", "body": {"model": "o3-pro", "input": [{"role": "system", "content": "You are an unhelpful assistant."},{"role": "user", "content": "Hello world!"}],"max_output_tokens": 10000}}

I have verified that this code works for other models such as GPT-4o and o1. Using o3-pro-2025-06-10 instead of o3-pro produces the same behavior.
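
As an additional check, a direct (non-batch) Responses call can confirm whether the model name is accepted outside the Batch API; a minimal sketch using the same request body as the JSONL above:

# Sanity check: call o3-pro through the Responses API directly (not batched)
response = client.responses.create(
    model="o3-pro",
    input=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello world!"},
    ],
    max_output_tokens=10000,
)
print(response.output_text)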

Same issue here. It’s a blocker for running a large set of tests on it.