Hi,
I am trying to use the Batch API with the reasoning models and structured output. I first tried a JSON schema for the structured output, but according to the docs:
" response_format: {type: “json_schema”, …}is only supported with thegpt-4o-mini,gpt-4o-mini-2024-07-18, andgpt-4o-2024-08-06` model"
And indeed I get an error when following this approach, which makes sense:
{"status_code": 400, "request_id": "…", "body": {"error": {"message": "Invalid parameter: 'response_format' of type 'json_schema' is not supported with this model. Learn more about supported models at the Structured Outputs guide …
Can anyone help with doing the structured output part with `text_format` and a `BaseModel` instead of a JSON schema for batch inference?
A `BaseModel` as input to the `response_format` (or `text` → `format`) parameter of the `.parse()` method is only a convenience of the Python SDK. It unpacks and wraps the model's schema into a strict structured-output specification, the JSON object that goes into the RESTful request over the wire, exactly as if you had hand-coded it yourself.
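For illustration, a minimal sketch of that equivalence. The `Step` model and its fields are invented placeholders, but the wrapper follows the documented strict `json_schema` shape:

```python
from pydantic import BaseModel


class Step(BaseModel):  # hypothetical example model
    explanation: str
    output: str


# Hand-coded equivalent of what the SDK derives from Step:
# a strict json_schema response_format object for the raw request body.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "Step",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "explanation": {"type": "string"},
                "output": {"type": "string"},
            },
            "required": ["explanation", "output"],
            "additionalProperties": False,
        },
    },
}
```

That object can go straight into the `body` of a batch JSONL line, no SDK involved.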
Were the batch API not rejecting the parameter outright, one way to produce the API calls that go into the batch JSONL file would be to install a replacement HTTP transport on the SDK client, one that logs requests instead of sending them (see the GitHub README.md).
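A minimal sketch of that idea, assuming the Python SDK's `http_client` constructor parameter and httpx's `MockTransport`; the dummy response carries just enough fields for the SDK to deserialize the (discarded) return value:

```python
import json

import httpx
from openai import OpenAI

captured: list[dict] = []


def capture(request: httpx.Request) -> httpx.Response:
    # Log the JSON request body instead of sending it anywhere.
    captured.append(json.loads(request.content))
    # Return a minimal well-formed chat completion so the SDK call completes.
    return httpx.Response(
        200,
        json={
            "id": "dummy",
            "object": "chat.completion",
            "created": 0,
            "model": "dummy",
            "choices": [],
        },
    )


client = OpenAI(
    api_key="sk-none",  # never transmitted; the transport short-circuits
    http_client=httpx.Client(transport=httpx.MockTransport(capture)),
)

# Each call below is captured rather than sent.
client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "Extract the steps."}],
    response_format=response_format,  # the hand-coded object from the sketch above
)

# Emit the captured bodies in Batch API JSONL format.
with open("batch.jsonl", "w") as f:
    for i, body in enumerate(captured):
        f.write(json.dumps({
            "custom_id": f"request-{i}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": body,
        }) + "\n")
```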
It sounds like you are sending Responses calls via batch. I would instead send them as `/v1/chat/completions` requests and see whether the `response_format` parameter is then accepted (with a 24-hour turnaround on self-answering).
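For reference, a single batch JSONL line along those lines might look like this (the model name and schema contents are placeholders, not a tested recipe):

```json
{"custom_id": "request-0", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "o3-mini", "messages": [{"role": "user", "content": "Extract the steps."}], "response_format": {"type": "json_schema", "json_schema": {"name": "Step", "strict": true, "schema": {"type": "object", "properties": {"explanation": {"type": "string"}, "output": {"type": "string"}}, "required": ["explanation", "output"], "additionalProperties": false}}}}
```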