Clarity on gpt-4.1 and o4-mini structured output support

Does anyone have clarity on Structured Output support for gpt-4.1 and o4-mini?

I’ve tested both gpt-4.1 and o4-mini with response_format = json_schema in the Chat Completions API, and I get an ‘Unsupported model’ error. The same code works perfectly with gpt-4o and o3-mini in my Salesforce AI agent. Unstructured API calls to both gpt-4.1 and o4-mini work fine, so it’s not an access issue.
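
For reference, the failing call looks roughly like the sketch below (shown with the Python SDK rather than my actual Apex code, and with a placeholder schema):

```python
# Rough Python equivalent of the failing Chat Completions call.
# The schema and names are placeholders, not my production code.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4.1",  # the same request works with "gpt-4o" or "o3-mini"
    messages=[{"role": "user", "content": "Summarize the case in one sentence."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "case_summary",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {"summary": {"type": "string"}},
                "required": ["summary"],
                "additionalProperties": False,
            },
        },
    },
)
print(response.choices[0].message.content)
```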

I know Structured Outputs came later for gpt-4o and o3-mini. But when OpenAI lists Structured Outputs as supported on the models documentation page for both gpt-4.1 and o4-mini, what is that actually referring to? Tools and function calling? Do we know when response_format = json_schema will be supported?

If this is because I’m using Chat Completions instead of the Responses API for my app’s ~100 API calls, I’m going to go for a walk and contemplate my life, then sit quietly and finally refactor to the Responses API like a good OpenAI developer.
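
If anyone is curious, my understanding is the Responses API version would look roughly like this (same placeholder schema as above; I haven’t actually migrated yet, so treat it as a sketch):

```python
# Rough Responses API equivalent of the same structured-output request.
# Placeholder schema; untested against gpt-4.1 here.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4.1",
    input="Summarize the case in one sentence.",
    text={
        "format": {
            "type": "json_schema",
            "name": "case_summary",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {"summary": {"type": "string"}},
                "required": ["summary"],
                "additionalProperties": False,
            },
        }
    },
)
print(response.output_text)
```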

2 Likes

Running into the same issues with gpt-4.1 and structured output. The same code works exactly as intended with o3-mini, but I was hoping to leverage the extended context of 4.1. I can only get it to work about 15% of the time.

1 Like

OpenAI Support confirmed that function calling is the only form of structured output currently supported by gpt-4.1.

Response formats using json_object / json_schema are “not compatible with these models.”

They then tried to link me to an internal OpenAI document on Notion :slight_smile: which I obviously did not have access to.
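
For what it’s worth, the function-calling route they described would presumably look something like this (the tool name and schema are just an example I made up, not anything from their docs):

```python
# Sketch of the function-calling alternative support described (example tool only).
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "record_summary",
            "description": "Record a one-sentence summary.",
            "strict": True,
            "parameters": {
                "type": "object",
                "properties": {"summary": {"type": "string"}},
                "required": ["summary"],
                "additionalProperties": False,
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Summarize the case in one sentence."}],
    tools=tools,
    # Force the model to call the tool so the output always matches the schema.
    tool_choice={"type": "function", "function": {"name": "record_summary"}},
)
print(response.choices[0].message.tool_calls[0].function.arguments)
```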

Edit, April 25th, 2025: gpt-4.1 and o4-mini both have all the structured output options you would expect in the Playground, so OpenAI Support (an AI agent) was wrong. Looks like there are quite a few inconsistencies.

I’m getting both to work most of the time, but o4-mini fails consistently on one schema (it returns null). I switched back to o3-mini, which is good enough for this project. However, I am concerned that 4.1 will fail occasionally, since in the places where I switched to it, I did so specifically for the large context.

Same here, had to switch back to o3-mini from o4-mini.

I’m a Japanese speaker using the gpt-4.1-mini-2025-04-14 model and attempting to generate Structured Outputs with:

```json
{
  "response_format": {
    "type": "json_schema",
    "json_schema": { ... }
  }
}
```

However, I’m repeatedly encountering the following error:

```json
{
  "error": {
    "message": "Invalid parameter: 'response_format' of type 'json_schema' is not supported with this model.",
    "type": "invalid_request_error",
    "code": null
  }
}
```

Strangely, this only happens when I submit Japanese prompts—English prompts work flawlessly.
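
My minimal repro is essentially the same schema with only the prompt language swapped, something like this sketch (placeholder schema, not my production code):

```python
# Minimal A/B repro sketch: identical schema, only the prompt language changes.
from openai import OpenAI

client = OpenAI()

SCHEMA = {
    "type": "json_schema",
    "json_schema": {
        "name": "reply",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {"answer": {"type": "string"}},
            "required": ["answer"],
            "additionalProperties": False,
        },
    },
}

for prompt in [
    "Answer in one word: what color is the sky?",   # English: works
    "一言で答えてください。空は何色ですか？",              # Japanese: errors for me
]:
    try:
        resp = client.chat.completions.create(
            model="gpt-4.1-mini-2025-04-14",
            messages=[{"role": "user", "content": prompt}],
            response_format=SCHEMA,
        )
        print(prompt, "->", resp.choices[0].message.content)
    except Exception as e:
        print(prompt, "-> error:", e)
```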

Recently, several community threads have highlighted character-encoding troubles across the GPT-4.1 family, especially when multi-byte languages like Japanese or Chinese are used.
On Reddit, users report that while gpt-4.1-mini struggles with Structured Outputs, both gpt-4o-mini and gpt-4.1-nano handle them correctly, suggesting an instability unique to gpt-4.1-mini. Furthermore, .NET client libraries on GitHub show numerous “unsupported response_format=json_schema” errors, and Azure’s SDK often returns HTTP 400 stating “Supported values are: ‘json_object’ and ‘text’.”

Even on llama.cpp–based servers, json_object works but json_schema fails outright. Given this background, my leading hypothesis is that the mini-series models, which automatically apply Function Calling’s strict:true mode, run into a schema-validation conflict when handling multi-byte characters.

Indeed, reports indicate that models like o1-mini and o3-mini auto-enable strict:true and sometimes mishandle optional parameters, leading to schema mismatches. A recent LinkedIn technical article also warned of potential instability in non-English structured outputs (e.g., Arabic) on gpt-4.1-mini.
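
(For context on the “optional parameters” point: with strict:true, every property has to be listed in required, so an optional field ends up declared as nullable instead. A minimal example, with made-up field names:)

```python
# How an "optional" field has to be declared under strict structured outputs:
# every property appears in "required", and optionality is expressed by
# allowing null in the type.
strict_schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "furigana": {"type": ["string", "null"]},  # effectively optional
    },
    "required": ["title", "furigana"],
    "additionalProperties": False,
}
```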

2 Likes

But gpt-4.1 and o4-mini are listed as supported under Structured Outputs?

https://platform.openai.com/docs/models/gpt-4.1
https://platform.openai.com/docs/models/o4-mini

See if you can get them to give you a base64 link to the model weights :rofl:

The key to programming a good OpenAI support bot, apparently, is that it takes no action and says things are not possible, to discourage any further interaction.