Chat Completion responses suddenly returning malformed or inconsistent JSON

Starting this morning, the Chat Completion API has begun returning malformed or inconsistent JSON.

I am using gpt-4o-2024-11-20 with JSON-style responses. My prompts have not changed, but the model is now intermittently returning JSON objects with:

  • Missing dictionary keys

  • Keys placed at the wrong level

  • Keys spelled differently between requests

  • A different structure from the one it was reliably producing until now

The specific issue varies by request, and sometimes the response comes back perfectly fine.

Expected output format (what has always been returned):

{
    "object": {
        "key1": "REDACTED",
        "key2": "REDACTED",
        "key3": "REDACTED"
    }
}

Example of the malformed output (starting today):

{
    "key2": "REDACTED",
    "object": {
        "key1": "REDACTED"
    },
    "key3": "REDACTED"
}

In other cases, the keys are spelled differently or appear in unexpected positions.

Has something recently changed on the model or API side that could cause previously consistent JSON structures to become unstable? And is there a known workaround or path to resolution? Anyone else experiencing this issue?

This is a production workflow that has been stable for a long time (almost a year!), so any guidance would be appreciated.

THANK YOU!


Hi and welcome back!

From your description it sounds like you are using JSON mode rather than structured outputs?
JSON mode means setting response_format to { "type": "json_object" }, which only guarantees syntactically valid JSON. Structured outputs uses { "type": "json_schema" } with "strict": true, which additionally constrains the output to an exact schema.

If so, that would not explain why you are suddenly observing different behavior, but switching to structured outputs could be a straightforward fix.
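To make the difference concrete, here is a minimal sketch of the two response_format payloads, using a hypothetical schema modeled on the (redacted) structure from the original post. The schema name and string types are assumptions; adjust them to the real payload.

```python
# JSON mode: the model returns valid JSON, but nothing pins the key
# names, nesting, or spelling from one request to the next.
json_mode_format = {"type": "json_object"}

# Structured outputs: the model is constrained to this exact schema,
# so missing, misplaced, or misspelled keys cannot occur.
# (Schema name "redacted_payload" and string value types are assumptions.)
structured_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "redacted_payload",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "object": {
                    "type": "object",
                    "properties": {
                        "key1": {"type": "string"},
                        "key2": {"type": "string"},
                        "key3": {"type": "string"},
                    },
                    "required": ["key1", "key2", "key3"],
                    "additionalProperties": False,
                },
            },
            "required": ["object"],
            "additionalProperties": False,
        },
    },
}

# Either dict is passed the same way, e.g.:
# client.chat.completions.create(model=..., messages=...,
#                                response_format=structured_format)
```

With strict mode, a response shaped like the malformed example above is not something the model can emit at all.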


You are correct - this would NOT be an explanation.

Running the same model with the same inputs for a year and then seeing the behavior change == OpenAI changed something on the model or the endpoint.


This is a real bug.
I have seen reliable {"type": "json_object"} responses with a 0% failure rate for over a year, but around Nov 20th this stopped working reliably. It now returns truncated outputs and invalid JSON.

Using the Python openai client's chat.completions.create via AzureOpenAI (or the async client, for that matter).