GPT-5.2 doesn't work with the Response API. We're using the default model instead

We have an application extracting data from PDF files using the Responses API and gpt-5.2.

We have a benchmark test on a specific pdf file, which gives ~1 error in 300 runs. Today:

  • we suddenly had 11 errors in 300 runs
  • and for a short time we saw the following notification on the OpenAI platform page: "GPT-5.2 doesn’t work with the Response API. We’re using the default model instead."

Coincidence or not?
I can’t find anything about this specific notification on this forum or elsewhere online.

11 errors in 300 runs is more like what we see with older OpenAI models.
Is it possible that, as the notification suggests, OpenAI occasionally uses a different/older model behind the scenes?
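One way to check this is to compare the model you requested against the `model` field that the Responses API echoes back on every response object. A minimal sketch (the helper name is ours, and the snapshot-suffix assumption is illustrative; only the comparison matters):

```python
def model_was_substituted(requested: str, served: str) -> bool:
    """Return True if the model that served the request differs from the one requested.

    The API may report a dated snapshot (e.g. a "-YYYY-MM-DD" suffix appended
    to the requested name), so a served name that merely extends the requested
    name is treated as the same model.
    """
    return not served.startswith(requested)


# In a real run, `served` would come from the response, roughly:
#   resp = client.responses.create(model="gpt-5.2", input=...)
#   if model_was_substituted("gpt-5.2", resp.model):
#       log.warning("request served by %s instead of gpt-5.2", resp.model)
```

Logging `resp.model` alongside each benchmark run would show directly whether the error spike coincides with a different model serving the requests.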


Have you verified your identity? I had nearly the same error message in AgentKit.

Hilariously, it refused gpt-5.2 by substituting it with… gpt-5.2 :rofl:

And it works! Glory be to vibe-coding!


…we saw the following notification on the openai platform page

So, you are using the Playground and not the Responses API? If you are calling the Responses API directly, what error code are you receiving?

Just now tested a PDF analysis with no issues:
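Roughly along these lines (illustrative only: the file ID and prompt are placeholders, and the built request would be passed to `client.responses.create(**request)` with a PDF previously uploaded via the Files API):

```python
def build_pdf_extraction_request(file_id: str, prompt: str) -> dict:
    """Build the kwargs for a Responses API call that attaches a PDF.

    The model is pinned explicitly so any silent substitution would be
    visible by comparing it against the `model` field of the response.
    """
    return {
        "model": "gpt-5.2",
        "input": [
            {
                "role": "user",
                "content": [
                    # PDF uploaded beforehand with purpose="user_data"
                    {"type": "input_file", "file_id": file_id},
                    {"type": "input_text", "text": prompt},
                ],
            }
        ],
    }


# request = build_pdf_extraction_request("file-abc123", "Extract the invoice total.")
# resp = client.responses.create(**request)
```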