We have an application extracting data from PDF files using the Responses API and gpt-5.2.
We run a benchmark test on a specific PDF file, which typically gives ~1 error in 300 runs. Today:
- we suddenly had 11 errors in 300 runs
- and for a short time we saw the following notification on the OpenAI platform page: "GPT-5.2 doesn't work with the Responses API. We're using the default model instead."
Coincidence or not?
I can't find anything about this specific notification on this forum or online.
11 errors in 300 runs is more like what we see with older OpenAI models.
Is it possible that, as the notification suggests, OpenAI occasionally uses a different/older model behind the scenes?
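One way to check this directly: the response object returned by the API includes a `model` field reporting which model actually served the request, so you could log it on every benchmark run and compare it against what you requested. Below is a minimal sketch; the `model_mismatch` helper is hypothetical (not part of any SDK), and it assumes the served model id may carry a snapshot suffix (e.g. a date) appended to the base name.

```python
def model_mismatch(requested: str, served: str) -> bool:
    """Return True if the model that served the request differs from
    the one requested. Assumes served ids may have a snapshot suffix
    (e.g. "gpt-5.2-2025-01-01" for a requested "gpt-5.2")."""
    return not served.startswith(requested)


# Sketch of how you might use it in the benchmark loop (requires the
# openai package, an API key, and network access, so shown as comments):
#
# from openai import OpenAI
# client = OpenAI()
# resp = client.responses.create(model="gpt-5.2", input=...)
# if model_mismatch("gpt-5.2", resp.model):
#     print(f"Requested gpt-5.2 but got {resp.model}")
```

If the logged `model` value ever differs from the requested one on the high-error runs, that would be strong evidence for the fallback behavior the notification describes.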
