We are using `gpt-4o` and `gpt-4o-mini` fine-tuned models in production. They were working fine as of this morning (02/19/26), but starting around 20:50 UTC every chat completion request to those fine-tuned models began failing consistently. The base models themselves are working as expected.
Here is the error message:
```
Chat completion failed: Error code: 400 - {'error': {'message': 'There was an issue with your request. Please check your inputs and try again', 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
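For anyone trying to reproduce this, here is a minimal text-only repro sketch. It assumes the `openai` Python SDK (v1.x); the fine-tune ID shown is a hypothetical placeholder, substitute your own model name.

```python
import os

# Hypothetical fine-tune ID -- replace with your actual model name.
MODEL = "ft:gpt-4o-mini-2024-07-18:my-org::abc123"

def build_messages(text: str) -> list:
    """Plain text-only payload -- the 400s occur even without images."""
    return [{"role": "user", "content": text}]

def main() -> None:
    # Imported here so the payload helper above is usable without the SDK.
    from openai import OpenAI, BadRequestError  # requires `pip install openai`

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    try:
        resp = client.chat.completions.create(
            model=MODEL,
            messages=build_messages("ping"),
        )
        print(resp.choices[0].message.content)
    except BadRequestError as e:
        # The 400s above carry no param/code, so log the whole error body.
        print(f"400 from {MODEL}: {e.body}")

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    main()
```

Running this against a base model vs. the fine-tuned ID should show whether the failure is specific to the fine-tune, matching what we see in production.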
I ran a variety of API calls across the gpt-4o and gpt-4.1 model sizes with text-only input. As of right now, those models are working.
It took running ft:gpt-3.5-turbo-0613 on the Responses API (which was working a bit ago) to find where the API fails even on text:
```
API Error
Error: An error occurred while processing your request. You can retry your request, or contact us through our help center at help.openai.com if the error persists. Please include the request ID req_87b20594f8f044e69505f04df4f14bb8 in your message.
```
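Since support asks for the request ID, here is a sketch of capturing it from an SDK error on a Responses API call so it can go straight into a ticket. Assumes the `openai` Python SDK (v1.x), where status errors expose a `request_id` mirroring the `x-request-id` header; the model ID is a hypothetical placeholder.

```python
import os

def format_report(model: str, err_message: str, request_id) -> str:
    """One-line summary suitable for pasting into a help-center ticket."""
    return f"model={model} request_id={request_id} error={err_message}"

def main() -> None:
    # Imported here so the formatting helper is usable without the SDK.
    from openai import OpenAI, APIStatusError  # requires `pip install openai`

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    model = "ft:gpt-3.5-turbo-0613:my-org::abc123"  # hypothetical fine-tune ID
    try:
        resp = client.responses.create(model=model, input="ping")
        print(resp.output_text)
    except APIStatusError as e:
        # e.request_id mirrors the x-request-id response header.
        print(format_report(model, e.message, e.request_id))

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    main()
```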
Then Responses API inference on ft:gpt-4.1-mini goes into a nonstop "assistant" loop with image input. Image input to ft:gpt-4o-mini, which had the same symptom, seems fixed now.