Edited by moderator:
The comments below have been posted as a reply to a previously resolved incident:
Original post:
Still happening with gpt-4.1-mini
Despite the linked OpenAI status update stating this was resolved, I'm running into this from around 17:00 UTC today, only with 4.1-mini.
I've been experiencing this for the past few weeks. At some point it seemed to be solved, but I'm getting hammered with this error today.
I too am running into this today. Unable to use gpt-4.1-mini.
Same here! What is happening?
I'm currently getting this response every time I call client.responses.create with gpt-4.1-mini: {"error": {"message": "Model not found gpt-4.1-mini", "type": "invalid_request_error", "param": "model", "code": None}}
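For context, here is a stripped-down version of the call that triggers it (a sketch with the Python SDK; the API key comes from the OPENAI_API_KEY environment variable and the prompt is a placeholder):

from openai import OpenAI, OpenAIError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

try:
    # Minimal Responses API call that reproduces the failure
    resp = client.responses.create(
        model="gpt-4.1-mini",
        input="Hello",  # placeholder prompt
    )
    print(resp.output_text)
except OpenAIError as exc:
    # Today this surfaces as: Model not found gpt-4.1-mini (invalid_request_error)
    print("Request failed:", exc)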
Experiencing the same issue.
Still having the same issue.
Any prompt sent to the assistants returns the thread.run.failed event with "Sorry, something went wrong."
We're urgently looking into this now. Will keep you posted here and we're working on an update to the status page.
I had this issue a few weeks ago using the Response API. I was executing a batch of requests per minute.
I switched to the Completion API and it worked again.
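Roughly what the switch looked like on my side (a sketch; I'm assuming the Chat Completions endpoint is what's meant by "Completion API", and the prompt is a placeholder):

from openai import OpenAI

client = OpenAI()

# Same model, but via chat.completions instead of responses
resp = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "Hello"}],  # placeholder prompt
)
print(resp.choices[0].message.content)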
Why hasn't https://status.openai.com/ been updated? The 4.1 mini model still doesn't work with assistants!
Still happening to us, any timeline for when it will be resolved?
Also getting this issue. I get the error when I use "gpt-4.1-mini" as the model. If I use "gpt-4.1-mini-2025-04-14" it works.
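In code the only difference is the model string, i.e. pinning the dated snapshot instead of the alias (a minimal sketch; the prompt is a placeholder):

from openai import OpenAI

client = OpenAI()

# The alias fails today for us:
#   client.responses.create(model="gpt-4.1-mini", input="Hello")
# Pinning the dated snapshot works:
resp = client.responses.create(model="gpt-4.1-mini-2025-04-14", input="Hello")
print(resp.output_text)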
There is a bug with the API. I am getting this error:
{
  "error": {
    "message": "Model not found gpt-4.1-mini",
    "type": "invalid_request_error",
    "param": "model",
    "code": null
  }
}
Hello, the issue reported above still persists; therefore, the status page should be updated to reflect the current condition as well as an estimated timeline for a solution.
Still happening on our end as well. I'm noticing it only happens for requests with multiple user messages; not sure why that would be the case.
Dozens of other requests to 4.1-mini work fine, and they only have one user message.
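In case it helps narrow it down, the failing requests look roughly like this (a sketch with the Python SDK; the actual message contents differ):

from openai import OpenAI

client = OpenAI()

# A single user message succeeds, but a request like this one,
# with multiple user messages in the input list, hits the error.
resp = client.responses.create(
    model="gpt-4.1-mini",
    input=[
        {"role": "user", "content": "First user message"},
        {"role": "user", "content": "Second user message"},
    ],
)
print(resp.output_text)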
Resolved as of ~15 min ago. Sorry again for the issues.
I solved my problem by using gpt-4.1-mini-2025-04-14 instead of gpt-4.1-mini a few weeks ago, BUT that also started to fail intermittently.
Hey @aeum3893,
We had a transient incident today (July 21, 2025, starting around ~9:20 AM PT) where calls to gpt-4.1-mini and the dated variant gpt-4.1-mini-2025-04-14 returned "Model not found". Engineering reassigned engines and the errors subsided.
If you're still running into issues, could you let us know? If it's still failing, try a quick retry with the same payload, and confirm whether gpt-4.1 (non-mini) works for you.
Thank you!
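If you want to script that check, something along these lines works (a rough sketch; create_with_fallback, the retry count, and the payload are made up for illustration):

import time

from openai import OpenAI, OpenAIError

client = OpenAI()

def create_with_fallback(payload, retries=2, delay=2.0):
    # Retry the same payload against gpt-4.1-mini, then fall back to gpt-4.1 (non-mini)
    for _ in range(retries):
        try:
            return client.responses.create(model="gpt-4.1-mini", **payload)
        except OpenAIError as exc:
            if "Model not found" not in str(exc):
                raise  # unrelated error, don't mask it
            time.sleep(delay)
    return client.responses.create(model="gpt-4.1", **payload)

result = create_with_fallback({"input": "Hello"})  # placeholder payload
print(result.output_text)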
Switched over to gpt-4.1 and it works great. But the gpt-4.1 model is expensive, so I switched back to gpt-4.1-mini now that the issue discussed in this thread seems to be solved. It works, but now it generates responses that are somewhat off. (Currently pairing the model with an OpenAI Vector Store and using the File Search built-in tool.)
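For reference, the setup is roughly this (a sketch assuming the Responses API file_search tool; vs_... stands in for the real vector store ID and the question is a placeholder):

from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    model="gpt-4.1-mini",
    input="What does the uploaded manual say about rate limits?",  # placeholder question
    tools=[{
        "type": "file_search",
        "vector_store_ids": ["vs_..."],  # placeholder vector store ID
    }],
)
print(resp.output_text)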