Getting server_error. API Down? (Assistants + gpt-4o-mini: server error)

Hello, I’m receiving the following error in virtually all of my requests:

Unexpected run status failed. Full run info: { "id": "run_F5rHERgb7rKF339dCcvbrooR", "object": "thread.run", "created_at": 1761849673, "assistant_id": "asst_pGNvQwex0bsgpOUaQuN7AC2J", "thread_id": "thread_Ab6yYqADi5cnIk9mdLjiUIYe", "status": "failed", "started_at": 1761849675, "expires_at": null, "cancelled_at": null, "failed_at": 1761849675, "completed_at": null, "required_action": null, "last_error": { "code": "server_error", "message": "Sorry, something went wrong." }, "model": "gpt-4o-mini", "instructions": "Se c"
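
For illustration, here is roughly where the error surfaces: the HTTP calls themselves succeed, but the run comes back with status "failed" and the server_error in last_error. A minimal sketch of a polling loop (TypeScript, openai-node v4-style signatures; the IDs are placeholders, not my exact code):

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Create a run on an existing thread and poll until it reaches a terminal status.
async function runAndReport(threadId: string, assistantId: string) {
  let run = await client.beta.threads.runs.create(threadId, {
    assistant_id: assistantId,
  });

  while (run.status === "queued" || run.status === "in_progress") {
    await new Promise((resolve) => setTimeout(resolve, 1000));
    run = await client.beta.threads.runs.retrieve(threadId, run.id);
  }

  if (run.status === "failed") {
    // This is where the failure shows up:
    // run.last_error -> { code: "server_error", message: "Sorry, something went wrong." }
    console.error("Run failed:", run.last_error);
  }
  return run;
}
```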


Same here, 0% availability.


Gabriel, from what I’ve seen, it’s the 4o-mini models that aren’t working. Do you use them as well?


I just tested this: I switched to 4.1-mini and it worked. Everything on 4o-mini is returning "Failed due to unexpected execution status".


@OpenAI_Support help us please


Changing to 4o is working for me here.

I tested 4.1, 4o, and 4o-mini, and none of them worked. Does anyone know how to fix this?

last_error: { code: 'server_error', message: 'Sorry, something went wrong.' },

This is the 3rd or 4th time this has happened in the last couple of months. Last time it took 8 hours to fix, but getting OpenAI to even become aware of the issue is the first hurdle. The time before that, it took 24 hours for them to notice and fix it.
Beware of switching from 4o-mini to 4o, as the billing is 15X higher than 4o-mini.


I switched to 4.1 mini and it’s working for the tasks we needed. The cost is 3X higher but at least it’s working. Planning to switch back to 4o mini when the issue resolves.
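
In case it helps anyone doing the same workaround, this is roughly all it takes with the openai Node SDK; the IDs are placeholders, and you can also override the model on a single run instead of updating the assistant:

```ts
import OpenAI from "openai";

const client = new OpenAI();
const ASSISTANT_ID = "asst_..."; // placeholder, not a real ID

// Option 1: point the assistant at gpt-4.1-mini until the outage is over,
// then update it back to gpt-4o-mini later.
await client.beta.assistants.update(ASSISTANT_ID, { model: "gpt-4.1-mini" });

// Option 2: keep the assistant on gpt-4o-mini and override the model per run.
const run = await client.beta.threads.runs.create("thread_...", {
  assistant_id: ASSISTANT_ID,
  model: "gpt-4.1-mini", // applies to this run only
});
```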

It’s probably back to normal… I’m not sure though.

It might be an error when calling threads.runs.create or a similar interface.
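
If the failures are intermittent rather than a full outage, one blunt mitigation is to retry the run whenever last_error.code is "server_error". A rough sketch, assuming the createAndPoll helper from recent openai-node versions; the retry count and backoff are just examples:

```ts
import OpenAI from "openai";

const client = new OpenAI();

// Retry the whole run a few times when it fails with a transient server_error.
// This only helps with intermittent failures, not a sustained outage.
async function createRunWithRetry(
  threadId: string,
  assistantId: string,
  maxAttempts = 3,
) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const run = await client.beta.threads.runs.createAndPoll(threadId, {
      assistant_id: assistantId,
    });

    if (run.status === "failed" && run.last_error?.code === "server_error") {
      console.warn(`Attempt ${attempt} failed with server_error, retrying...`);
      await new Promise((resolve) => setTimeout(resolve, 2000 * attempt)); // crude backoff
      continue;
    }

    // completed, requires_action, cancelled, expired, or a non-retryable failure
    return run;
  }
  throw new Error(`Run still failing with server_error after ${maxAttempts} attempts`);
}
```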

Does anyone know what caused this? I had the same problem, and I’m worried about using 4o-mini for my chat agents.