The API responds with { "threadId": "thread_{id}", "messageId": "msg_{id}" }
without any assistant response.
Alternative flow
We also have an alternative communication channel via WhatsApp, for which we call the Assistants API directly. When an image is attached, the only response we get is the following error: server_error: Sorry, something went wrong.
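For reference, here is a minimal sketch of the kind of call that fails, following the Assistants API message format. The `client` is assumed to be an `openai.OpenAI` instance (not constructed here), and the assistant ID and image URL are placeholders:

```python
def build_image_message(text: str, image_url: str) -> list:
    """Build the content array for a user message with an attached image,
    in the shape the Assistants API messages endpoint expects."""
    return [
        {"type": "text", "text": text},
        {"type": "image_url", "image_url": {"url": image_url}},
    ]

def run_with_image(client, assistant_id: str, text: str, image_url: str):
    """client: an openai.OpenAI instance. Creates a thread, posts a
    text-plus-image user message, and starts a run."""
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id,
        role="user",
        content=build_image_message(text, image_url),
    )
    # This is the step that fails with "server_error" once an image is attached;
    # the same payload with only the text part completes normally.
    return client.beta.threads.runs.create_and_poll(
        thread_id=thread.id, assistant_id=assistant_id
    )
```

With a text-only content array the run completes; adding the `image_url` part is what triggers the error.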
We are getting the same issue. Text works fine, but if an image is attached to the request we get the "something went wrong" error.
This happens both in the playground and via API call. I am not sure how to get help for this issue; it's been almost a 24-hour outage and we've had to implement a backup method.
We already had a backup method partially in place for cases when OpenAI went down, for service-continuity reasons. We just extended it, so we're now operating exclusively on our backup method. We plan to expand it further so we're covered even if multiple providers go down.
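The backup method described above can be sketched as a simple provider-fallback wrapper: try the primary provider, and on any API error move to the next one. The provider callables here are placeholders, not real client code:

```python
def with_fallback(primary, *backups):
    """Return a callable that tries each provider in order until one succeeds.

    Each provider is any callable taking a prompt and returning a response
    string; in real code these would wrap the respective vendor SDKs, and the
    except clause would catch the SDK's specific API error class rather
    than bare Exception.
    """
    providers = (primary, *backups)

    def call(prompt: str) -> str:
        last_error = None
        for provider in providers:
            try:
                return provider(prompt)
            except Exception as exc:
                last_error = exc  # remember why this provider failed, try the next
        raise RuntimeError("all providers failed") from last_error

    return call
```

Chaining more than two providers this way is what covers the "multiple providers down" case: the request only fails outright when every provider in the chain has failed.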
I really liked using Assistants here, and am sad to see them go. But since Assistants are being phased out anyway, it might be time to move on sooner rather than later.
Hopefully someone from OpenAI will respond here and fix the issue, but I'm guessing it's not a priority.
We are getting the same errors, via the API and when testing in the developer playground. Assistants are supposed to stay online for a few more months, so this needs to be fixed.
I'd assume other companies like ours are still planning and scheduling their migration to the replacement APIs.
I’m experiencing the same issue in my app, and I also received a message from the OpenAI incident system about “High errors with image generation.”
This morning, I received another message stating that the issue was “resolved,” but it’s still not resolved for me.
Runs always fail when there are images in the message list.
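Since runs only fail when images are present in the message list, one degraded-mode workaround (an assumption on my part, not an official fix) is to strip the image parts from the content before creating the run, so the text at least gets through. The content shape follows the Assistants API message format:

```python
def strip_image_parts(content: list) -> list:
    """Remove image parts from an Assistants API message content array,
    keeping the text parts. Loses the image information, but lets the
    run complete instead of failing with server_error."""
    return [
        part for part in content
        if part.get("type") not in ("image_url", "image_file")
    ]
```

This obviously loses whatever the image was supposed to convey, so it's only a stopgap until the underlying error is fixed.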
So this has been down for a few days now. Has anyone else had this kind of experience with the OpenAI API being unreliable (issues not getting fixed)? It makes me question how future developments will play out when they update things, etc.
Hi @vb Any news? I just went through 3 days of support saying the issue was probably on my side, and now they say they'll escalate it to the engineering team, but it will probably take a couple of days for them to respond and request more information from me (???).
It seems like no one at OpenAI is investigating this at all. We need some assurance that someone is really looking into this issue.
Hello, we have been facing this issue for days now, and I would like to ask whether anyone has found a workaround using other model versions while gpt-4.1 has these issues.
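One way to structure the model-version workaround being asked about is to retry the same request across a list of candidate models until one succeeds. This is a sketch under assumptions: `call_model` is a placeholder for whatever function issues the actual API request, and the candidate list (e.g. falling back from gpt-4.1 to another model) is illustrative:

```python
def first_working_model(call_model, models, prompt):
    """Try each model name in order; return (model, response) for the first
    one that succeeds. call_model(model=..., prompt=...) is assumed to raise
    on API errors such as the server_error seen with gpt-4.1."""
    last_error = None
    for model in models:
        try:
            return model, call_model(model=model, prompt=prompt)
        except Exception as exc:
            last_error = exc  # this model version failed, try the next
    raise RuntimeError("no model succeeded") from last_error
```

Note this only helps if the failure really is specific to certain model versions; if the Assistants image pipeline itself is broken, every model in the list will fail the same way.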