For the last couple of days, calls to the Assistants API have been failing with “Sorry, something went wrong” in the following scenario. This was working fine until a couple of days ago. It still works with GPT-3.5, but not with the GPT-4 and GPT-4o models.
The assistant generates a chart using the code interpreter in response to a user prompt. The first run completes fine.
In the same conversation, if the user asks a follow-up question that requires generating another chart with the code interpreter, the run fails with the above-mentioned error. This is the last_error from the run response object:
```json
"last_error": {
  "code": "server_error",
  "message": "Sorry, something went wrong."
}
```
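For anyone trying to reproduce this, here is roughly how we surface that error, as a minimal sketch using the OpenAI Python SDK; the thread and assistant IDs are placeholders, and the polling loop is simplified:

```python
import time
from openai import OpenAI

client = OpenAI()

# Placeholder IDs - substitute your own thread and assistant.
THREAD_ID = "thread_..."
ASSISTANT_ID = "asst_..."

# Kick off the follow-up run that asks for the chart again.
run = client.beta.threads.runs.create(
    thread_id=THREAD_ID,
    assistant_id=ASSISTANT_ID,
)

# Poll until the run reaches a terminal state.
while run.status in ("queued", "in_progress", "cancelling"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=THREAD_ID, run_id=run.id)

if run.status == "failed" and run.last_error is not None:
    # On the failing follow-up run this prints:
    # server_error: Sorry, something went wrong.
    print(f"{run.last_error.code}: {run.last_error.message}")
```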
Fails with models: GPT-4-Turbo, GPT-4o.
Works with: GPT-3.5-Turbo-0125.
We contacted the OpenAI team about this issue, and they informed us that the run failed because a required file had been deleted. We had been deleting the image files generated by the code interpreter after successfully downloading them, and it appears that this now causes the run to fail.
Once I removed the file-deletion step from the code, it works with both GPT-4 and GPT-4o.
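In case it helps anyone else, this is roughly what our download step looks like now; a sketch with the OpenAI Python SDK, with the message handling simplified and the thread ID and output path as placeholders. The only change was dropping the client.files.delete() call after the download:

```python
from openai import OpenAI

client = OpenAI()

THREAD_ID = "thread_..."  # placeholder

# Grab the messages in the thread and pull out any generated images.
messages = client.beta.threads.messages.list(thread_id=THREAD_ID)
for message in messages.data:
    for block in message.content:
        if block.type == "image_file":
            file_id = block.image_file.file_id
            # Download the chart produced by the code interpreter.
            image_bytes = client.files.content(file_id).read()
            with open(f"{file_id}.png", "wb") as f:
                f.write(image_bytes)
            # Previously we called client.files.delete(file_id) here.
            # Removing that call is what fixed the follow-up runs on
            # GPT-4 / GPT-4o; the file apparently has to stay available.
```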
I am having the same problem with GPT-4o. All I did was update a thread-specific vector store with some files for the assistant to use, nothing else, and I am also getting the infamous “Sorry, something went wrong” error. Any success in troubleshooting this? Thanks.
We have also been facing this error when trying to plot any chart using the code interpreter within the Assistants API. When we contacted OpenAI support, they said it was a storage issue, which it clearly is not.