Error code: 400 - {'error': {'message': 'Invalid model: `gpt-3.5-turbo-1106` does not support image message content types.', 'type': 'invalid_request_error', 'param': 'model', 'code': 'invalid_type'}}
The errors seem to occur randomly and are not linked to any specific prompts in my evaluation set.
The assistant is created with the v2 API, has the code_interpreter and file_search tools, and an attached vector store containing a few small files. It also has a few functions connected to it that pull data from a database for the bot to interpret and display.
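For reference, the setup described above might look roughly like the sketch below with the Python SDK's v2 Assistants API. The model, the `get_intervention_results` function, and the `vs_placeholder` vector store ID are illustrative placeholders, not the poster's actual configuration.

```python
# Sketch of the assistant configuration described above (all
# identifiers are placeholders, not the poster's real values).
assistant_params = {
    "model": "gpt-3.5-turbo-1106",
    "tools": [
        {"type": "code_interpreter"},
        {"type": "file_search"},
        # One of the custom functions that pull data from a database
        # for the bot to interpret and display (hypothetical example).
        {
            "type": "function",
            "function": {
                "name": "get_intervention_results",
                "description": "Fetch a user's intervention results.",
                "parameters": {
                    "type": "object",
                    "properties": {"date": {"type": "string"}},
                    "required": ["date"],
                },
            },
        },
    ],
    # Vector store with a few small files, attached for file_search.
    "tool_resources": {
        "file_search": {"vector_store_ids": ["vs_placeholder"]}
    },
}

# With a configured client, creation would then be:
# from openai import OpenAI
# client = OpenAI()
# assistant = client.beta.assistants.create(**assistant_params)
```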
The error seems to come from innocuous messages, such as “What is my name?” or “what were my intervention results from yesterday?” that do not imply creating an image.
Queries that do imply creating an image, such as “Create a graph of my intervention percent success over the past month”, actually work and yield an ImageFileContentBlock with the graph image as a file_id, plus a TextContentBlock. These are generated with no errors.
I’ve tried other gpt-3.5 models, with similar results. I have not tried gpt-3.5-turbo-16k*, and gpt-4 is not in scope at this time.
OpenAI appears to have modified something on their end; I’ve been seeing this for the last 24 hours. GPT-3.5 keeps throwing the error if it tries to plot, or responds to a user who previously received a plotted graph.
I’ve done a bit more testing.
I’ve bumped my openai package version to 1.23.6, to see if that had a fix for the error. It does not.
It looks like I get this error on the next message sent after the model correctly creates and returns an ImageFileContentBlock and file.
The very next query on the same thread will return the error I see above, saying that the model does not support image message content types.
For clarity, this is repeated messages on the same thread_id, not creating new threads. This is important for my use case to retain context.
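Concretely, the failing sequence looks something like the sketch below (SDK calls are shown in comments for shape only, since they need a configured client; `has_image_block` is a hypothetical helper that detects when the thread has picked up an image content block, i.e. the state after which the next request fails):

```python
def has_image_block(message: dict) -> bool:
    """True once a listed thread message contains an image_file
    content block -- the state after which, per the reports above,
    the next request on the same thread returns the 400 error."""
    return any(block["type"] == "image_file" for block in message["content"])

# The failing sequence on one persistent thread would be roughly:
#
#   thread = client.beta.threads.create()
#   client.beta.threads.messages.create(
#       thread_id=thread.id, role="user",
#       content="Create a graph of my intervention percent success")
#   # ...run completes; the assistant reply now satisfies
#   # has_image_block, and the thread contains an image file...
#   client.beta.threads.messages.create(
#       thread_id=thread.id, role="user", content="What is my name?")
#   # ...the run for this message now fails with the 400
#   # invalid_type error quoted at the top of the thread.
```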
Resetting the thread_id works, but this loses user context.
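One way to soften the context loss, assuming the bug really is triggered by image blocks in the thread history, would be to rebuild a fresh thread from the text-only parts of the old one. This is a sketch of a hypothetical mitigation, not a confirmed fix; `text_only_history` is my own helper name, and it operates on messages in the API's content-block shape:

```python
def text_only_history(messages: list) -> list:
    """Keep only the text content blocks from listed thread messages,
    so a replacement thread carries the conversation context but no
    image_file blocks (which appear to trigger the 400 error)."""
    history = []
    for msg in messages:
        parts = [
            block["text"]["value"]
            for block in msg["content"]
            if block["type"] == "text"
        ]
        if parts:
            history.append({"role": msg["role"], "content": "\n".join(parts)})
    return history

# The result could then seed a new thread, e.g.:
# new_thread = client.beta.threads.create(messages=text_only_history(old_msgs))
```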
One possibility is that OpenAI decided it would be clever for vision-capable models to see the results of an image created in the mount point, so the image gets loaded into a message, without accounting for all the other models one might use.
Careful inspection of GPT-4 token counts, run steps, or its ability to “see” such an output could confirm whether this is the case.
I think retaining the thread_id is important in most use cases.
This is definitely a server-side bug on OpenAI’s end. Downgrading or upgrading any of the SDKs (Python/JS/cURL) exhibits the same behavior with the reproduction you defined.
I’m also getting this problem with the v2 version: if the thread contains an image generated by the Assistant, e.g. when using the code interpreter, the next user request in the thread throws this error.
Another post describes the same issue currently occurring in gpt-4. It sounds like it sometimes works with gpt-4-turbo-2024-04-09 but not with other gpt-4-turbo* versions.
One possible solution offered was to instruct the assistant to return images in SVG format, preventing the chatbot from erroring. This fix didn’t work for me.
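For completeness, the SVG workaround amounts to adding an instruction along these lines when creating or updating the assistant. The wording below is my own illustration, and as noted above it did not resolve the error for me:

```python
# Hypothetical instruction text for the SVG workaround; the exact
# wording is illustrative, and this did not fix the error for me.
svg_instructions = (
    "When you generate plots or charts with the code interpreter, "
    "save and return them as SVG files rather than PNG images."
)

# It would be applied via something like:
# client.beta.assistants.update(assistant_id, instructions=svg_instructions)
```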