GPT-4-Vision HTTP 400 Bad Request Error

Hi,
I already posted this question yesterday but unfortunately only received off-topic comments…
I am using the GPT-4 Vision Preview model for a research project.
After 2-3 messages that I send to the model and that run fine, I receive:

I keep running into HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 400 Bad Request" errors from the Python API on some GPT-4 Vision Preview queries.

This is the code:

response = self.client.chat.completions.create(
    messages=messages,
    model=self.model_name,
    max_tokens=4096,
    seed=42,
    temperature=0.1,
)

It worked fine until a couple of days ago without ever returning errors.
Now the error happens very inconsistently: some messages fail, others work. The messages payload is valid JSON every time and has the same number of tokens.

Any ideas on why this happens? Thanks a lot in advance!

How do you manage the messages? Are you sending base64 or URLs for the images? When does the error occur? After you send another query with an image? Or even with just a text query? It might help if you can give a typical conversation flow so that we can try to reproduce the error on our end.

Messages are sent base64-encoded together with text.
A system message, then a user message with text only, then user messages with text plus base64 images, all sent in one API call. It works fine for the first 1-2 iterations (same format for every request), then breaks with an HTTP 400.

A template would look like this:
[
  {
    "role": "system",
    "content": [
      {
        "type": "text",
        "text": ""
      }
    ]
  },
  {
    "role": "user",
    "content": [
      {
        "type": "text",
        "text": "<MY TEXT HERE>"
      }
    ]
  },
  {
    "role": "user",
    "content": [
      {
        "type": "text",
        "text": "<MY TEXT HERE>"
      },
      {
        "type": "image_url",
        "image_url": {
          "url": "data:image/jpeg;base64,",
          "detail": "high"
        }
      }
    ]
  },
  {
    "role": "user",
    "content": [
      {
        "type": "text",
        "text": ""
      },
      {
        "type": "image_url",
        "image_url": {
          "url": "data:image/jpeg;base64,",
          "detail": "high"
        }
      }
    ]
  },
  {
    "role": "user",
    "content": [
      {
        "type": "text",
        "text": ""
      },
      {
        "type": "image_url",
        "image_url": {
          "url": "data:image/jpeg;base64,",
          "detail": "high"
        }
      }
    ]
  }
]

I have a kind of dataloader that iterates over the conversations: each request is treated independently and sends the above messages to the API. It worked fine until a couple of days ago; now it stops after every 1-2 requests and I need to wait and restart again. …
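Roughly, each request is assembled like this before it is sent (a stripped-down sketch with placeholder file names and prompt strings, not the actual project code):

import base64

def encode_image(path):
    # Read a JPEG from disk and return it as a base64 string.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

def image_part(path):
    # One image_url content part in the data-URL form shown above.
    return {
        "type": "image_url",
        "image_url": {
            "url": "data:image/jpeg;base64," + encode_image(path),
            "detail": "high",
        },
    }

messages = [
    {"role": "system", "content": [{"type": "text", "text": "system prompt"}]},
    {"role": "user", "content": [{"type": "text", "text": "instructions"}]},
    {"role": "user", "content": [{"type": "text", "text": "example"}, image_part("example.jpg")]},
    {"role": "user", "content": [{"type": "text", "text": ""}, image_part("query.jpg")]},
]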

Let me confirm a few details: are you always sending it this way?

system: system
user: text
user: text + image
user: blank text + image
user: blank text + image

Or is each user message sent independently?


Everything is sent all at once: first the system message, then user text only with instructions, then text + images to give the model some example solutions, and then one final image with instructions. All sent in one request. It worked fine last week; since the weekend it only works once in a while, even though all messages are created the same way and are all valid JSON.

I tried your message pattern and tested it several times but I cannot recreate the error you are receiving. Is there more detail to the 400 bad request error?
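If nothing else is being logged, the exception raised by the Python SDK usually carries the server's explanation for the 400. A minimal sketch, assuming the v1 client, where a 400 surfaces as openai.BadRequestError (the messages list below is just a stand-in for your real payload):

import openai

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{"role": "user", "content": "hello"}]  # stand-in for your real payload

try:
    response = client.chat.completions.create(
        messages=messages,
        model="gpt-4-vision-preview",
        max_tokens=4096,
    )
except openai.BadRequestError as e:
    # The HTTP response attached to the exception normally spells out
    # the concrete reason for the 400 in its JSON body.
    print(e.status_code)
    print(e.response.text)
    raise

The response body is usually where the API says whether an image could not be decoded, the payload was too large, and so on.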

My request

[
  {
    role: 'system',
    content: 'You are an AI assistant specializing in fashion and image analysis.\n' +
      'Your primary function is to assist users with inquiries related to fashion images.\n' +
      'You can analyze images to identify clothing items, suggest similar items, provide fashion advice, and even predict upcoming trends based on the images provided.'
  },
  { role: 'user', content: [ [Object] ] },
  { role: 'user', content: [ [Object], [Object] ] },
  { role: 'user', content: [ [Object], [Object] ] },
  { role: 'user', content: [ [Object], [Object] ] }
]

Unfortunately there's no more info in the error message. I decreased the number of images that get sent per query and this prevents the error. Still confusing that it worked completely fine last week and that the error now only occurs on every 2nd or 3rd request …
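If payload size is the culprit, shrinking each image before encoding also keeps the request small without dropping images entirely. A sketch using Pillow (the 1024-pixel cap and JPEG quality are arbitrary choices, not documented limits):

import base64
import io

from PIL import Image

def encode_image_downscaled(path, max_side=1024, quality=85):
    # Resize so the longest side is at most max_side, re-encode as JPEG,
    # then base64-encode, which keeps the request body much smaller.
    img = Image.open(path).convert("RGB")
    img.thumbnail((max_side, max_side))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return base64.b64encode(buf.getvalue()).decode("utf-8")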