Batching with GPT-4 Vision?

Hi all,

Like many of you, I'm running into the 100 RPD limit on the Vision preview API. OpenAI suggests batching to make better use of those 100 requests, but I can't find any example of how to batch this type of request (the example here doesn't seem relevant).

I’ve tried passing an array of messages, but in that case only the last one is processed.

Has anybody managed to batch Vision Preview requests and, if so, can you give some pointers?

Thank you.

Hi @ab9c3fd6e8e04d7849a9

Welcome to the OpenAI dev forum.

Yes, you can “batch” images on gpt-4-vision by passing multiple images per message:

from openai import OpenAI

client = OpenAI()

# A single user message can carry a text part plus several image_url parts;
# the model sees all of the images together in one request.
response = client.chat.completions.create(
  model="gpt-4-vision-preview",
  messages=[
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What are in these images? Is there any difference between them?",
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
          },
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
          },
        },
      ],
    }
  ],
  max_tokens=300,
)
print(response.choices[0])

More in the docs.

Hi,

Many thanks for your response. This isn’t quite what I had in mind, though.

I have tried sending multiple images, as in your example, but that yields a single response (useful for comparing two or more images, as in your prompt). What I want is to combine many separate requests into one (i.e. send the equivalent of 10 requests and get 10 responses back). This seems to be what the batching advice on the rate-limiting page I linked to implies, but I haven't found a way to do it.

Perhaps I could craft a prompt that forces a single request with multiple images to return a separate answer per image (as JSON, along the lines of the sketch below), but that feels hacky and therefore less robust.
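
For illustration, roughly what I mean is below. This is only a sketch and untested; the placeholder URLs, the prompt wording and the JSON shape are my own assumptions:

import json
from openai import OpenAI

client = OpenAI()

# Placeholder URLs -- swap in your own images.
image_urls = [
    "https://example.com/photo-1.jpg",
    "https://example.com/photo-2.jpg",
]

# One request carrying several unrelated images; the prompt asks the model to
# answer per image as a JSON array so the single response can be split afterwards.
content = [
    {
        "type": "text",
        "text": (
            "Describe each of the following images independently. "
            "Reply with only a JSON array of strings, one per image, in order."
        ),
    }
]
content += [{"type": "image_url", "image_url": {"url": u}} for u in image_urls]

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{"role": "user", "content": content}],
    max_tokens=600,
)

# Brittle: the model may wrap the JSON in prose or code fences.
answers = json.loads(response.choices[0].message.content)
for url, answer in zip(image_urls, answers):
    print(url, "->", answer)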

Thanks again for your advice.

I believe @sps has shown the only way currently covered in the API docs: “batching” means sending one message with multiple images.

If you’re able to get consistent output in a different way though, let us know!

Hi Trenton,

Thanks for confirming, and to @sps for the original comment. It’s good to know I’m not missing an obvious solution to the problem.

Will indeed let you know if I manage to make a workable solution from the multiple-images-per-request option.

Yes, asking for a JSON list would be one of the best ways to go.
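
A rough sketch of reading that back on the client side (just an illustration, not a guaranteed pattern; the model doesn't always return clean JSON, so you may need to strip code fences before parsing):

import json

def parse_json_list(reply: str) -> list:
    """Pull a JSON array out of a model reply, tolerating ```json fences."""
    text = reply.strip()
    if text.startswith("```"):
        # Drop the opening fence line (``` or ```json) and the closing fence.
        text = text.split("\n", 1)[1]
        text = text.rsplit("```", 1)[0]
    return json.loads(text.strip())

# e.g. answers = parse_json_list(response.choices[0].message.content)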

Hi,
Right now, is it not possible to send multiple prompts as a batch for a single image with GPT-4 Vision?