Gpt-4-turbo-2024-04-09 not accepting images?

Hello there, I’m trying to test the new gpt-4-turbo-2024-04-09 model, but it’s giving me this error:

openai.BadRequestError: Error code: 400 - {'error': {'message': 'Invalid content type. image_url is only supported by certain models.', 'type': 'invalid_request_error', 'param': 'messages.[1].content.[1].type', 'code': None}} 

The docs say that this model supports images. I’m using the same message format as with the GPT-4 Turbo Preview model.
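For reference, this is roughly the shape of the request I'm sending (the prompt and image URL here are placeholders, not my actual data):

from openai import OpenAI

client = OpenAI()

# Same multi-part content format I use with the preview model
response = client.chat.completions.create(
    model="gpt-4-turbo-2024-04-09",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
            ],
        },
    ],
)
print(response.choices[0].message.content)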


It might be related to this distinction: GPT-4 Turbo now has vision built in, and this version adds the ability to use JSON mode and function calling in vision requests. The gpt-4-turbo identifier currently points to this particular version.

I'm not sure whether the addition of JSON mode changes how vision requests behave, but from what I've gathered in the documentation, that appears to be the primary difference.
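For example, a vision request with JSON mode enabled would look roughly like this, as far as I can tell from the docs (the prompt and URL are placeholders):

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo",
    # JSON mode, which this version now allows alongside image input
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image as JSON with keys 'objects' and 'scene'."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        },
    ],
)
print(response.choices[0].message.content)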

Yeah, so it should definitely still accept image inputs. Interestingly, the same API call works with the gpt-4-turbo identifier, which is supposed to just point to gpt-4-turbo-2024-04-09, but it fails when I use gpt-4-turbo-2024-04-09 directly.
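A quick way to see it, using the same request and just swapping the model name (the URL and prompt are placeholders):

from openai import OpenAI

client = OpenAI()

content = [
    {"type": "text", "text": "What is in this image?"},
    {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
]

for model in ("gpt-4-turbo", "gpt-4-turbo-2024-04-09"):
    try:
        client.chat.completions.create(model=model, messages=[{"role": "user", "content": content}])
        print(model, "accepted the image")
    except Exception as exc:
        print(model, "rejected it:", exc)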


Hi!
I have the same issue with gpt-4-turbo-2024-04-09

  • for an image as a URL - error: Expected a base64-encoded data URL with an image MIME type (the URL variant is sketched after the code below)
  • for an image as base64 - error: Invalid content type. image_url is only supported by certain models.
import base64
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),
)

image_path = "test_file.jpg"

# Read the image and base64-encode it as a data URL
with open(image_path, "rb") as image_file:
    image_url_base64 = base64.b64encode(image_file.read()).decode("utf-8")

image_url = f"data:image/jpeg;base64,{image_url_base64}"

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "describe a picture"},
                {"type": "image_url", "image_url": {"url": image_url, "detail": "high"}},
            ],
        },
    ],
    model="gpt-4-turbo-2024-04-09",
)

print(chat_completion)
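And the URL variant from my first bullet is just the same request with a plain https link instead of the data URL (reusing the client from above; the URL is a placeholder):

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "describe a picture"},
                {"type": "image_url", "image_url": {"url": "https://example.com/test_file.jpg", "detail": "high"}},
            ],
        },
    ],
    model="gpt-4-turbo-2024-04-09",
)

print(chat_completion)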

And with the gpt-4-vision-preview model both requests work fine.

Update: Yeah, with gpt-4-turbo it's ok.


Confirmed, that model is not accepting the standard vision code.

Here is an example of how I do it with the preview model:
import base64
import logging
import os

import requests


def upload_to_gpt_vision(self, file_path, vision_text):
    # Read the image and base64-encode it
    with open(file_path, "rb") as img_file:
        image_b64 = base64.b64encode(img_file.read()).decode("utf-8")

    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"
    }

    payload = {
        "model": "gpt-4-vision-preview",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": vision_text},
                    {"type": "image_url", "image_url": f"data:image/png;base64,{image_b64}"}
                ],
            }
        ],
        "max_tokens": 300
    }

    logging.info("Uploading image to GPT-4 Vision API...")
    response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
    return response.json()

I tested with the other model and it does not accept the image even when base64-encoded. So it may either be mislabelled, or perhaps there is some new format requirement?
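If it is a format change, the only difference I can see is the shape of image_url: my preview-model code passes a bare data-URL string, while the current docs show an object. A rough comparison (image_b64 here is just a placeholder):

image_b64 = "<base64-encoded image data>"

# Older shape I use with gpt-4-vision-preview: image_url is a plain string
old_style = {"type": "image_url", "image_url": f"data:image/png;base64,{image_b64}"}

# Shape shown in the current docs: image_url is an object with "url" (and optional "detail")
new_style = {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}", "detail": "high"}}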

This is not intentional – we’re working on a fix!


This should be fixed! Sorry for the trouble and let us know if it works now.


It works for me now, thanks for the super-fast fix!
