Chat Completion request to vision model returns 500s when a system message is added

PROMPT_MESSAGES = [
    {
        "role": "system", 
        "content": "You are a experienced knight who are tasked to critique and provide advice knight on how to improve on wielding sword. Try to give a processional advice",
    },
    {
        "role": "user",
        "content": [
            "These are frames from a sword fighting footage that I want to upload. Generate a helpful sword fighting advice for the video I upload",
            *map(lambda x: {"image": x, "resize": 768}, base64Frames[0::163]),
        ],
    },
]

params = { "model": "gpt-4-vision-preview", "messages": PROMPT_MESSAGES, # "max_tokens": 200, } results in

openai.InternalServerError: Error code: 500 - {'error': {'message': 'Something went wrong processing one of your images.', 'type': 'server_error', 'param': None, 'code': None}}

Wondering if anyone has been able to run ChatCompletion with a system message for the vision model.
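
For comparison, here is a minimal sketch of the same request using the documented content-part format ({"type": "text"} / {"type": "image_url"} with base64 data URLs) alongside a system message; the client setup and the frame stride are assumptions, not something confirmed to fix the 500:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# base64Frames is assumed to hold base64-encoded JPEG frames, as in the snippet above
messages = [
    {
        "role": "system",
        "content": "You are an experienced knight giving professional advice on sword wielding.",
    },
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Critique the sword fighting shown in these frames."},
            *[
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{frame}", "detail": "low"},
                }
                for frame in base64Frames[0::163]
            ],
        ],
    },
]

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=messages,
    max_tokens=200,
)
print(response.choices[0].message.content)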

Never mind, it seems like the issue was the ‘content’ I am requesting.
Let me experiment with this more.

Still getting 500s without the system message.

I’m getting the same error using the API example from the docs:

import OpenAI from "openai";

const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: "gpt-4-vision-preview",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What's in this image?" },
        // Accepts either a URL of the image or the base64-encoded image data.
        {
          type: "image_url",
          image_url: {
            url: "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
          },
        },
      ],
    },
  ],
});

Not sure if it has anything to do with the recent outages, but those usually return a more general error message.

I checked the status page but it seems to be OK.
Now it’s returning values, so I am guessing it was just finicky.
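
Since these 500s appear to be transient, a simple retry with backoff can paper over them; a minimal sketch assuming the openai Python SDK v1.x (the attempt count and delays are arbitrary):

import time

import openai
from openai import OpenAI

client = OpenAI()

def create_with_retry(params, max_attempts=3, base_delay=2.0):
    """Retry the request on transient 500s, backing off between attempts."""
    for attempt in range(max_attempts):
        try:
            return client.chat.completions.create(**params)
        except openai.InternalServerError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))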

Also, I was hitting an issue with token limits (I am on tier 1 right now).
If you send more frames, you might run into this too.
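
If you are bumping into tokens-per-minute limits, thinning the frames before building the request helps; a small sketch with a hypothetical helper (the max_frames value is a placeholder):

# Hypothetical helper: keep at most max_frames frames, evenly spaced across the clip,
# so the request stays within the token budget.
def subsample_frames(frames, max_frames=10):
    if len(frames) <= max_frames:
        return frames
    stride = len(frames) // max_frames
    return frames[::stride][:max_frames]

selected = subsample_frames(base64Frames)
# Combine this with "detail": "low" on each image and a small max_tokens
# to further reduce token usage per request.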

I have been having this error for the last 20 minutes or so… I guess it’s something internal to OpenAI.

Same situation here. I guess it’s a server-side problem.

Same here with these:

%pip install -U openai==1.1.0 langchain==0.0.333 langchain-experimental==0.0.39 --upgrade

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(model="gpt-4-vision-preview", max_tokens=4096, temperature=0.6)
chat.invoke(
    [
        HumanMessage(
            content=[
                {"type": "text", "text": "What is your name"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://raw.githubusercontent.com/langchain-ai/langchain/master/docs/static/img/langchain_stack.png",
                        "detail": "auto",
                    },
                },
            ]
        )
    ]
)

InternalServerError: Error code: 500 - {'error': {'message': 'Something went wrong processing one of your images.', 'type': 'server_error', 'param': None, 'code': None}}
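
One thing worth trying when an external image URL triggers this error is downloading and base64-encoding the image yourself and sending it as a data URL, so the API does not have to fetch the URL; a hedged sketch (the requests usage is an assumption, not something from this thread):

import base64

import requests

IMAGE_URL = "https://raw.githubusercontent.com/langchain-ai/langchain/master/docs/static/img/langchain_stack.png"

# Download the image and embed it as a base64 data URL.
raw = requests.get(IMAGE_URL, timeout=30).content
data_url = "data:image/png;base64," + base64.b64encode(raw).decode("utf-8")

# Drop this in place of the external URL in the HumanMessage content list.
image_part = {"type": "image_url", "image_url": {"url": data_url, "detail": "auto"}}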

It is resolved for me now. I think it was an OpenAI error.

It is working fine for me now. Probably it was just a brief outage.

How did you solve the problem? The gpt-4-vision-preview API has not been working for me for five hours already.