Persistent 500 error even after doing what support has said

For three days, I have had this issue with all of my calls. I have changed my API key and made calls pulled straight from the API reference. I am at my wits’ end, please help!

{
“model”: “gpt-4.1”,
“messages”: [
{
“role”: “user”,
“content”: [
{
“type”: “text”,
“text”: “What is in this image?”
},
{
“type”: “image_url”,
“image_url”: {
“url”: “
}
}
]
}
],
“max_tokens”: 300
}

Your URL does not seem complete. That is likely because you did not put your JSON into a forum “preformatted text” block.

You can edit your post, select the entirety of your object, and then press the format bar </> button, and save again to show us your actual request body.

Here’s making your request with Python, the openai library, and a local file img1.png:

import base64,openai;from pathlib import Path
e=lambda p:[{"type":"image_url","image_url":{"url":(
f"data:image/{Path(f).suffix[1:]};base64,"
f"{base64.b64encode(Path(f).read_bytes()).decode()}"),"detail":"low"}}
for f in p];m="gpt-4.1"
p={"model":m,"messages":[{"role":"system","content":"Vision assistant"},
{"role":"user","content":[{"type":"text","text":"Describe image"},
*e(["img1.png"])]}],"max_completion_tokens":300}
print(openai.Client().chat.completions.create(**p).choices[0].message.content)

Just joking (although that works).

Here’s sending your body message with Python, httpx library, an internet URL for the image, and pretty-printing the return JSON:

""" Vision chat completions method demo

Note: OpenAI SDKs will automatically infer the following
client arguments from their corresponding environment variables:
- `api_key` from `OPENAI_API_KEY`
- `organization` from `OPENAI_ORG_ID`
- `project` from `OPENAI_PROJECT_ID`
"""
import os, json, httpx

body = {
    "model": "gpt-4.1",
    "max_tokens": 300,
    "top_p": 0.5,
    "temperature": 0.5,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What’s in this image, in 10 words?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://live.staticflickr.com/7151/6760135001_58b1c5c5f0_b.jpg",
                        "detail": "low"
                    }
                }
            ]
        }
    ]
}
apikey = os.environ.get('OPENAI_API_KEY')
headers = {"Authorization": f"Bearer {apikey}"}
url = "https://api.openai.com/v1/chat/completions"
response = httpx.post(url, headers=headers, json=body)
if response.status_code != 200:
    print(f"HTTP error {response.status_code}: {response.text}")
    response.raise_for_status()
else:
    print(json.dumps(response.json(), indent=3))

You’ll get errors if you are trying to send that “messages” parameter to the Responses API, though. It takes "input", with different content part types.
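For comparison, here is a sketch of roughly the same request reshaped for the Responses API, where "messages" becomes "input" and the content part types are "input_text" and "input_image" (the image URL and model are carried over from the example above; adapt to your own setup):

```python
import os

# Responses API body: "input" replaces "messages"; content parts use
# "input_text" / "input_image", and the image URL is a plain string.
body = {
    "model": "gpt-4.1",
    "max_output_tokens": 300,
    "input": [
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": "What’s in this image, in 10 words?"},
                {
                    "type": "input_image",
                    "image_url": "https://live.staticflickr.com/7151/6760135001_58b1c5c5f0_b.jpg",
                    "detail": "low",
                },
            ],
        }
    ],
}

if __name__ == "__main__":
    import httpx  # only needed when actually sending

    headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
    response = httpx.post(
        "https://api.openai.com/v1/responses", headers=headers, json=body
    )
    response.raise_for_status()
    print(response.json())
```

Note the endpoint is /v1/responses and the token cap is "max_output_tokens" there, not "max_tokens" or "max_completion_tokens".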

I am using Bubble.io to send the API call. It is usually fine; I am just using the code copied from the API reference, not sure if that helps you out. I have copied the body from yours and I am getting an error saying it is not formatted as JSON.

:shaking_face: Sorry, I forgot to reply to you directly.

The reason it is not JSON is because it is Python code.

Where there are lists or dicts that could be extended, there are trailing commas after the last element, anticipating future additions, which is a common Python style.

If you just remove any commas after the last object in a group, it should pass (or, you could just ask ChatGPT: “Make this valid JSON by removing trailing commas.”)
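To see the difference concretely: Python’s strict json parser rejects the trailing comma that a Python dict literal happily accepts, which makes it a quick way to check a body before pasting it into Bubble:

```python
import json

# A Python dict literal tolerates a trailing comma; strict JSON does not.
python_style = '{"model": "gpt-4.1", "max_tokens": 300,}'

try:
    json.loads(python_style)
    valid = True
except json.JSONDecodeError as err:
    valid = False
    print(f"Invalid JSON: {err}")

# After removing the comma following the last member, it parses cleanly.
cleaned = '{"model": "gpt-4.1", "max_tokens": 300}'
parsed = json.loads(cleaned)
print(parsed["max_tokens"])  # → 300
```

Run anything you plan to send through json.loads() like this and it will point at the exact character where the parse fails.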

The “max_tokens” parameter is deprecated. Its replacement is “max_completion_tokens”. If it has finally been shut off for a model (without the API validating the parameter), that could explain a status 500 error.

Unfortunately, the fixes I’ve made haven’t resolved it. The calls previously worked until very recently with no changes on my end, and I’m still getting the same error. I feel like the source of the issue might be somewhere else.

I have it working using the Responses API where I was using Chat Completions before, so I am assuming the issue is on the Chat Completions end of things.

An endpoint can be turned off in a project’s API key settings. That’s one place to look for why a problem is affecting only you.

Also - try a different AI model.
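One way to act on that: send the same minimal body to a couple of models and compare status codes, which isolates whether the 500 is tied to one model or to the endpoint itself. A sketch (the model names here are assumptions; substitute ones enabled for your project):

```python
import os

# Models to probe; swap in whatever your project actually has enabled.
candidate_models = ["gpt-4.1", "gpt-4o-mini"]


def make_body(model: str) -> dict:
    """Minimal Chat Completions body, using the non-deprecated token cap."""
    return {
        "model": model,
        "max_completion_tokens": 50,
        "messages": [{"role": "user", "content": "Say hello."}],
    }


if __name__ == "__main__":
    import httpx  # only needed when actually sending

    headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
    for m in candidate_models:
        r = httpx.post(
            "https://api.openai.com/v1/chat/completions",
            headers=headers,
            json=make_body(m),
        )
        print(m, r.status_code)
```

If one model returns 200 and another returns 500 with an identical body, that narrows it to the model rather than your request formatting.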