GPT-4V available via the API but not via the Playground site, why?

Hi,
I've already searched for this.

I wanted to quickly test out vision to evaluate whether it would work for my intended project, only to discover it doesn't show up in the Playground. But it's available via code? So to test it, I'd need to actually write a working program. All I wanted to do was try it out.

I have credit in my account and can see the other GPT-4 models, but not vision.
I can't see it in any mode: assistants, chat, or completions.

Many posts here say Vision is not available via the API, but the docs (and many YouTube videos from the past three weeks) show that it is.

Am I doing something wrong or is it missing in Playground intentionally?

The official playground doesn’t support the vision model.

Thanks for the spam, I’ve seen this already on a bunch of other posts.
Didn’t realise people were in here hustling when others are looking for legit answers. Where are the mods?

Right, so OpenAI do this on purpose so only "real devs" can use vision via the API, excluding everyone who wants to use vision on a pay-as-you-go basis in the Playground. Cool cool cool cool cool.


Sorry if my answer offended you. I've removed the recommendation part. I didn't mean it as spamming.

It is what it is. The official playground lacks many features, and there are other products solving those pain points.

It's not quite as simple to use as the playground, but here's some pre-built Python code I made that you can use.

Just paste in your API key, make sure the images you want it to look at are in the same folder as the script, and modify the prompts to suit what you need.

import base64
import requests

# Function to encode the image to base64
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')

# OpenAI API Key
api_key = ""

# System and user prompts
system_prompt = "You are a helpful assistant."
user_prompt = "Tell me what you see."

# Paths to your images
image_paths = ["image1.jpg", "image2.jpg", "image3.jpg"]

# Encode images to base64
base64_images = [encode_image(path) for path in image_paths]

# Construct the request payload: one system message, then a user message
# containing the text prompt followed by each image as a base64 data URL
payload = {
    'model': 'gpt-4-vision-preview',
    'messages': [
        {'role': 'system', 'content': system_prompt},
        {
            'role': 'user',
            'content': [{'type': 'text', 'text': user_prompt}] + [
                {'type': 'image_url',
                 'image_url': {'url': f'data:image/jpeg;base64,{img}'}}
                for img in base64_images
            ]
        }
    ],
    'max_tokens': 800
}

# API request headers
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}"
}

print("Sending request...")

# Make the API request (a timeout prevents a stalled connection from hanging forever)
response = requests.post('https://api.openai.com/v1/chat/completions',
                         headers=headers, json=payload, timeout=60)

print("Request complete!")

# Print the response
if response.status_code == 200:
    print(response.json()['choices'][0]['message']['content'])
else:
    print(f"Error: {response.status_code}, {response.text}")
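For what it's worth, the `image_url` entries in the payload above are just standard data URLs: base64-encode the raw image bytes and prefix the MIME type. Here's that step in isolation as a minimal sketch (the helper name is my own), in case you want to adapt the script for PNGs or in-memory images:

```python
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    # Encode raw image bytes as a data URL suitable for an 'image_url' entry
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:{mime};base64,{b64}"

# JPEG files begin with the bytes FF D8 FF; a tiny stand-in for a real image
print(to_data_url(b"\xff\xd8\xff"))  # → data:image/jpeg;base64,/9j/
```

For a PNG you'd pass `mime="image/png"` instead, so the API knows how to decode the bytes.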

Thanks, all, for the clarifications; OpenAI should document this better.

“Available via code but not via Playground.”
Looks like my “quick evaluation” is gonna take a while longer!


It should still be pretty fast. If you end up having trouble, or want to change something in the code but don't know how, I suggest pasting the code into ChatGPT (or the API) and asking.

Good luck!