Hitting rate limit on gpt-4-vision-preview with first query?

Hi - I’m getting the response below when passing in a 600x300 .jpg with a short prompt:

Rate limit reached for gpt-4-vision-preview in organization org-[MY-ORGANIZATION] on tokens per min. Limit: 10000 / min. Please try again in 6ms. Visit OpenAI Platform to learn more.

Any ideas on why this might be? I’ve also tried using the API from my personal account with the same result.

Calling it using:

OpenAI Vision API function

import base64

import requests
from openai import OpenAI


def encode_image_to_base64(image_path):
    # Helper used below (implementation assumed): read the file and base64-encode it
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")


def get_image_analysis(api_key, prompt_text, image_path=None):
    client = OpenAI(api_key=api_key)  # created but not used; the request below goes through requests directly
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}"
    }
    image_data = encode_image_to_base64(image_path)

    payload = {
        "model": "gpt-4-vision-preview",
        "messages": [
            {
                "role": "user",
                "content": prompt_text
            },
            {
                # the base64 data URL is sent as a plain text string here
                "role": "user",
                "content": f"data:image/jpeg;base64,{image_data}"
            }
        ],
        "max_tokens": 300
    }
    response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
    return response.json()
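
In case it’s relevant: the vision guide shows the prompt and the image going into a single user message, with the image as an image_url content part rather than a plain string. A minimal sketch of that payload shape, reusing prompt_text and image_data from the function above:

payload = {
    "model": "gpt-4-vision-preview",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt_text},
                # image passed as an image_url part (data URL), per the vision guide
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_data}"}}
            ]
        }
    ],
    "max_tokens": 300
}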

Hello! I believe it’s to do with which usage tier you are on, which is related to how much you have spent with OpenAI. More info is at this link: OpenAI Platform
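
If you want to check what limit your org is actually getting for this model, the API returns rate-limit headers on each response. A minimal sketch (standard x-ratelimit-* headers; the key is a placeholder):

import requests

API_KEY = "sk-..."  # placeholder

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4-vision-preview",
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 1,
    },
)

# Per-minute limits and remaining budget reported for your org/model
for name in (
    "x-ratelimit-limit-tokens",
    "x-ratelimit-remaining-tokens",
    "x-ratelimit-limit-requests",
    "x-ratelimit-remaining-requests",
):
    print(name, resp.headers.get(name))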

I started to get this on first request too.

Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-4-vision-preview in organization org-ptXXX on tokens per min (TPM): Limit 10000, Used 6870, Requested 3745. Please try again in 3.69s. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}

The token counts and the cool-off time seem to be random (only the Requested value looks right). My billing page does not show any usage of the model.

Either the error message returned is misleading, or there is some bigger problem.
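
In the meantime, one workaround is to retry 429 responses with a short exponential backoff. A rough sketch (not a fix for the underlying limit):

import time
import requests

def post_with_backoff(url, headers, payload, max_retries=5):
    # Retry on 429 responses, doubling the wait between attempts
    delay = 1.0
    for _ in range(max_retries):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code != 429:
            return response
        time.sleep(delay)
        delay *= 2
    return response  # last 429 after exhausting retries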
