Tried everything with RateLimitError: Error code: 429 with gpt-4o

Hi,
I have a frustrating issue. Let me share the code up front to save everyone's time.

I’ve seen similar questions, but their solutions didn’t work. I already pass the base64-encoded images via image_url as suggested in those posts.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# prompt_instruction is my system prompt; base64Frames is a list of
# base64-encoded JPEG frames extracted from the video.
result = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": prompt_instruction,
        },
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "These are the frames from the video."},
                *map(
                    lambda x: {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{x}",
                            "detail": "low",
                        },
                    },
                    base64Frames,
                ),
            ],
        },
    ],
)

I pass a separate prompt_instruction as the system message. This setup works for ~7 s videos, but fails for even a 50 s video, throwing:

openai.RateLimitError: Error code: 429 - {'error': {'message': 'Request too large for gpt-4o in organization org-orgId on tokens per min (TPM): Limit 30000, Requested 89525. The input or output tokens must be reduced in order to run successfully. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}

I send around 120 pre-processed frames from the video with detail: low, so I naturally expect the cost to be around 120 × 85 + <instruction_tokens_count>, say ~12k tokens.
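As a sanity check, this is the billing-side estimate I had in mind. The 85 tokens per detail: low image is the documented billing figure; the 1,800-token instruction count here is just a placeholder, since the real number depends on my prompt:

```python
# Billing-side token estimate for detail: "low" images.
TOKENS_PER_LOW_DETAIL_IMAGE = 85  # documented billing cost per low-detail image

num_frames = 120
instruction_tokens = 1800  # placeholder; actual count depends on the prompt

estimated_billed_tokens = num_frames * TOKENS_PER_LOW_DETAIL_IMAGE + instruction_tokens
print(estimated_billed_tokens)  # 12000
```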

So how did the request come to ~89k tokens? I don’t get it. I followed the official docs and I “think” my code is right.

Any help is much appreciated!


Update:
This is the output of print("usage is ", result.usage)

usage is CompletionUsage(completion_tokens=62, prompt_tokens=3638, total_tokens=3700)

This is with base64Frames[0:38] (the first 38 frames only). When I change this to 39 frames, I get the error:

'Request too large for gpt-4o in organization org-orgId on tokens per min (TPM): Limit 30000, Requested 30250.

It turns out that OpenAI counts a different number of tokens per image (detail: low) for:

  • Billing: 85
  • Rate limiting: ~764

This is not officially confirmed, so if someone can confirm it, that would be great.
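A rough sanity check of the ~764 figure against my two data points. The per-image weight and the fixed prompt overhead inferred here are guesses, not documented values:

```python
# Inferred rate-limit accounting, assuming ~764 tokens counted per low-detail image.
RATE_LIMIT_TOKENS_PER_IMAGE = 764  # unconfirmed estimate

# The 39-frame request was reported as 30250 tokens, which implies a fixed overhead:
prompt_overhead = 30250 - 39 * RATE_LIMIT_TOKENS_PER_IMAGE  # 454 tokens

frames_38 = 38 * RATE_LIMIT_TOKENS_PER_IMAGE + prompt_overhead  # 29486 -> under 30000
frames_39 = 39 * RATE_LIMIT_TOKENS_PER_IMAGE + prompt_overhead  # 30250 -> over 30000
print(frames_38, frames_39)  # 29486 30250
```

This is consistent with 38 frames succeeding and 39 frames being rejected against the 30,000 TPM limit.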

More info - TPM Limit Exceeded for my first Vision API request - #18 by _j
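Until the limit is raised, one possible workaround is to subsample the frames evenly so the request stays under the TPM budget. This is only a sketch built on the unconfirmed ~764 per-image weight and a guessed overhead:

```python
def subsample_frames(frames, tpm_limit=30000, per_image=764, overhead=500):
    """Keep at most as many evenly spaced frames as fit under the TPM budget.

    per_image (~764) and overhead are unconfirmed estimates of how the rate
    limiter counts a low-detail image and the fixed prompt cost.
    """
    max_frames = max(1, (tpm_limit - overhead) // per_image)
    if len(frames) <= max_frames:
        return frames
    step = len(frames) / max_frames
    return [frames[int(i * step)] for i in range(max_frames)]

frames = [f"frame{i}" for i in range(120)]
kept = subsample_frames(frames)
print(len(kept))  # 38
```

Evenly spaced sampling keeps coverage across the whole video instead of just truncating to the first N frames.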


I think you should request a higher rate limit.