New "input-images per min" RateLimitError

Began receiving errors this morning from my AI Agent service:

OpenAI RateLimitError (Request too large for organization org-______ on input-images per min: Limit 10, Requested 26. Visit https://platform.openai.com/account/rate-limits to learn more.): Trying again in one second.
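
For context, the "Trying again in one second" suffix is appended by my service's own retry handling, which is roughly this pattern (a simplified sketch, not the exact production code):

import time
import openai

client = openai.OpenAI()

def create_with_retry(params, max_attempts=5):
    """Call Chat Completions, retrying once per second on RateLimitError."""
    for _ in range(max_attempts):
        try:
            return client.chat.completions.create(**params)
        except openai.RateLimitError as e:
            print(f"OpenAI RateLimitError ({e}): Trying again in one second.")
            time.sleep(1)
    raise RuntimeError("still rate-limited after retrying")

Retrying cannot actually clear this particular error, though, since a single request with 26 images already exceeds the stated limit of 10.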

My account is already in Usage Tier 3.
I cannot find any supporting documentation around a limit on “input-images per min”.
My current tokens per minute rate is not being surpassed.
My Agent heavily relies on images, with each request expected to have anywhere from 10-30 images.

Errors began happening this morning (Nov 4th) around 9:00 AM Eastern.
Is this an intentional change in Rate Limit policies?
If so, could supporting documentation be provided, and how can rate limits be increased?

Thank you,
-Austin

2 Likes

I am not sure about the limits on the server side.

The only thing I go to when checking rate limits is the official page by OpenAI.

However, one thing you could do is split each request into multiple requests, so 26 images get split into two batches of 10 (plus one batch of 6). That way you would still send all 26 images.
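
A rough sketch of what I mean (client, prompt, and base64_images are placeholders for your own setup; the image_url parts are the documented vision format, and the sleep keeps each minute's total under a 10-image cap):

import time

def chunked(items, size):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

for n, batch in enumerate(chunked(base64_images, 10)):
    if n > 0:
        time.sleep(60)  # space out batches so no single minute exceeds the cap
    content = [{"type": "text", "text": prompt}]
    content += [
        {"type": "image_url",
         "image_url": {"url": f"data:image/jpeg;base64,{img}"}}
        for img in batch
    ]
    client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": content}],
    )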

I know this probably isn’t what you’re looking for, but I wanted to propose an alternative until this issue is looked at by someone more knowledgeable than me. :hugs:

Can you share your project with me? I have a similar project where I need to send images to ChatGPT, but I don’t know how. If you have any documentation to share, I would be happy and thankful. I have already read https://platform.openai.com/docs/guides/vision.

Thank you for the suggestion.
This is how the project used to work, prior to Tools and Images being supported by a single model.
Though we have found that breaking the images up in this way destroys the continuity in the context of the task at hand, greatly decreasing the overall accuracy of the Agent.

(edit 1)
I am using the gpt-4-turbo model, if relevant.

(edit 2)
I see Images Per Minute (IPM) mentioned here, but with no corresponding column in the table below letting us know what the IPM limits actually are.
Ten images per minute is a horribly restrictive rate, especially for a Tier 3 account where my TPM Limit is 600,000.

1 Like

I sent 45k tokens of images to gpt-4-turbo-2024-04-09, gpt-4-vision-preview, and gpt-4o, using this now-undocumented method of appending to a content list whose first element is the text string:

# the bare {"image": ...} item is the undocumented alternative to the
# documented {"type": "image_url", ...} content part
for base64_image in base64_images:
    params["messages"][1]["content"].append({"image": base64_image})

Indeed, I am only getting bounced when using the alias gpt-4-turbo, which points to the 2024-04-09 snapshot:

openai.RateLimitError: Error code: 429 - {'error': {'message': 'Request too large for organization org-1234 on input-images per min: Limit 10, Requested 118. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'input-images', 'param': None, 'code': 'rate_limit_exceeded'}}

So you can use the snapshot model name – and we can ask OpenAI, “what are you even thinking of doing??”

2 Likes

Thank you very much!

I can confirm that changing the model name from “gpt-4-turbo” to “gpt-4-turbo-2024-04-09” fixed the issue (despite the fact that, according to this page, the alias points to that exact snapshot).
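
For anyone else hitting this, the entire change was the model string; nothing else in the request was touched:

# before: rejected with "input-images per min: Limit 10"
# params["model"] = "gpt-4-turbo"

# after: the same underlying snapshot per the docs, but no longer bounced
params["model"] = "gpt-4-turbo-2024-04-09"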

Hopefully this is not a sign of future Rate Limits to come.
Will select your answer as the solution for now.
-Austin

Can you help me send a lot of images to ChatGPT using the “Uploading Base64 encoded images” approach? The code above handles just one image, and I want to send many.

This is unfortunately a scenario where the best thing I can do to help is not provide the solution.
You have the tools you need. Do not be afraid of the unknown. Try things out until it works. Use the language model itself to produce and iterate on your code.
Best of luck.
-Austin

Thank you, I will try my best :wink:

1 Like

Also been affected by this.

Any response from OpenAI on this strange new rate limit yet?

Check the solution of the topic. :hugs:

This is still occurring. Please assist, OpenAI.

Running into this issue just now with gpt-4-turbo-2024-04-09. It was working fine this morning when I was doing some testing, but it is now getting killed every time more than 10 images are requested in less than a one-minute span.
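
One possible stopgap is client-side throttling, so that no more than 10 images go out in any rolling 60-second window; a rough sketch:

import time
from collections import deque

sent = deque()  # one timestamp per image already sent

def wait_for_image_budget(n_images, limit=10, window=60.0):
    """Block until sending n_images more stays within `limit` per `window` seconds."""
    assert n_images <= limit, "a single request over the limit can never pass"
    while True:
        now = time.monotonic()
        while sent and now - sent[0] > window:
            sent.popleft()  # forget images older than the window
        if len(sent) + n_images <= limit:
            sent.extend([now] * n_images)
            return
        time.sleep(1.0)

Calling wait_for_image_budget(len(images)) immediately before each request then paces things automatically, though it does nothing for a single request that needs more than 10 images at once.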

Yep, it seems they hit both gpt-4-turbo and its destination, gpt-4-turbo-2024-04-09, with this. Here’s one reason both may be affected: this forum topic initially pointed out a bypass of what OpenAI apparently intended - to break production apps and prevent growing chats with images.

Perhaps it’s part of the plan, along with shutting off the gpt-4-1106-vision-preview model in a month: a model that embarrasses gpt-4o with its knowledge quality, speed, obedience, and additional image methods. Force developers off.

Log of requests sending 15 images each:

"gpt-4o" 8.6 seconds
"gpt-4o-2024-08-06" 5.7 seconds
"gpt-4o-2024-05-13" 7.7 seconds
"gpt-4-turbo" Error code: 429 - {'error': {'message': 'Request too large for organization org-
"gpt-4-turbo-2024-04-09" Error code: 429 - {'error': {'message': 'Request too large for organization
"gpt-4-turbo-preview" Error code: 400 - {'error': {'message': "Invalid type for 'messages[1].content[0]'
"gpt-4-0125-preview" Error code: 400 - {'error': {'message': "Invalid type for 'messages[1].content[0]'
"gpt-4-vision-preview" 5.6 seconds
"gpt-4-1106-vision-preview" 7.1 seconds
"gpt-4o" 5.3 seconds
"gpt-4o-2024-08-06" 9.1 seconds
"gpt-4o-2024-05-13" 8.2 seconds
"gpt-4-turbo" Error code: 429 - {'error': {'message': 'Request too large for organization
"gpt-4-turbo-2024-04-09" Error code: 429 - {'error': {'message': 'Request too large for organization
"gpt-4-turbo-preview" Error code: 400 - {'error': {'message': "Invalid type for 'messages[1].content[0]'
"gpt-4-0125-preview" Error code: 400 - {'error': {'message': "Invalid type for 'messages[1].content[0]
"gpt-4-vision-preview" 4.7 seconds
"gpt-4-1106-vision-preview" 5.7 seconds
1 Like

Also began receiving the same errors this morning; I have unchecked the Solution.
Off to Claude we go…
I’ve still not been impressed with gpt-4o to date, but 3.5 Sonnet is killing it in terms of intelligence, context window size, and pricing.

OpenAI, please provide context on this change.