Is the Rate Limit of 100 Requests for GPT-4 Vision Preview Hindering Your Development?

Why is the daily request limit set at just 100? I understand this is a preview phase, but a higher limit would let developers build more usable applications during the preview period, so that some applications are already in use when the fully functioning model is released. The current limit of 100 requests per day is causing unnecessary delays in development.

{'error': {'message': 'Rate limit reached for gpt-4-vision-preview in organization xxxxxxxxxx on requests per day (RPD): Limit 100, Used 100, Requested 1. Please try again in 14m24s. Visit to learn more.', 'type': 'requests', 'param': None, 'code': 'rate_limit_exceeded'}}
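One small thing the error does give you is the wait time ("Please try again in 14m24s"). A helper like the following could turn that into a sleep duration; this is just a sketch that infers the duration format from the message shown above, not from any documented spec:

```python
import re

def retry_after_seconds(message: str) -> int:
    """Parse 'Please try again in 14m24s' style durations into seconds."""
    match = re.search(r"try again in (?:(\d+)h)?(?:(\d+)m)?(?:(\d+)(?:\.\d+)?s)?", message)
    if not match:
        return 60  # fallback when no duration is present in the message
    hours, minutes, seconds = (int(g) if g else 0 for g in match.groups())
    return hours * 3600 + minutes * 60 + seconds

wait = retry_after_seconds("Please try again in 14m24s.")  # 864 seconds
```

You could then `time.sleep(wait)` before retrying, though with a hard daily cap that mostly just tells you when the window resets.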


Yes, absolutely. We have a use case ready to go and have almost completed development, but the rate limits have brought this to a halt.

We’re going live with a project in a few weeks that will need 4,000+ images processed per day. We’d really appreciate a relaxation of those rate limits before then!


I truly agree. Let the people test it properly. It makes no sense.


Any indication from OpenAI when these limits will be raised?


They have apparently paused new Plus subscriptions. I’m assuming, based on that, that they are VERY tight on resources, so these rate limits are there to ensure everyone who’s currently using their services can keep being served fairly.


Hear hear, I concur. Hope the limit gets increased soon T_T


Yeah, this is super slow. I started looking for other options to batch-process some data, but Vision seems to work best for my more technical material. At this rate, though, it will take a month to process the data.


Yes, absolutely! Looking forward to an increase in the rate limits!

I’m facing a comparable challenge, with an estimated 3 to 4 months needed for data processing due to existing constraints. In response, I’m enhancing my prompts and optimizing the data processing workflow for efficiency.

I’m considering employing GPT-4’s vision capabilities to analyze and detail multiple images in one request. Following this, I could use a model like ‘gpt-4-1106-preview’ to process the data along with the image descriptions. However, this method remains untested, and I’m unsure about its effectiveness and suitability for my particular requirements.
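The multi-image idea above could look roughly like this, assuming the Chat Completions content-array format that gpt-4-vision-preview accepts; the image URLs and question here are placeholders, and this is an untested sketch, as the post says:

```python
# Build a single gpt-4-vision-preview request covering several images,
# so one call consumes one RPD unit instead of one per image.

def build_multi_image_request(image_urls, question, detail="low"):
    """Build a Chat Completions payload describing several images at once."""
    content = [{"type": "text", "text": question}]
    for url in image_urls:
        content.append({
            "type": "image_url",
            "image_url": {"url": url, "detail": detail},
        })
    return {
        "model": "gpt-4-vision-preview",
        "messages": [{"role": "user", "content": content}],
        "max_tokens": 600,
    }

payload = build_multi_image_request(
    ["https://example.com/a.jpg", "https://example.com/b.jpg"],
    "Describe each image in one sentence, numbered in order.",
)
```

The numbered-description prompt is one way to keep the per-image outputs separable so a second model such as gpt-4-1106-preview can consume them afterwards.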

I’ve been hitting the 100-request limit as well. The only workaround I’ve found so far while waiting for OpenAI to raise the limit (hope it helps someone) has been to:

  • use Microsoft Azure AI Computer Vision to analyze an image
  • feed the analyzed result to a GPT-4 to output my desired result

Indeed, from my research, Azure AI Computer Vision can’t run GPT by itself, so it only describes what’s in a picture.

The Azure limits are generous, though, so it works well for me at this time.
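A minimal sketch of the two-step workaround: Azure's Image Analysis "Description" feature returns caption candidates as JSON, and that caption is then handed to GPT-4 as text. The resource endpoint, prompt wording, and task are placeholder assumptions, and the HTTP call itself is omitted:

```python
# Step 1 would POST the image to the Azure Image Analysis endpoint, e.g.
# https://<resource>.cognitiveservices.azure.com/vision/v3.2/analyze
# with visualFeatures=Description; the helpers below handle the result.

def caption_from_azure(analysis: dict) -> str:
    """Pull the highest-confidence caption out of an Azure 'Description' result."""
    captions = analysis.get("description", {}).get("captions", [])
    if not captions:
        return "No caption available."
    best = max(captions, key=lambda c: c.get("confidence", 0.0))
    return best["text"]

def gpt_messages(caption: str, task: str) -> list:
    """Build the Chat Completions messages that hand the caption to GPT-4."""
    return [
        {"role": "system",
         "content": "You are given an image caption instead of the image itself."},
        {"role": "user", "content": f"Caption: {caption}\n\nTask: {task}"},
    ]
```

As noted above, this only gives GPT-4 a textual description, not the pixels, so detail beyond what Azure captions is lost.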


Just a heads up for you all…I noticed this morning that the limit appears to have changed from 100 requests per day to 1000.

It’s now 4000 RPD for Tier 5 accounts.


@kjordan do you have any insights on where to find information about the updated limits? It appears that the Rate limits - OpenAI API documentation hasn’t been revised yet to include these changes.

@HenriqueMelo The limit is different for different accounts; you can check your current limit at this link:

I had no problems with the earlier limit on a platform of 35 users, but thanks for raising it! :slight_smile:

@kjordan Thanks. It showed that my limits have been adjusted to match my current tier. Yet, when I checked the official documentation, it seems that the tier-specific limits haven’t been updated there.

In this post Vision restrictions for API - 100 RPD - API - OpenAI Developer Forum logankilpatrick explained that the docs will be updated soon.

@weston-road-flows Azure dense captions provide JSON output. I’m curious what the use case is for feeding it into GPT.

Given how much OAI surely wants us to use these tools, I assume the reason is resource-based.

This leads me to think about scale. We barely have a user base at all compared to how this will be at commercial scale, so what will we need to have in place to be able to deal with fully commercial requirements from business users in a couple of years?

This may be no problem, as they may have it all figured out, but I’d like some reassurance from the mothership that they have it under control.

Correct, there are not enough GPUs in the world to service all the demand developers have. That said, the Turbo and Vision rate limits have both gone up considerably in the last two weeks. I have a PR open (should be merged soon) to update the rate limit guide with our current limits:

There will also be a big benefit when the Vision and Turbo models are combined into a single model you can hit. For various engineering reasons, they are separate today, but they will be unified “soon,” which should help with this.


Just to comment on this last part, we are making the investments today (and have been for the last 3 years) to make sure we continue to have capacity to scale. Rest assured, we are going to have compute for folks to use our models in the future.

I am going to close this thread out, just so that if people have new feedback on the limits we capture it in another thread. Thanks for all the thoughts and can’t wait to see what you all build!