Are there new GPT-4o Limits?

Currently Tier 4, but was just wondering if we’re going crazy or if the gpt-4o limits have been lowered drastically? Yesterday we exceeded this limit on numerous occasions without any issues. I haven’t seen any notification of this change over the past 24h.


That’s the free-tier requests-per-minute rate. Since your tier is showing at the top, you didn’t accidentally pick the wrong org, so it looks like something broke for the organization or on the backend. Same here.

Are they enforced? No, but the API latency is hella slow, even just to the first token.

Where the limits should be for gpt-4o, according to its model page:


– Fingerprint report from 100 trials:
gpt-4o: (28):fp_6dd05565ef, (2):fp_726d488742, (70):fp_898ac29719
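A tally like that can be built from the `system_fingerprint` field returned on each chat completion. A minimal sketch, using made-up fingerprint values rather than a live API call:

```python
from collections import Counter

# Hypothetical fingerprints collected from response.system_fingerprint
# across repeated trials (values here are illustrative, not real data).
fingerprints = ["fp_898ac29719"] * 7 + ["fp_6dd05565ef"] * 3

report = Counter(fingerprints)
for fp, count in sorted(report.items()):
    print(f"({count}):{fp}")
```

In a real run you would append `response.system_fingerprint` to the list inside your trial loop.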


For 10 trials of gpt-4o @ 2025-04-02:

Stat                  Average   Cold      Minimum   Maximum
stream rate (tok/s)   60.350    68.6      38.7      82.4
latency (s)           3.053     0.9364    0.4859    22.5931
total response (s)    4.165     1.8548    1.2884    24.2137
total rate (tok/s)    32.536    34.505    2.643     49.674
response tokens       64.000    64        64        64
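The summary stats above (average across trials, "cold" meaning the first trial, plus min and max) can be reproduced from per-trial measurements with a small sketch like this; the trial data below is made up for illustration:

```python
def summarize(trials):
    """Per-metric avg / cold (first trial) / min / max across trials."""
    out = {}
    for key in trials[0]:
        vals = [t[key] for t in trials]
        out[key] = {
            "avg": sum(vals) / len(vals),
            "cold": vals[0],   # first (cold) trial
            "min": min(vals),
            "max": max(vals),
        }
    return out

# Hypothetical per-trial measurements (seconds / token counts).
trials = [
    {"latency_s": 0.94, "total_s": 1.85, "tokens": 64},
    {"latency_s": 0.49, "total_s": 1.29, "tokens": 64},
    {"latency_s": 3.10, "total_s": 4.20, "tokens": 64},
]
# Total rate = response tokens / total response time, per trial.
for t in trials:
    t["total_rate"] = t["tokens"] / t["total_s"]

stats = summarize(trials)
print(round(stats["latency_s"]["avg"], 3))
```

Note that "total rate Average" is the mean of per-trial rates, not total tokens over total time, which is why it doesn't equal `response tokens avg / total response avg`.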

Mitigations have been put in place when things went south in the past, but nothing has been reported about the API this time.


Yes, I see the same issue.


Yeah, I was aware of the ChatGPT issues, but it’s a wake-up call to see how easily, and with no status update, our org can be throttled, if the ChatGPT issues are even the cause (and if it’s intentional at all). By the way, I was also hitting 429s after 3+ calls in quick succession, so I’m not sure what’s going on.
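For anyone hitting those 429s in the meantime, wrapping calls in an exponential-backoff retry usually rides out a transient throttle. A sketch with a stand-in exception class (your client library will raise its own rate-limit error type) and a fake flaky call instead of a real API request:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the client library's 429 error class."""

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` on rate-limit errors, doubling the delay each time."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the 429
            # Exponential backoff with a little jitter to avoid lockstep.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Usage with a flaky stand-in that 429s twice, then succeeds:
state = {"n": 0}
def flaky():
    state["n"] += 1
    if state["n"] < 3:
        raise RateLimitError()
    return "ok"

print(with_backoff(flaky, base_delay=0.01))
```

If the server returns a `Retry-After` header, honoring that value is better than a blind backoff.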


This has been escalated to OpenAI. Thanks for flagging!


Hi all, thanks for surfacing this! Looking into why these limits seem to have been suddenly lowered.


Confirmed that this is an issue with the dashboard and we're working on a fix now. There should be no effect on your actual rate limits! Thanks again everyone for surfacing.
