Are there new GPT-4o Limits?

That’s the free-tier requests-per-minute limit. Since your tier is showing at the top, you didn’t accidentally pick the wrong org — so it looks like something broke for the organization or on the backend. Same here.

Are they enforced? No, but API latency is hella slow, even just to the first token.
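You can time the stream yourself to see where the delay is. A minimal sketch — `time_stream` and the fake generator below are my own illustration (with the real API you'd pass the streaming response iterator instead):

```python
import time

def time_stream(chunks):
    """Consume an iterator of streamed chunks and return
    (latency to first chunk, total elapsed seconds, chunk count)."""
    start = time.perf_counter()
    first = None
    count = 0
    for _ in chunks:
        if first is None:
            first = time.perf_counter() - start  # time to first token
        count += 1
    total = time.perf_counter() - start
    return first, total, count

# Synthetic stream standing in for an API response: slow first chunk,
# then fast inter-token chunks.
def fake_stream():
    time.sleep(0.05)
    yield "Hello"
    for _ in range(3):
        time.sleep(0.005)
        yield " tok"

latency, total, n = time_stream(fake_stream())
```

With a real streamed completion, a big gap between `latency` and `total` points at first-token delay rather than slow token throughput.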

Here’s where it should be for gpt-4o, per the models page:


– Fingerprint report from 100 trials of gpt-4o:
  - fp_898ac29719: 70
  - fp_6dd05565ef: 28
  - fp_726d488742: 2
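A report like that can be produced by collecting the `system_fingerprint` field from each trial and counting. A sketch with the observed counts hardcoded in place of real API calls (each real entry would come from one `chat.completions.create()` response):

```python
from collections import Counter

# Stand-in for 100 trials' worth of response.system_fingerprint values.
fingerprints = (
    ["fp_898ac29719"] * 70
    + ["fp_6dd05565ef"] * 28
    + ["fp_726d488742"] * 2
)

# Tally and print in the "(count):fingerprint" style used above.
report = Counter(fingerprints)
for fp, n in report.most_common():
    print(f"({n}):{fp}")
```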


For 10 trials of gpt-4o @ 2025-04-02:

| Stat | Average | Cold | Minimum | Maximum |
|---|---:|---:|---:|---:|
| stream rate (tok/s) | 60.350 | 68.6 | 38.7 | 82.4 |
| latency (s) | 3.053 | 0.9364 | 0.4859 | 22.5931 |
| total response (s) | 4.165 | 1.8548 | 1.2884 | 24.2137 |
| total rate (tok/s) | 32.536 | 34.505 | 2.643 | 49.674 |
| response tokens | 64.000 | 64 | 64 | 64 |
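For what it’s worth, the cold-run row is internally consistent if you assume total rate = tokens / total time and stream rate = (tokens − 1) / time after the first token. Those formulas are my inference, not stated in the post — a sketch checking them against the cold run:

```python
def stream_stats(latency, total, tokens):
    """Derive the two rates from one trial's timings.
    Assumption: total rate spans the whole response, while stream rate
    covers only the inter-token stream after the first token arrives."""
    total_rate = tokens / total
    stream_rate = (tokens - 1) / (total - latency)
    return stream_rate, total_rate

# Cold run from the table: latency 0.9364 s, total 1.8548 s, 64 tokens.
stream_rate, total_rate = stream_stats(0.9364, 1.8548, 64)
print(round(stream_rate, 1), round(total_rate, 3))  # 68.6 34.505
```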

Mitigations have been put in place before when things went south in the past, but nothing has been reported about the API this time.
