Conversational API HTTP 429

Our development team is facing a problem: neither the TPM nor the RPM limits for the model are reached, yet after 6 calls the endpoint returns HTTP 429 (Too Many Requests). The OpenAI API is called directly from C# code using HttpClient; an example of the usage is uploaded. We'd appreciate any help with this issue. Below is an example of when it happened. The dev team is on a Team subscription.
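For reference, the call pattern is roughly like the sketch below. This is a simplified, self-contained version, not the exact uploaded code; the model name, prompt, and environment variable are just placeholders.

```csharp
// Simplified sketch of the call pattern: one shared HttpClient posting
// to the chat completions endpoint and printing the raw response.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ChatCall
{
    // Reuse a single HttpClient for the whole process to avoid socket exhaustion.
    private static readonly HttpClient Http = new HttpClient();

    static async Task Main()
    {
        var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
        Http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", apiKey);

        // Placeholder request body; the real calls use our own messages.
        var body = @"{
            ""model"": ""gpt-4o"",
            ""messages"": [{ ""role"": ""user"", ""content"": ""Hello"" }]
        }";

        using var content = new StringContent(body, Encoding.UTF8, "application/json");
        using var response = await Http.PostAsync(
            "https://api.openai.com/v1/chat/completions", content);

        Console.WriteLine($"Status: {(int)response.StatusCode}");
        // On HTTP 429 the response body explains which limit was hit.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```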

Sometimes rate limits can be triggered by short bursts of rapid requests; check whether that might be the issue. If bursts are the cause, one simple mitigation is to space the calls out or retry with backoff, as in the sketch below.
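A minimal retry sketch that honors the Retry-After header when the server sends one (the helper name and attempt count are just illustrative):

```csharp
// Minimal retry-with-backoff sketch for HTTP 429 responses.
// Assumes an already-configured HttpClient is passed in.
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

static class RetryHelper
{
    public static async Task<HttpResponseMessage> SendWithBackoffAsync(
        HttpClient http, Func<HttpRequestMessage> makeRequest, int maxAttempts = 5)
    {
        for (var attempt = 1; ; attempt++)
        {
            var response = await http.SendAsync(makeRequest());

            if (response.StatusCode != (HttpStatusCode)429 || attempt == maxAttempts)
                return response;

            // Prefer the server's Retry-After hint; otherwise back off exponentially.
            var delay = response.Headers.RetryAfter?.Delta
                        ?? TimeSpan.FromSeconds(Math.Pow(2, attempt));

            response.Dispose();
            await Task.Delay(delay);
        }
    }
}
```

Passing a request factory rather than a single HttpRequestMessage matters here, since a request message can't be sent more than once.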

Just to note: a ChatGPT subscription (the only place where you'd be on "Team") has no bearing on your ability to use, and to pay for, API-based services.

If you’ve gone through the new “start building” onboarding steps on platform.openai.com, it’s possible you were given some “first test” API calls for free (this is neither documented nor entirely reliable). That shouldn’t entitle you to a “search” model on the free tier, though, so that might not be it.

But the simple fact is: the API is pay-per-use, and you must have purchased a credit balance to fund your usage.

If you have funded your use: note the quite low pooled tokens-per-minute rate of 30,000 tokens total for gpt-4o at a tier-1 organization, counting both your input and output. If you empty your pool faster than the rate-limit algorithm refills it, you’ll get cut off (and the message body in the error response will have more information). The calls shown shouldn’t trigger that, though.
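If you want to see which pool is actually draining, the API returns rate-limit headers on each response (header names per OpenAI's rate-limit documentation). A rough sketch of dumping them from the HttpResponseMessage you already have:

```csharp
// Sketch: print the rate-limit headers from a response so you can see
// which pool (requests or tokens) is being exhausted.
using System;
using System.Linq;
using System.Net.Http;

static class RateLimitInspector
{
    public static void Print(HttpResponseMessage response)
    {
        // Header names as documented in OpenAI's rate-limit guide.
        string[] names =
        {
            "x-ratelimit-limit-requests", "x-ratelimit-remaining-requests",
            "x-ratelimit-reset-requests", "x-ratelimit-limit-tokens",
            "x-ratelimit-remaining-tokens", "x-ratelimit-reset-tokens"
        };

        foreach (var name in names)
        {
            if (response.Headers.TryGetValues(name, out var values))
                Console.WriteLine($"{name}: {values.First()}");
        }
    }
}
```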

Thank you, Jay

But we do have credit.