Hello,
Sorry if this has been asked before; I couldn’t find anything…
According to this guide: OpenAI Platform
“Note that unsuccessful requests contribute to your per-minute limit, so continuously resending a request won’t work.”
So prompts sent after the TPM limit is hit still count against your TPM limit. My question is: do they also get charged to your account? In other words, are you still paying the $0.03/1k tokens for gpt-4 on prompts that get rejected for exceeding the TPM limit? If so I’ll have to be a little more clever about rate limit handling…
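For context, by “more clever” I mean something like catching the rate-limit error and retrying with exponential backoff instead of immediately resending (which, per the quote above, just burns more of the limit). A rough sketch of what I’m picturing, assuming the pre-1.0 `openai` Python package; the model name, retry count, and delays are just placeholder values:

```python
import random
import time

import openai  # pre-1.0 "openai" package; the 1.x client names differ


def chat_with_backoff(messages, model="gpt-4", max_retries=6):
    """Call the chat endpoint, backing off when the rate limit is hit.

    Resending immediately doesn't help, since the rejected request still
    counts against the per-minute limit, so the wait doubles each attempt.
    """
    delay = 5.0  # seconds; starting point is a guess, tune to your TPM tier
    for _ in range(max_retries):
        try:
            return openai.ChatCompletion.create(model=model, messages=messages)
        except openai.error.RateLimitError:
            # add jitter so parallel workers don't all retry in lockstep
            time.sleep(delay + random.uniform(0, 1))
            delay *= 2
    raise RuntimeError(f"still rate limited after {max_retries} retries")
```

But that only makes sense if the rejected requests aren’t billed, hence the question.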
I don’t know categorically what happens, but here’s my recent experience, since I came to the forums to discuss the same thing; hopefully someone can learn from my mistakes. I was unaware of the TPM rate limit, so I fired off a chain prompt that sends 61 x 6,000-token chunks one at a time. I hit the TPM rate limit after 11 requests, and the remaining 50 were rejected. It looks like OpenAI billed me for 12.
After that, I added a per-call cooldown of 13 seconds whenever the selected model is “gpt-4”, and I now seem to be coming in at around half of my total TPM limit, which for my little account is 40,000 tokens per minute.
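In case it helps, the throttle is roughly the sketch below (again assuming the pre-1.0 `openai` package; the 13-second figure and the ~6,000-token chunk size are specific to my account’s 40,000 TPM cap, so adjust for your own limits):

```python
import time

import openai  # pre-1.0 "openai" package

COOLDOWN_SECONDS = 13  # spaces out ~6,000-token requests to stay under the 40,000 TPM cap


def run_chunks(chunks, model="gpt-4"):
    """Send each chunk in sequence, sleeping between calls to stay under the TPM cap."""
    results = []
    for chunk in chunks:
        response = openai.ChatCompletion.create(
            model=model,
            messages=[{"role": "user", "content": chunk}],
        )
        results.append(response)
        if model == "gpt-4":  # only gpt-4 has the tight limit on my account
            time.sleep(COOLDOWN_SECONDS)
    return results
```

A proper fix would count the tokens in each request and sleep only as long as needed, but the fixed cooldown has been good enough for my use so far.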