I’m getting this message even though I had no runs today. I’m being billed for every error as I try to fix it. Getting billed for an error is a terrible UX, and since I’ve done nothing that I know of to get rate limited, it’s incredibly frustrating. Am I missing something?
Rate limit reached for gpt-4-turbo-preview in organization org-[redacted] on tokens per min (TPM): Limit 10000, Used 7095, Requested 4096. Please try again in 7.146s. Visit https://platform.openai.com/account/rate-limits to learn more.
I try again in 30 seconds and get another similar error… ugh.
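For what it’s worth, since the error message includes a suggested wait time, an exponential-backoff retry keeps you from re-sending immediately and hitting the same limit. Just a minimal sketch using the current `openai` Python client; the model name, prompt, and `max_tokens` value here are placeholders:

```python
import time
from openai import OpenAI, RateLimitError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chat_with_backoff(messages, max_retries=5):
    """Retry on RateLimitError with exponential backoff instead of re-sending right away."""
    delay = 2.0
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(
                model="gpt-4-turbo-preview",
                messages=messages,
                max_tokens=512,  # keep the per-request token ask well under the 10k TPM cap
            )
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            time.sleep(delay)  # wait before retrying; doubles each attempt
            delay *= 2

response = chat_with_backoff([{"role": "user", "content": "Hello"}])
print(response.choices[0].message.content)
```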
You could try generating a new API key in case your existing key has somehow been exposed. You could also double-check the security and firewall settings on your server, if you’re running one, just to be sure nobody else has access to your key.
Thanks folks - I removed some of the files and it seems to be working now. The error was rate limiting, but the underlying issue was the number of tokens I’m passing into the assistant. I’m switching to LangChain with a vector DB to address the scale of my data.
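In case it helps anyone hitting the same wall: you can count the prompt tokens locally with tiktoken before sending, and chunk or trim anything that would blow past the per-minute limit. A rough sketch, assuming the cl100k_base encoding used by GPT-4-class models and ignoring the small per-message overhead:

```python
import tiktoken

# cl100k_base is the encoding used by GPT-4-class models
enc = tiktoken.get_encoding("cl100k_base")

TPM_LIMIT = 10_000          # tokens-per-minute cap from the error message
COMPLETION_BUDGET = 4_096   # max_tokens requested for the completion

def prompt_token_count(messages):
    """Approximate token count of the chat messages."""
    return sum(len(enc.encode(m["content"])) for m in messages)

def fits_budget(messages):
    """True if prompt + requested completion stays under the per-minute cap."""
    return prompt_token_count(messages) + COMPLETION_BUDGET < TPM_LIMIT

messages = [{"role": "user", "content": "…long document text here…"}]
if not fits_budget(messages):
    print("Prompt too large - chunk the files or retrieve only the relevant pieces.")
```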