250,000 davinci tokens / minute (and proportionally more for smaller models)
We’ve been on pay-as-you-go for more than a couple of weeks and we’re nowhere close to that limit, yet we are randomly receiving a 429 error:
error": { “message”: “The server had an error while processing your request. Sorry about that!”, “type”: “server_error”, “param”: null, “code”: null } } ]
Exception: {“ClassName”:“System.Net.WebException”,“Message”:“The remote server returned an error: (429) Too Many Requests.”
Just this morning the error was received after the 3rd request (each request contains around 1K tokens in total).
Also, we’re not based in the US. Is there a way we can upgrade our account to get faster response times? Davinci takes 10 seconds on average, and sometimes it takes more than 30 seconds to get a response.
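In case it helps anyone hitting the same 429s, below is a minimal sketch of how the call can be wrapped in a retry-with-exponential-backoff loop from .NET. The endpoint URL and model are the standard completions ones; the max_tokens, retry count, and backoff values are just illustrative assumptions, not an official recommendation.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class CompletionRetrySketch
{
    static readonly HttpClient Http = new HttpClient();
    static readonly Random Rng = new Random();

    // Retries 429 and 5xx responses with exponential backoff plus jitter.
    // Model, max_tokens, retry count, and backoff values are illustrative only.
    public static async Task<string> CompleteWithRetryAsync(string apiKey, string prompt, int maxRetries = 5)
    {
        string body = JsonSerializer.Serialize(new
        {
            model = "text-davinci-003",
            prompt = prompt,
            max_tokens = 256
        });

        for (int attempt = 0; ; attempt++)
        {
            using var request = new HttpRequestMessage(HttpMethod.Post, "https://api.openai.com/v1/completions");
            request.Headers.Add("Authorization", $"Bearer {apiKey}");
            request.Content = new StringContent(body, Encoding.UTF8, "application/json");

            using HttpResponseMessage response = await Http.SendAsync(request);
            string content = await response.Content.ReadAsStringAsync();

            if (response.IsSuccessStatusCode)
                return content;

            int status = (int)response.StatusCode;
            bool transient = status == 429 || status >= 500;

            if (!transient || attempt >= maxRetries)
                throw new HttpRequestException($"Request failed after {attempt + 1} attempt(s), status {status}: {content}");

            // Exponential backoff: ~1s, 2s, 4s, ... plus a little random jitter.
            double delaySeconds = Math.Pow(2, attempt) + Rng.NextDouble();
            await Task.Delay(TimeSpan.FromSeconds(delaySeconds));
        }
    }
}
```

This doesn’t explain why the 429s happen while we’re under the limit, it just keeps our pipeline from falling over when they do.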
I guess OpenAI got too drunk on the success of ChatGPT, or just doesn’t care about the Playground anymore, because they really are blind to the problems, especially with text-davinci-003.
OpenAI asks God if it can go to heaven. OpenAI develops ChatGPT, but when Playground users say “fix the models,” OpenAI answers nothing and pretends everything is OK, so OpenAI must not go to heaven!