I’m using fine-tuned models with a request volume that is comfortably within the 60 requests/min (per end-user) rate limit. This is the error message I receive:
status: 429
statusText: Too Many Requests
message: The server is currently overloaded with other requests. Sorry about that! You can retry your request, or contact support@openai.com if the error persists.
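Since the message itself says the request can be retried, here is a minimal retry wrapper with exponential backoff that I use, in case it helps anyone. This is a sketch assuming the pre-1.0 `openai` Python package (where a 429 surfaces as `openai.error.RateLimitError`); the model name, `max_tokens`, and retry counts are placeholders to adjust.

```python
import time
import openai

def complete_with_retry(prompt, model, max_retries=5):
    """Call the completions endpoint, backing off and retrying on 429s."""
    for attempt in range(max_retries):
        try:
            return openai.Completion.create(
                model=model,       # e.g. "ft-your-model" (placeholder)
                prompt=prompt,
                max_tokens=64,
            )
        except openai.error.RateLimitError:
            # Exponential backoff: wait 2, 4, 8, 16... seconds between attempts
            wait = 2 ** (attempt + 1)
            print(f"Got 429, retrying in {wait}s (attempt {attempt + 1})")
            time.sleep(wait)
    raise RuntimeError("Still rate-limited after all retries")
```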
I think this was the model loading back into our shared capacity. It should work if you retry after a couple of minutes; we’re working on a few things to speed this up. It shouldn’t be an issue if you have continued usage.
Please message me if you continue to have trouble!
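Since continued usage seems to be what keeps the model loaded, one workaround until the loading is faster is a background thread that pings the fine-tune with a tiny request during idle periods. This is only a sketch, not an official recommendation; the 60-second interval is a guess, and the model name is a placeholder.

```python
import threading
import time
import openai

def keep_warm(model, interval_seconds=60):
    """Periodically send a tiny completion so the fine-tune stays loaded.

    The interval is a guess; tune it to whatever idle window seems to
    trigger the unload for you.
    """
    def ping():
        while True:
            try:
                openai.Completion.create(model=model, prompt=".", max_tokens=1)
            except openai.error.OpenAIError:
                pass  # best-effort: ignore failures on the keep-alive ping
            time.sleep(interval_seconds)

    t = threading.Thread(target=ping, daemon=True)
    t.start()
    return t

keep_warm("ft-your-model")  # placeholder model name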
I get that message while actively using the engine. Typically, when it’s been sitting idle I get it for about 15 seconds, then I’m okay for a little while, then it tends to 429 me occasionally. I’m pretty sure I’m the only user, and I’d say I do less than 2 requests a second, so maybe the shared pool thinks I’m not really busy.
I also notice that I have this experience on each fine-tune separately, so loading a second tune means I’m likely to have to wait 10 or so seconds before I get results. A warm-up loop like the sketch below helps me hide most of that.
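If the wait really is a per-model load, sending one throwaway request to every fine-tune up front, in parallel, makes the loads overlap instead of stacking. A sketch with hypothetical model names, again assuming the pre-1.0 `openai` package:

```python
from concurrent.futures import ThreadPoolExecutor
import openai

FINE_TUNES = ["ft-model-a", "ft-model-b"]  # hypothetical names

def warm_up(model):
    """One throwaway request to trigger the ~10s load before real traffic."""
    try:
        openai.Completion.create(model=model, prompt=".", max_tokens=1)
    except openai.error.OpenAIError:
        pass  # the load may still have been triggered even if this errors

# Warm all fine-tunes concurrently so the waits overlap instead of stacking
with ThreadPoolExecutor() as pool:
    list(pool.map(warm_up, FINE_TUNES))
```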
@luke, I continue to get this error, btw. I experimented with various scenarios and it’s not clear what causes it. It appears to be very inconsistent: sometimes after longer periods of inactivity, sometimes when there are ~2 requests within 5 seconds. I run 4 different fine-tuned models and it happens across all 4.
I am getting the same message when running my Python scripts with OpenAI in Jupyter notebooks. It’s frustrating, as none of the other APIs I’m hitting are causing errors; only this one, which slows me down.
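For notebook work, a retry decorator keeps the cells clean so the sporadic 429s just cost time instead of failing the run. A sketch using the third-party `tenacity` package (`pip install tenacity`); the wait bounds, attempt limit, and model name are placeholders:

```python
import openai
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

@retry(
    retry=retry_if_exception_type(openai.error.RateLimitError),
    wait=wait_exponential(min=2, max=60),  # 2s, 4s, ... capped at 60s
    stop=stop_after_attempt(6),            # give up after 6 tries
)
def complete(prompt, model="ft-your-model"):  # placeholder model name
    return openai.Completion.create(model=model, prompt=prompt, max_tokens=64)
```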