I was working with the fine-tuning API, and it started to become unstable about 15 minutes ago. Is anyone aware of an ongoing incident?
https://status.openai.com/ does not show any service degradation, so if there is an issue on their end it isn't reported yet. What errors are you receiving?
I am using sample datasets (fairly simple ones). It usually took less than a minute to process them; now it takes ages. Also, after the fine-tune reports a “success” result, whenever I run a completion against it I get a “That model is still being loaded. Please try again shortly.” response (status code 429).
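For reference, the call that triggers it is roughly the following (a minimal sketch; the fine-tuned model name is a placeholder, not my real one):

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Placeholder fine-tuned model name; the real one follows the usual
# "<base>:ft-<org>-<date>" pattern returned by the fine-tune job.
FINE_TUNED_MODEL = "curie:ft-personal-2023-01-01-00-00-00"

# Completion against the fine-tuned model; this is the call that
# currently comes back with 429 "That model is still being loaded."
response = openai.Completion.create(
    model=FINE_TUNED_MODEL,
    prompt="Some test prompt ->",
    max_tokens=32,
)
print(response["choices"][0]["text"])
```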
The HTTP 429 Too Many Requests response status code indicates the user has sent too many requests in a given amount of time (“rate limiting”). A Retry-After header might be included to this response indicating how long to wait before making a new request.
Source: 429 Too Many Requests - HTTP | MDN
Are you sending a lot of requests at once? Are you making requests at the same frequency as before? It sounds like a rate-limit threshold is being hit here.
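If it turns out to be the model still warming up rather than a true rate limit, a simple retry with exponential backoff usually rides out the transient 429s. A minimal sketch, assuming the openai Python package and a placeholder fine-tuned model name:

```python
import os
import time
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

def completion_with_backoff(prompt, model, max_retries=5):
    """Retry the completion on 429s, doubling the wait each time."""
    for attempt in range(max_retries):
        try:
            return openai.Completion.create(
                model=model,
                prompt=prompt,
                max_tokens=32,
            )
        except openai.error.RateLimitError:
            # Covers both genuine rate limiting and the
            # "model is still being loaded" 429; if the response
            # carries a Retry-After header you could honor it instead.
            wait = 2 ** attempt
            print(f"Got 429, retrying in {wait}s...")
            time.sleep(wait)
    raise RuntimeError("Still getting 429 after retries")

# Placeholder model name, for illustration only.
resp = completion_with_backoff(
    "Some test prompt ->", "curie:ft-personal-2023-01-01-00-00-00"
)
print(resp["choices"][0]["text"])
```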
I don’t think it is about rate limiting. I send at most 10 requests per minute.
Now I have started to get “The server is currently overloaded with other requests. Sorry about that! You can retry your request, or contact support@openai.com if the error persists.” I think something is going on on the server side. I’ll take a coffee break; let’s see if it’s fixed by then.
I’m experiencing the same while using fine-tuned models.
Facing the same “too many requests” issue with a fine-tuned model. Rate limiting can be ruled out (only 2 requests within 30 seconds).