Will there be a PRO API account?

I’m getting a lot of 429s from the API, and I’d be willing to pay my way out of them if possible. Will there be a PRO API account in the future, similar to the PRO ChatGPT accounts …?

1 Like

Yeah… I feel the same. I pay, but it’s no better than the free tokens, and it’s just getting worse.

OpenAI will see themselves being abandoned because the service is essentially unavailable.

Hey folks, please see the status of the API: https://status.openai.com. For context, the API has been available 99.38% of the time over the last 90 days.

> being abandoned because the service is essentially unavailable.

Not sure what you mean here. There is no Pro API account since the API already requires payment; if you pay for the API, you are already on the Pro account : )

2 Likes

The responses have become extremely slow and I see a lot of 429 errors (but I’m not even close to a rate limit, since I’m still in development mode). Something is definitely going on.
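In case it helps anyone else hitting 429s while developing: a minimal retry sketch with exponential backoff, assuming the pre-1.0 `openai` Python library and text-davinci-003 (adjust the model and parameters for your setup):

```python
import time
import openai  # pre-1.0 openai library assumed

openai.api_key = "sk-..."  # placeholder

def complete_with_retry(prompt, max_retries=5):
    """Retry a completion when the API returns 429 (rate limited / overloaded)."""
    delay = 1.0
    for attempt in range(max_retries):
        try:
            return openai.Completion.create(
                model="text-davinci-003",
                prompt=prompt,
                max_tokens=256,
            )
        except (openai.error.RateLimitError, openai.error.ServiceUnavailableError):
            # back off and try again: 1s, 2s, 4s, ...
            if attempt == max_retries - 1:
                raise
            time.sleep(delay)
            delay *= 2
```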

1 Like

Yeah, the model was down; it should be better now as services recover.

Hi Logan, I’m getting dozens of 429s every single day; it’s text-davinci-003 that’s overloaded. You’re doing an amazing job, and I understand you’re working around the clock to keep things running, and that it’s a “zoo” with you guys these days due to users signing up. But your 99% statistic is not representative of what we’re seeing out here where we’re actually using it. My guesstimate would be that it’s around 80% …

That said, I understand your issues, and I must say your product is simply amazing. I haven’t had this much fun with software development since I implemented Logo on my Oric 1 in 1982 :slight_smile:

I think it is both geo- and time-based.

In my geo and time zone I almost never experience a 429, even when many people in this community are complaining about it at the same time.

However, my thought is that the 429 errors are being generated by Cloudflare and not by OpenAI servers.

This is why, in my view, the OpenAI status pages and updates show everything green at times when many customers are seeing 429s, 1015s or even 408s.

If I were on the OpenAI infrastructure team, I would disable Cloudflare (at times, not always) to develop situational awareness of exactly how Cloudflare is impacting OpenAI customer requests, and determine whether the benefits of Cloudflare outweigh the harm.

Sometimes “too much security” and “too much control” can cause more damage than actual malicious actors, and it appears this may be the case with OpenAI and Cloudflare at this point in the OpenAI growth cycle.
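One rough way to check where a 429 is actually coming from is to call the endpoint directly and inspect the response: responses served by Cloudflare carry a `CF-RAY` header, and an error page generated at the edge is typically HTML, whereas errors from the API itself come back as JSON with an `error.message` field. A small diagnostic sketch (the key and prompt are placeholders):

```python
import requests

API_KEY = "sk-..."  # placeholder

resp = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"model": "text-davinci-003", "prompt": "Hello", "max_tokens": 5},
    timeout=60,
)

print("status:", resp.status_code)
print("server:", resp.headers.get("Server"))   # Cloudflare identifies itself here
print("cf-ray:", resp.headers.get("CF-RAY"))   # set on responses that passed through Cloudflare

if resp.ok:
    print("success")
elif "application/json" in resp.headers.get("Content-Type", ""):
    # JSON error body -> generated by the API backend
    print("API error:", resp.json().get("error", {}).get("message"))
else:
    # HTML body -> most likely an error page generated at the edge
    print("non-JSON body (first 200 chars):", resp.text[:200])
```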

2 Likes

You might be correct, but roughly half the time it returns a 429 to me, it says: “That model is currently overloaded”. I doubt that originates from Cloudflare. It seems to be related specifically to text-davinci-003, in fact … :confused:

Yeah, I have no idea really. Cloudflare errors can be customized by their customers, but I think you are correct.

Hope it improves soon for all concerned.

:slight_smile:

My main problem at the moment is that 90% of the time I don’t get a full text completion. I use OpenAI with an API key, and I also have the paid version.

EDIT: ok, my fault, I wasn’t handling error events on the stream ^^
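In case anyone else trips over the same thing: with streaming, errors can surface in the middle of the iteration rather than on the initial call, so the loop over the stream needs its own error handling. A minimal sketch, assuming the pre-1.0 `openai` library:

```python
import openai  # pre-1.0 openai library assumed

openai.api_key = "sk-..."  # placeholder

def stream_completion(prompt):
    """Stream a completion and handle errors raised mid-stream."""
    chunks = []
    try:
        stream = openai.Completion.create(
            model="text-davinci-003",
            prompt=prompt,
            max_tokens=256,
            stream=True,
        )
        for event in stream:
            # each streamed event carries a partial completion in choices[0].text
            chunks.append(event["choices"][0]["text"])
    except openai.error.OpenAIError as e:
        # the connection can drop or the model can report overload partway through
        print("stream interrupted:", e)
    return "".join(chunks)
```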

I would like to add my voice of concern in this posting too.

I am located in Germany, and our server makes API calls from AWS Frankfurt. For the last few days, especially in the morning here in Germany, the OpenAI endpoints have produced a lot of failures, even when the status page shows “GREEN” and all incidents are marked resolved.

My guess is that some kind of geo-based restriction is kicking in. The API was almost unusable from (some? all?) EU countries at peak time (daytime, Central European Time): many 429s (even though we were far from our rate limit), combined with very slow response times (e.g. over 30 seconds for requests that used to be handled in 3 seconds).

The issue seems to disappear in the evening, at least here in Germany (almost no 429s, and response times back in the normal range). Also, API processing is smoother today than yesterday.

What concerns me heavily is that maybe OpenAI doesn’t realize this. A status page showing 99.38% uptime for marketing is okay, but a Dev Advocate citing that number makes me feel gravely uneasy about it.

The numbers don’t reflect the quality of API calls here in the EU. The stability of the OpenAI API service is a great source of frustration for our team at the moment.

(note: the model & endpoint we are using are text-davinci-003 and the completions API)
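Since the status-page number and what we see on the ground diverge so much, it might help to log per-request success and latency from the affected region and share the numbers. Something like this tiny measurement sketch (model, prompt and sample size are just examples, using the pre-1.0 `openai` library):

```python
import time
import openai  # pre-1.0 openai library assumed

openai.api_key = "sk-..."  # placeholder

ok, failed, latencies = 0, 0, []
for _ in range(50):  # sample 50 calls
    start = time.time()
    try:
        openai.Completion.create(
            model="text-davinci-003",
            prompt="ping",
            max_tokens=1,
        )
        ok += 1
        latencies.append(time.time() - start)
    except openai.error.OpenAIError:
        failed += 1
    time.sleep(2)  # stay well below our rate limit

print(f"observed success rate: {ok / (ok + failed):.0%}")
if latencies:
    print(f"median latency: {sorted(latencies)[len(latencies) // 2]:.1f}s")
```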