The API is unusable and that is unacceptable. The downtime status site is intentionally misleading

I know it's not just me. Every time I try to develop, I'm hitting either a server-down error or that fake rate-limit message.

Why is that error even being served to people who are nowhere near the rate limit?

Why is the downtime site so inaccurate? The metrics are literally set up to obscure issues with the servers. It's obvious the obscurity is intentional, since it would be such a black mark if everyone knew how unreliable it was. My site isn't even deployed yet and I can't use it.

The downtime site should track the number of failed requests and the number of high-latency requests and actually, you know, display the proportion, like the INDUSTRY STANDARD, so that your PAYING users can tell if an error is on their end or OpenAI's.
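
To be concrete about what I'm asking for, here is a minimal sketch of those two metrics computed from a request log. The names and the 10-second threshold are mine, purely illustrative, not anything OpenAI publishes.

```python
# Purely illustrative sketch (my own names and thresholds, not OpenAI's):
# the two numbers a status page should surface are the share of failed
# requests and the share of abnormally slow ones.
from dataclasses import dataclass

@dataclass
class RequestRecord:
    status_code: int   # HTTP status the API returned
    latency_ms: float  # end-to-end response time

HIGH_LATENCY_MS = 10_000  # example threshold for a "degraded" response

def availability_metrics(log: list[RequestRecord]) -> dict[str, float]:
    total = len(log)
    if total == 0:
        return {"error_rate": 0.0, "high_latency_rate": 0.0}
    failed = sum(1 for r in log if r.status_code >= 500)
    slow = sum(1 for r in log if r.status_code < 500 and r.latency_ms > HIGH_LATENCY_MS)
    return {"error_rate": failed / total, "high_latency_rate": slow / total}
```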

3 Likes

Because the system is in beta and the error messages are “imprecise” due to the exponential growth of OpenAI after their public release of ChatGPT and the crazy media hype.

Same reason as above.

No, the perceived issues are not intentional. It’s always best not to assign malice to situations more easily explained by “lack of resources”, “programming errors”, etc.

Yes, it should, and it is in BETA. It's not really fair or accurate to compare a public "marketing beta" caught in a tornado of media- and marketing-generated hype to full production-quality services that are not in an exponential-growth, early-release hype cycle.

I understand your anguish and frustration, @silverstein.o, and am only pointing out that no company could accurately predict and prepare for the exponential growth in OpenAI users this year. Hiring people takes time. Getting people "up to speed" takes time.

3 Likes

I hear you, and I fully acknowledge that the engineers at OpenAI are not to blame per se. That being said, I think there needs to be a separation between beta models and beta infrastructure. The infrastructure has had a long time to mature at this point, and the fact that there is huge, obvious low-hanging fruit needs to be highlighted at some point. When is a reasonable time to have normal expectations? I've spent $10k plus on their API and I can't even access my models.

I do actually disagree on the intentionality point. The site is either intentionally misleading or the person who made it doesn't know which metrics are useful. I don't think someone incompetent made it, and I don't think they were rushed. Truly, the site is pretty well built; they just selected numbers that mean nothing. They literally have a hidden threshold for what even counts as a major outage, etc.

It makes business sense not to publicize that 20-30% of requests fail, return error codes, or come back with high latency, especially when the industry standard for API uptime is 99%+. It's intentional, and I'm gently suggesting: please just be transparent.
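
Just to put rough numbers on that gap: a 99.9% uptime target allows about 43 minutes of downtime per month (0.001 × 30 × 24 × 60 ≈ 43), while a 20-30% request failure rate is the rough equivalent of the service being unavailable for 6-9 days out of every month.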

I don’t think so… the last I read, OpenAI gained something like 100 Million (!) global users since December 2022. That number is off the top of my head, but the world is a big place and it seems everyone is using / hyping / playing with / experimenting with OpenAI in 2023.

Yeah, I hear ya. Over my career as a systems engineer, I have always advised businesses not to invest at this stage of the growth / hype cycle, and to wait to be a "late early adopter" precisely because of this risk.

Transparency is very hard to sell inside an organization which has shifted to “for profit” and has powerful C level marketing and sales employees trying to generate revenue.

What you said about "transparency" applies to just about every business, especially all the Meta-owned companies, Google, etc. Big tech is not really "very big (haha)" on transparency when profit-driven.

I fully understand your frustration, especially if you have put $10K USD into your OpenAI account.

Honestly, I think it will take “months” not “days” or “weeks” for things to “settle down” inside OpenAI.

Thanks, yeah. I'm way less experienced, so I didn't know what risk I was taking.

Fortunately, I'm moving to custom models now, but unfortunately I need funding to fully train them. It's so frustrating when I'm trying to demo to investors and it just doesn't work. Eight for eight on that one, ugh. Videos don't hit the same, and I can't send demos because it fails so frequently. Just venting!

At least OpenAI doesn't have to be an API provider anymore to survive, haha.

Don’t feel too bad, @silverstein.o

More developers than not have drunk the Kool-Aid and are looking to sell GPT products and services in this part of the growth / hype cycle.

This type of tech "gold rush" will create some "winners" and a lot more "losers," so all investments are a kind of gamble, a bet, and if you don't take a risk you cannot win.

Personally, I would never take any product which relies on the OpenAI API (at this point in time) to deep-pocketed investors, because I hate to be embarrassed, and the infrastructure has not "caught up" to the growth / hype cycle.

As mentioned, and you have acknowledged, it is risky to be an “early adopter” in this phase of the hype cycle.

OpenAI has stated (somewhere) that their current offering is a kind of “beta, marketing, research” offering, so everyone who invests now is, without a doubt, taking a risk.

Edit Note:

Personally, I have passed on at least two commercial projects that would have been built on OpenAI's API. I am "truth and ethics bound" as an engineer, and so I have advised everyone who has contacted me to wait a bit longer, because things are changing rapidly: the ChatGPT API has not been released, the infrastructure is fragile at this time, and Google may come in and disrupt OpenAI's current position in the market.

Haha, I’m just a boring “low risk” person, I guess … haha…

2 Likes

I wonder how Jasper and other established AI companies are coping. Surely they are having similar issues, unless they use different models and/or infrastructure.

I checked recently, and the news media reported that Jasper had grown to 100K users:

Jasper News Blurb

OpenAI News Blurb

I think a very small sys admin team can handle 100K users.

100 million users is a very different (1000x) story.

I may have wrongly assumed that Jasper (and others) use OpenAI. If they do, I wonder if they are on the same infrastructure.

I checked their sites, and they don’t seem to report any outages.

It is difficult to reconcile this because almost everyone has experienced timeouts or server errors in the last month.

Sorry, but I couldn't resist answering. Horrible customer service has nothing to do with BETA. Ignoring customers, no answers or explanations, no honest notifications: this is the corporate culture of a company that is already about six years old.

1 Like

You seem to know the situation better than others. Do you know if the erratic limits are IP-based, API-key-based, or both?

From what I read, API-key-based limits are a factor. But since the limits seem much lower in practice, I would guess they're IP-based too.

It would also be much simpler for OpenAI to do the load balancing earlier in the request path than to look up individual API-key usage.
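
Whatever the limits are actually keyed on, the only workaround on our side is to back off and retry. A rough sketch of what I mean, assuming plain HTTPS calls with the `requests` library; the endpoint, timeout, and backoff numbers are just examples:

```python
import os
import time

import requests

API_URL = "https://api.openai.com/v1/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

def post_with_backoff(payload: dict, max_retries: int = 5) -> dict:
    """Retry on 429 (rate limit) and 5xx (server trouble) with exponential backoff."""
    delay = 1.0
    last_status = None
    for _ in range(max_retries):
        resp = requests.post(API_URL, headers=HEADERS, json=payload, timeout=60)
        last_status = resp.status_code
        if resp.ok:
            return resp.json()
        if resp.status_code == 429 or resp.status_code >= 500:
            time.sleep(delay)    # wait, then try again; double the wait each time
            delay *= 2
            continue
        resp.raise_for_status()  # other 4xx errors are on us, don't retry
    raise RuntimeError(f"Still failing after {max_retries} attempts (last status {last_status})")
```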

I can only guess @webtailken because I do not have this problem to be honest.

My “guess” is that it is caused mostly by the Cloudflare config on the ChatGPT side and some IP-based rules on the API side.

I “think” OpenAI is very good at ML and GPT but not so good at server farms and systems and network management (and customer service).

They are doing better, IMO. But Jasper is also known to have a good deal with OpenAI, so surely they have some kind of prioritization.

1 Like

Separate servers, different billing system, different access controls, etc.

It's not really a useful comparison, TBH.

Hey friends, I hope you're doing well. I wanted to let you know that I'm really disappointed with the API. Three months have passed and it's the same thing: we were relying on this API, and yet it's been really unreliable, with congested responses. This has been a real setback for the project I'm working on. I would really appreciate it if they could take steps to improve the service for the people who use the API and pay for it.

I really can't see building something on this API anytime soon. Aside from that, and thinking positively, I hope OpenAI is not just using us and our data to build future solutions like the ones we have been working on while we train and test models.

We are thinking of using something like gpt4all. It would be nice if, in the future, OpenAI released a standalone model to run locally on our own infrastructure, or something like that.
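
For reference, the local route looks roughly this simple with the gpt4all Python bindings; a rough sketch, where the model file name is just an example and the weights are downloaded on first use:

```python
# Rough sketch of local inference with the gpt4all Python bindings (pip install gpt4all).
# The model file name below is only an example; any model the library offers will do.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin")  # downloads the weights on first use
reply = model.generate("Explain why local inference avoids API outages.", max_tokens=200)
print(reply)
```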