We proved the API is intentionally slow

So, after posting here before, we ran an extensive test to confirm that OpenAI is intentionally making the API slow, and that it is not our imagination.

Our prompts produce code only.
We took 20 similar prompts, and for each one we tried it via the API and the website at the same time.
Not only that, we also tried the same 20 at different times of the day.
When testing online, we also counted the time taken by the typing effect, since it is not yet clear whether it is artificial.
We set up the API to run from servers close to OpenAI, both on the West Coast and in US Central.

The result is not surprising.
On average (over the 20 prompts), the API response time was 4.5x slower.

Here are some of the results:

  1. 38s API, 7.8s online
  2. 18s API, 6.3s online
  3. 9.8s API, 5.2s online
  4. 45s API, 8s online
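For anyone who wants to reproduce this kind of comparison, here is a minimal timing-harness sketch in Python. The `api_fn` and `web_fn` callables are hypothetical stand-ins for however you issue the API request and the website request (the latter would need browser automation or manual stopwatch timing):

```python
import statistics
import time


def time_call(fn, *args, **kwargs):
    """Return (elapsed_seconds, result) for a single call to fn."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return time.perf_counter() - start, result


def compare_latency(prompts, api_fn, web_fn):
    """Average latency ratio (API / web) over a list of prompts."""
    api_times, web_times = [], []
    for prompt in prompts:
        t_api, _ = time_call(api_fn, prompt)
        t_web, _ = time_call(web_fn, prompt)
        api_times.append(t_api)
        web_times.append(t_web)
    return statistics.mean(api_times) / statistics.mean(web_times)
```

With real request functions plugged in, the returned ratio corresponds to the 4.5x figure above.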

So,

OpenAI - are you limiting developers? If so, can you declare this, and also the date you will stop doing it (if ever)?

20 Likes

I may be in the minority here, but I don’t mind the looooooong response times; it’s still a service in development, and you can still develop and test your product. That’s good enough while the waitlist is still being processed.

Let’s say that company A gains access to GPT-4 and shorter response times, enabling it to deploy its service, while company B is still stuck on the waiting list and can’t launch. Company B might get angry and sue OpenAI, claiming anti-competitive or unfair business practices.

I’d much prefer that OpenAI just open the waitlist floodgates and focus on improving their servers; sure, the response time will be worse, but only temporarily.

What are you talking about? :slight_smile: There is no waiting list here; everyone has a slow API because they just don’t want you to be able to build anything big enough yet, before they figure out their business.
That would be fine if they came out and said it, instead of telling us it’s due to “network” problems; then we would know we should leave.

I think they are making a big mistake, because the next competitor is coming soon.

4 Likes

I’m trying to say that I think it’s great that OpenAI prioritizes giving access to as many people as possible.

The response time is proportional to the number of users: more people = more server load.

I mention the waitlist because a lot of the new features do increase server load, again adding to the response time.

2 Likes

Are you a programmer? Or did you at least read my post?
The same query takes 2 seconds on their website and 30 seconds via the API. This has nothing to do with server load.

5 Likes

It has everything to do with server load; of course they’re going to prioritize their own service :laughing:

2 Likes

No, it doesn’t. API requests are maybe 0.1% of their total traffic, so this has zero effect on them. They just don’t want you to go too fast as a developer, and they could simply say that clearly.

5 Likes

In theory, the priority of the website should be higher because it can be used for training GPT, while the API becomes slower as usage increases.

However, this is one of the largest services in the world, with over a billion users. It seems they were not prepared with enough computing resources for this scale last year. :laughing:

2 Likes

Does anyone know whether using the same API key for all requests makes responses slower?

They don’t use the website to train; Sam already said they don’t. It’s more sophisticated than that.

1 Like

I also think it is highly likely that OpenAI intentionally slowed down our API speed, and my argument is as follows:
1. I have been using the API every day, and I pay close attention to its response speed with every request, so I have a very clear sense of how fast it usually is.
However, the API’s response time changed abruptly around April 6th, Beijing time: it suddenly jumped from around 400ms to several seconds.
If the slowdown were caused by growing user numbers and server overload, why was it not a gradual increase but a sudden step? (A sudden influx of that many users is also very unlikely; it would mean the user base multiplied several times in less than 24 hours.) This is worth everyone’s deep reflection.
2. If it were because there are too many people, the slowdown should also vary by time of day. Why is it equally slow 24 hours a day?
Both of these arguments point to OpenAI intentionally slowing down the API.
I really hope my inference is wrong, but can anyone overturn the argument above?

5 Likes

EXACTLY !
But nobody here seems to care. I am sure developers will make their choices once they realize there are other options.

1 Like

I hope OpenAI can come out and explain what I said.

“That model is currently overloaded with other requests. You can retry your request”
This message has been coming from the API for an hour now, while online there is full, fast access.

We made our decision: we are out. You can’t base a company on this, and they don’t want you to.
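For what it’s worth, the usual workaround for the “overloaded” error is to retry with exponential backoff. Here is a minimal sketch, where `call` is a placeholder for whatever function actually issues the API request:

```python
import random
import time


def retry_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` on failure, doubling the wait each attempt.

    Random jitter is added to each delay so that many clients
    retrying at once don't all hit the server simultaneously.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```

This doesn’t fix the underlying capacity problem, but it turns intermittent overload errors into delayed responses instead of hard failures.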

2 Likes

It is very likely that the stack servicing the API requests is completely different from the stack servicing the web requests, and that the API stack is sized differently than the web stack. There are times when I’m using the APIs and they just time out; there are other times when they’re pretty fast. For a service as new and fresh as OpenAI’s APIs, I feel like they are doing a pretty decent job and letting us have it very cheaply. No complaints here.

1 Like

I don’t see any reason why it’s not as fast as the Playground. If, as someone said, they have over a billion users and can handle that, then they can handle a small number of developer API calls. The name is ChatGPT, but it’s not exactly chatting if I have to wait nearly a minute for a response; in fact, developer API calls should be faster than even the Playground or Bing. It’s clear they are deliberately doing this. Slow API calls make the whole thing almost useless. We would expect more transparency and a truthful response from OpenAI.

5 Likes

Yes, we have found it is very slow and not acceptable.

1 Like

But…ChatGPT has feedback buttons, and anyone can help train it… :joy:

Is this an English channel? Ask GPT to translate from German to English; have some common sense!

2 Likes

So OpenAI can support 1 billion people but not, say, 200 developers? What the freakin hell is wrong with you guys, OpenAI? Will you have the audacity to respond and fix the stupidly slow API calls? Or are you too busy making money?

3 Likes