On a website that tracks OpenAI API response times, I saw that free trial accounts are faster than paid accounts. I tried it myself and found that it is indeed true. Why is this? Is it unfair to paid users? The speed difference is significant, too: the free trial is on average 2-3 times faster than a paid account.
This is not true; there is no inherent speed difference between a free trial and a paid account. If the website shows different speeds, it's likely because their benchmarking code is incorrect.
As proof, here is my own test using chatbot-ui with GPT-3.5-turbo. On the left is my own API key, from a paid account with a ChatGPT Plus subscription. On the right is my friend's API key, which is not bound to a credit card and has $5 of free credit.
There is no difference in the speed on the API side.
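If anyone wants to check this for themselves, here is a minimal benchmark sketch using the openai Python SDK (v1+). The key strings, prompt, and run count are placeholders; it simply times full non-streaming completions against both keys from the same machine:

```python
import time
from openai import OpenAI  # assumes openai Python SDK v1+

# Hypothetical placeholder keys -- substitute your own.
KEYS = {
    "paid": "sk-paid-placeholder",
    "free_trial": "sk-free-placeholder",
}

MESSAGES = [{"role": "user", "content": "Write one sentence about the weather."}]
RUNS = 5

# One client per key, reused across runs.
clients = {label: OpenAI(api_key=key) for label, key in KEYS.items()}
timings = {label: [] for label in KEYS}

# Interleave the runs so both keys see the same load conditions.
for _ in range(RUNS):
    for label, client in clients.items():
        start = time.perf_counter()
        client.chat.completions.create(model="gpt-3.5-turbo", messages=MESSAGES)
        timings[label].append(time.perf_counter() - start)

for label, ts in timings.items():
    print(f"{label}: mean {sum(ts)/len(ts):.2f}s over {len(ts)} runs")
```

Interleaving the requests rather than running one key's batch and then the other's avoids attributing time-of-day load swings to a particular key, which is one of the easier mistakes to make in this kind of benchmark.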
Your two accounts are likely being served by different datacenters. Resource constraints affect different users at different times.
Your network latency also makes a huge difference in the speed of the data stream.
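One way to separate network latency from generation speed is to stream the response and measure time-to-first-token apart from total time. A rough sketch, again assuming the openai Python SDK v1+ with a placeholder key:

```python
import time
from openai import OpenAI  # assumes openai Python SDK v1+

client = OpenAI(api_key="sk-placeholder")  # substitute your own key

start = time.perf_counter()
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello."}],
    stream=True,
)

first_token = None
for chunk in stream:
    # The first chunk may carry only the role, so wait for actual content.
    if first_token is None and chunk.choices and chunk.choices[0].delta.content:
        first_token = time.perf_counter() - start
total = time.perf_counter() - start

print(f"time to first token: {first_token:.2f}s, total: {total:.2f}s")
```

Roughly speaking, a long wait for the first token suggests queueing or load on the serving side, while slow token delivery after that could be either generation speed or the network path.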
I found the same problem. I used to have two accounts, one free and the other paid. I called OpenAI from both accounts on the same machine and clearly found that the free account was much faster than the paid account. After I bound my free account to a credit card, that account also slowed down.
I hope the OpenAI staff can explain.
I have run tests before and found that a free account is indeed much faster than a paid account. I tested two accounts on the same machine at the same time. It's like this every time.
I switched over to paid yesterday and noticed my Siri shortcut takes longer to generate the GPT-3.5 response than when I was using my $18 of credits.
Why say anything without any knowledge? Have you tested this? It's clearly a thing. It might not be intentional or documented anywhere, which is exactly why bringing these things up is good, so they can fix it if it is a problem.
Judging from this result, it is true that the free account's response time is shorter. Can someone please explain?
I wanted to say “There’s no difference”
… but since people are experiencing it, I'll just be quiet and follow the thread…
We have the exact same issue. Our GPT-3.5 Turbo API calls were taking a VERY long time, so on the exact same machine, with the same code, we simply replaced our API key with a free trial key. The response time is easily 4 to 5 times faster. This is a serious problem; can anyone at OpenAI give an update? We're getting ready to pause development of our product because of it, and I know similar discussions are going on at other companies having the same problem. Many thanks for any feedback.
I am also facing the same issue. Thanks for bringing this up.