There is a demo here where you can test request speed: https://chat.chatgptvietnam.org
Hi, we are also experiencing this.
“cURL error 6: Could not resolve host: api.openai.com (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for https://api.openai.com/v1/chat/completions”
I use cURL with PHP. The problem is not that the site is slow; it is that the API takes a very long time to finish. So, if you are not using the stream feature, you have to wait until the very end, which can even result in server timeouts. In that case, increase the time limits in your php.ini and the timeout in your Apache server settings.
Something like this could help:
// Start streaming response
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function($curl, $chunk) {
    echo $chunk;           // forward the received data to the client
    flush();               // push it out immediately instead of buffering
    return strlen($chunk); // tell cURL how many bytes we handled
});
Yes, our servers are in the Philippines, and it is really slow when it comes to prompting.
Is there any solution for this?
You will have to do a lot more work, but by setting stream to true you can give the user feedback while the model is responding.
It will look like the AI is typing on the screen.
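For anyone who wants that typing effect in Python, here is a minimal sketch of consuming streamed chunks. The chunk shape mirrors the Chat Completion stream events (`choices[0].delta.content`); the `stream_reply` helper and the `fake` list are illustrative, not part of the openai library:

```python
def stream_reply(chunks):
    """Consume streamed chat-completion chunks and yield each text
    delta as it arrives, so the caller can echo pieces immediately
    (the "AI is typing" effect)."""
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"].get("content", "")
        if delta:
            yield delta

# With the real API you would pass stream=True and feed the response
# iterator in here, e.g. (hypothetical usage):
#   resp = openai.ChatCompletion.create(model="gpt-3.5-turbo",
#                                       messages=msgs, stream=True)
#   for piece in stream_reply(resp):
#       print(piece, end="", flush=True)

# Offline demonstration with fake chunks shaped like the stream events:
fake = [{"choices": [{"delta": {"content": w}}]} for w in ("Ahoy", " there")]
print("".join(stream_reply(fake)))  # → "Ahoy there"
```

The key point is that each delta is handed to the user the moment it arrives, rather than waiting for the full completion.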
Does CURLOPT_WRITEFUNCTION actually stream the data as it arrives, instead of waiting for the full response to buffer?
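Yes: libcurl invokes the write callback once per received chunk, so you handle data incrementally rather than after the whole body has downloaded. The same idea in Python, sketched with an in-memory stream (the `consume_in_chunks` helper is illustrative, not a library function):

```python
import io

def consume_in_chunks(stream, on_chunk, chunk_size=1024):
    """Read from a response stream chunk by chunk, handing each piece
    to a callback as soon as it is read -- the same idea as cURL's
    CURLOPT_WRITEFUNCTION, which fires per received chunk instead of
    buffering the whole body first."""
    total = 0
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        on_chunk(chunk)          # e.g. echo straight to the client
        total += len(chunk)
    return total

# Offline demonstration with an in-memory stream:
received = []
n = consume_in_chunks(io.BytesIO(b"hello world"), received.append, chunk_size=4)
print(n, b"".join(received))  # 11 b'hello world'
```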
I hope they fix this issue; with latency like this, it is not suitable for production use.
I think people often confuse network delays, data-center congestion, etc. with API performance.
For example, I am 12 time zones away from the US and call the OpenAI completion API, and here are the results when I time the call:
text-davinci-003
Test 1: Completions.get_reply Time: 1.247792 secs
Test 2: Completions.get_reply Time: 5.038783 secs
Test 3: Completions.get_reply Time: 1.289555 secs
Test 4: Completions.get_reply Time: 2.205132 secs
Kindly keep in mind that I am testing OpenAI APIs from the opposite side of the world than the US.
Also, if I repeat this for other models, the results are similar. In my experience, it is mostly network traffic, not the models.
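To compare models fairly, it helps to time every call the same way. A minimal sketch of the kind of timing harness used for the numbers above (the `timed_call` wrapper is an assumption, not the poster's actual code):

```python
import time

def timed_call(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds), so latency can be
    measured identically for every model and every API call."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Against the real API this would look like (hypothetical usage):
#   _, secs = timed_call(openai.Completion.create,
#                        model="text-davinci-003", prompt="Hello")
#   print(f"Completion API Time: {secs:.6f} secs")

# Offline check with a stand-in function:
result, secs = timed_call(lambda: sum(range(1000)))
print(result, round(secs, 6))
```

Using `time.perf_counter` (a monotonic, high-resolution clock) avoids the drift you can get from wall-clock timestamps.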
Having said that, lately I have noticed that text-davinci-002 is about 0.5 seconds faster than text-davinci-003 (for the same prompt), but I did not test extensively.
Appendix: Example Test
Thank you
text-davinci-003
I used text-davinci-003 before and the average response was 0.5-0.8 s, but its price is high.
I have to use the gpt-3.5-turbo-0301 or gpt-3.5-turbo model to reduce cost, but it has the problem of slow processing times.
Will test turbo for you when back at my desk.
Thank you. I spend $30 a day on gpt-3.5-turbo; with text-davinci-003 it would be $300. LOL
Here are some test results (just now) for turbo:
gpt-3.5-turbo-0301
Test 1: Completion API Time: 1.529 seconds
Test 2: Completion API Time: 2.504 seconds
Test 3: Completion API Time: 1.557 seconds
Test 4: Completion API Time: 1.513 seconds
Test 5: Completion API Time: 1.505 seconds
Appendix: Sample Chat Completion with Time
HTH
You can see the query sample:
Py test: 12s
import openai
openai.api_key = "sk-..."
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell the world about the ChatGPT API in the style of a pirate."}],
)
print(completion)

Your software is very good; can you share some details? I just tried with Python and it took 12 s.
Thank you.
However, the completion time was around 13 seconds (see image) so it was not any faster than your 12s Python test case.
A few conclusions:
davinci’s return time is very fast.
The model of interest is gpt-3.5-turbo
With PHP cURL, it takes 10-20 s per query, and integrating streaming into cURL is quite difficult.
With JavaScript (Fetch or jQuery), triggering the data stream feels faster, but calling the API from the browser exposes the API key.
Currently, with Python, it takes 10-12 s. (March 15th, GMT+7)
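On the key-exposure point: the usual fix is to call the API from your own backend and have the browser talk only to that backend. A minimal sketch of the server-side request construction, assuming the key lives in an `OPENAI_API_KEY` environment variable (the `build_upstream_request` helper is illustrative, not a library function):

```python
import json
import os
import urllib.request

def build_upstream_request(user_message):
    """Build the server-side request to the OpenAI chat completions
    endpoint. The browser calls your backend route instead, so the
    API key never appears in client-side JavaScript."""
    body = json.dumps({
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": user_message}],
    }).encode()
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # Key read from the server environment, never sent to the browser.
            "Authorization": "Bearer " + os.environ.get("OPENAI_API_KEY", ""),
        },
    )

# Your backend would send this with urllib.request.urlopen(req) and
# relay (or stream) the response body back to the front-end.
req = build_upstream_request("hi")
print(req.full_url)
```

This also lets you add rate limiting and logging in one place, which browser-side calls cannot do.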
