cURL is slow, 15-30 s per request

My code example

    $url = ''; // chat completions endpoint URL goes here
    $model = "gpt-3.5-turbo";
    $header = array(
        'Authorization: Bearer ' . $API_KEY,
        'Content-Type: application/json',
    );

    $params = json_encode(array(
        'messages'          => $data,
        'model'             => $model,
        'temperature'       => 1,
        'max_tokens'        => 1500,
        'top_p'             => 1,
        'frequency_penalty' => 0,
        'presence_penalty'  => 0,
    ));

    $curl = curl_init($url);
    $options = array(
        CURLOPT_POST           => true,
        CURLOPT_HTTPHEADER     => $header,
        CURLOPT_POSTFIELDS     => $params,
        CURLOPT_RETURNTRANSFER => true,
        // Note: disabling peer verification is insecure; avoid in production
        CURLOPT_SSL_VERIFYPEER => false,
    );
    curl_setopt_array($curl, $options);
    $response = curl_exec($curl);
    curl_close($curl);

Very slow response times, in the 15-30 s range.

Update 08/03/2023: 8-9s/req


I have a demo here where you can test the request speed.

Hi, we are also experiencing this.

“cURL error 6: Could not resolve host”

Are you in one of the countries where the ping and cURL errors occur?

I do cURL with PHP. The problem is not that the site is slow; it's that the API takes a very long time to finish. So, if you are not using the stream feature, you have to wait until the end, which can even result in server timeouts. In that case, increase the time limits in your php.ini and the timeout in your Apache server settings.
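For example, a sketch of the relevant settings (illustrative values; the defaults vary by distribution):

```ini
; php.ini: raise the script execution limit (often 30 s by default)
max_execution_time = 300
```

In the Apache configuration, the `Timeout` directive (e.g. `Timeout 300`) similarly controls how long the server waits for a request to complete.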

Something like this could help:

// Stream the response as it arrives
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function($curl, $chunk) {
    echo $chunk;           // output received data to the client
    flush();               // push it to the browser immediately
    return strlen($chunk); // tell cURL how many bytes were handled
});


Yes, our servers are in the Philippines, and it's really slow in terms of prompting.


Is there any solution for this?


You will have to do a lot more work, but by setting stream to true you can give the user feedback while it is responding.

It will look like the AI is typing on the screen


It actually streams the response instead of waiting for the full response to buffer.
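On the Python side (assuming the same pre-1.0 `openai` library used elsewhere in this thread), consuming a streamed response might look like this sketch. The `collect_stream` helper is mine, not from the thread; the commented-out API call is hypothetical usage:

```python
def collect_stream(chunks):
    """Join the content deltas from a streamed chat completion,
    echoing each piece as it arrives."""
    text = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            print(delta["content"], end="", flush=True)  # show tokens live
            text.append(delta["content"])
    return "".join(text)

# Hypothetical usage (needs `import openai` and a valid API key):
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": "Hello"}],
#     stream=True,
# )
# full_text = collect_stream(response)
```

This is what produces the "AI is typing" effect: each delta is printed the moment it arrives instead of after the whole completion is done.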

I hope they fix this issue; it's not suitable for production grade if this latency issue persists.

I think people often confuse network delays, data center congestion, etc. with API performance.

For example, I am 12 time zones away from the US and call the OpenAI completion API, and here are the results when I time the call:


Test 1:  Completions.get_reply Time: 1.247792 secs
Test 2:  Completions.get_reply Time: 5.038783 secs
Test 3:  Completions.get_reply Time: 1.289555 secs
Test 4:  Completions.get_reply Time: 2.205132 secs

Kindly keep in mind that I am testing OpenAI APIs from the opposite side of the world than the US.

Also, if I repeat for other models, the results are similar. It’s mostly network traffic issues, not model issues, from my experience.

Having said that, lately I have noticed that text-davinci-002 is about 0.5 seconds faster than text-davinci-003 (for the same prompt), but did not test extensively.


Appendix: Example Test
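The appendix code itself is not reproduced here, but a minimal sketch of such a timing test might look like this. The `timed` helper is mine, not from the post; the commented-out call assumes the pre-1.0 `openai` client:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Hypothetical usage:
# reply, secs = timed(
#     openai.ChatCompletion.create,
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": "Hi"}],
# )
# print(f"Completion API Time: {secs:.3f} seconds")
```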


Thank you


I used text-davinci-003 before and the average response was 0.5-0.8 s, but its price is high.
I have to use the gpt-3.5-turbo-0301 or gpt-3.5-turbo model to reduce the cost, but it has the problem of slow processing time.

Will test turbo for you when back at my desk.



Thank you. I am spending $30/day on gpt-3.5-turbo; it would be $300 with text-davinci-003 LOL :rofl: :rofl:

Here are some test results (just now) for turbo:


Test 1: Completion API Time: 1.529 seconds
Test 2: Completion API Time: 2.504 seconds
Test 3: Completion API Time: 1.557 seconds
Test 4: Completion API Time: 1.513 seconds
Test 5: Completion API Time: 1.505 seconds

Appendix: Sample Chat Completion with Time




Unexpectedly, you can see the query sample :kissing_closed_eyes:

Py test: 12s

import openai

openai.api_key = "sk-..."

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # model argument was missing in the original post
    messages=[{"role": "user", "content": "Tell the world about the ChatGPT API in the style of a pirate."}],
)
print(completion.choices[0].message.content)



Same results from here at this time:



Your setup is so fast; can you give me more information? I just tried with Python and it took 12 s.