Looping through list of prompts not working

I am trying to get a long article written via the gpt-3.5-turbo API (Chat Completions) using PHP. First I ask it to generate an outline for a topic, then I loop through the outline and ask it to elaborate on each point, intending to eventually combine them all into one article. But it always stops in the middle of the loop. In my script it usually stops between the 5th and 8th iteration, and via Postman it stops after the 2nd iteration. It never returns an error. I doubt it's hitting the API rate limit, because the outline is usually only 20-40 points long and each response is about 500 tokens.

Outline array:

[0] => Definition of ADHD
[1] => Prevalence and impact of ADHD
[2] => Common symptoms of ADHD
[3] => Diagnostic criteria for ADHD
[4] => Inattentive type
[5] => Hyperactive-impulsive type
[6] => Combined type
[7] => Genetic factors
[8] => Environmental factors
[9] => Brain structure and function
[10] => Anxiety disorders
[11] => Mood disorders
[12] => Learning disabilities
[13] => Medications for ADHD
[14] => Behavioral therapy
[15] => Support and accommodations
[16] => Organization and time management techniques
[17] => Exercise and physical activity
[18] => Mindfulness and stress reduction techniques
[19] => Challenges faced in academic settings
[20] => Strategies for success in school and work
[21] => Symptoms and challenges in adulthood
[22] => Strategies for managing ADHD as an adult
[23] => Support groups and organizations
[24] => Resources for individuals and families


$api_key = "API_KEY";

foreach ($outlinePoints as $point) {
    $prompt = "Elaborate on the sub-point related to 'ADHD': $point";
    $messages = [
        ["role" => "system", "content" => "You are a helpful assistant."],
        ["role" => "user", "content" => $prompt]
    ];
    $ch = curl_init($endpoint);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_POST, 1);
    curl_setopt($ch, CURLOPT_HTTPHEADER, array(
        "Authorization: Bearer $api_key",
        "Content-Type: application/json"
    ));
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode(array("model" => "gpt-3.5-turbo", "messages" => $messages)));
    $response = curl_exec($ch);
    if (curl_errno($ch)) {
        echo "cURL Error: " . curl_error($ch);
    } else {
        $response = json_decode($response, true);
        echo $response['choices'][0]['message']['content'];
    }
    curl_close($ch);
}

I am using the 'v1/chat/completions' endpoint.

Any idea why it won’t finish the loop?

Have you tried adding a split-second sleep at the end of the loop (500 ms)?
This sounds like a race condition to me.
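A minimal sketch of the idea, assuming the question's `$outlinePoints` array:

```php
<?php
// Pause ~500 ms after each request to rule out a timing issue.
foreach ($outlinePoints ?? [] as $point) {
    // ... make the API call here ...
    usleep(500000); // 500,000 microseconds = 500 ms
}
```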


You can also check the API headers to rule out issues on the API side.
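For example, a sketch of capturing the response headers of every call (including failed ones) with `CURLOPT_HEADERFUNCTION` — note that `curl_getinfo()` only reports transfer metadata, not the API's response headers. `parseHeaderLine()` is a hypothetical helper, and the `x-ratelimit-*` header names are taken from OpenAI's rate-limit documentation, so verify them against a live response:

```php
<?php
// Collect each raw header line into an associative array.
function parseHeaderLine(string $line, array &$headers): int
{
    $parts = explode(':', $line, 2);
    if (count($parts) === 2) {
        $headers[strtolower(trim($parts[0]))] = trim($parts[1]);
    }
    return strlen($line); // cURL requires the raw byte count to be returned
}

// Wiring it into the question's loop, before curl_exec():
// $respHeaders = [];
// curl_setopt($ch, CURLOPT_HEADERFUNCTION, function ($ch, $line) use (&$respHeaders) {
//     return parseHeaderLine($line, $respHeaders);
// });
// After the call: echo $respHeaders['x-ratelimit-remaining-requests'] ?? '(absent)';
```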

I applied this:
$headers = curl_getinfo($ch);

But it only prints headers of successful curl calls, not of the one that failed.

Here is one of the successful ones:

Array ( [url] => (LINK REMOVED FOR POSTING HERE)/v1/chat/completions [content_type] => application/json [http_code] => 200 [header_size] => 749 [request_size] => 199 [filetime] => -1 [ssl_verify_result] => 0 [redirect_count] => 0 [total_time] => 16.108945 [namelookup_time] => 0.000349 [connect_time] => 0.200358 [pretransfer_time] => 0.208132 [size_upload] => 190 [size_download] => 2662 [speed_download] => 165 [speed_upload] => 11 [download_content_length] => 2662 [upload_content_length] => 190 [starttransfer_time] => 0.208133 [redirect_time] => 0 [redirect_url] => [primary_ip] => [certinfo] => Array ( ) [primary_port] => 443 [local_ip] => [local_port] => 65298 [http_version] => 3 [protocol] => 2 [ssl_verifyresult] => 0 [scheme] => HTTPS [appconnect_time_us] => 408434 [connect_time_us] => 200358 [namelookup_time_us] => 349 [pretransfer_time_us] => 208132 [redirect_time_us] => 0 [starttransfer_time_us] => 208133 [total_time_us] => 16108945 )

Yes, I tried usleep(500000); but it's not making a difference. I have also tried a 5-second delay and that didn't work either. By the way, I also tried doing it without a loop, and it still stops after a few calls.

Have you tried not closing the connection immediately and instead maintaining the connection?

I know with the OpenAI Python client library they actually maintain a session instead of opening and closing connections rapidly.

There is a PHP library that can handle the heavy lifting for you. I would try with that and see if it solves your issue.
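A hedged sketch of what that would look like with openai-php/client (`composer require openai-php/client`); the method names are from that library's README, so check them against your installed version. `buildMessages()` is a helper added here purely for illustration:

```php
<?php
// Build the chat payload for one outline point.
function buildMessages(string $keyword, string $point): array
{
    return [
        ['role' => 'system', 'content' => 'You are a helpful assistant.'],
        ['role' => 'user', 'content' => "Elaborate on the sub-point related to '$keyword': $point"],
    ];
}

// Wiring (requires the installed package):
// require 'vendor/autoload.php';
// $client = OpenAI::client($api_key);
// foreach ($outlinePoints as $point) {
//     $result = $client->chat()->create([
//         'model'    => 'gpt-3.5-turbo',
//         'messages' => buildMessages('ADHD', $point),
//     ]);
//     echo $result->choices[0]->message->content;
// }
```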

I tried not closing the connection inside the loop and it still stops after exactly 6 iterations. Just so you know, each response is about 500-600 tokens, i.e. around 350 words or roughly 2,500 characters (in case that matters). Here is a screenshot.

I will now try to use the PHP library you suggested.

P.S. Thanks for looking into this so far.


No problem. Hope the library solves your problem. Please report back!

I am using Symfony Process for stuff like that:

Starting multiple processes at once in parallel (you need to keep track of overall tokens so you know when to stop spawning new processes).

Each process starts a Symfony command with the id of the task encapsulated in a data transfer object, so I can scale up to over 1,000 processes per minute once I get a higher TPM rate (running that on an Apache server). The command then calls a service which uses multiple API connectors for multiple model deployments on Azure (I am using that library openai-php/client there as well, but there is a timeout when I make multiple requests in one service - haven't figured out why either).

But you can also start with a cronjob…

This is likely due to concurrent requests. At most you can have 5 concurrent requests, so the 6th request simply causes an error, breaking out of the loop and stopping execution.

But it's a loop, so it makes one curl request at a time - how would the requests be concurrent?

Also, could you please confirm whether this 5-concurrent-request limit is on the API side, or are you guessing that it's on my server's side?

Thank you for the suggestion, let me try that as well.

Did you try the client library?

What version of PHP are you using? Here’s an interesting tidbit about curl_close (it doesn’t do anything anymore). Ah, the joys of PHP programming.


(PHP 4 >= 4.0.2, PHP 5, PHP 7, PHP 8)

curl_close — Close a cURL session


curl_close(CurlHandle $handle): void


This function has no effect. Prior to PHP 8.0.0, this function was used to close the resource.

Closes a cURL session and frees all resources. The cURL handle, handle, is also deleted.

So it looks like you aren’t actually closing the connection, and what @servnx says may be true (I have no idea about concurrent limits, but I would imagine OpenAI would return an immediate error, so I’m leaning towards something on the client side).
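Since curl_close() is a no-op in PHP 8+, one option is to lean into reuse: create a single handle before the loop and let cURL keep the underlying connection alive. A sketch, with placeholder values standing in for the question's variables:

```php
<?php
// Placeholder stand-ins for the question's variables.
$api_key = getenv('OPENAI_API_KEY') ?: 'test-key';
$outlinePoints = []; // e.g. the outline array from the question

// One handle for the whole loop; only the POST body changes per iteration.
$ch = curl_init('https://api.openai.com/v1/chat/completions');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, [
    "Authorization: Bearer $api_key",
    'Content-Type: application/json',
]);

foreach ($outlinePoints as $point) {
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode([
        'model' => 'gpt-3.5-turbo',
        'messages' => [['role' => 'user', 'content' => "Elaborate on: $point"]],
    ]));
    $raw = curl_exec($ch); // same handle each time, so the TLS connection is reused
    // ... json_decode($raw, true) and handle errors here ...
}

unset($ch); // frees the CurlHandle; curl_close($ch) would do nothing in PHP >= 8.0
```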

If you want to go barebones you can use the library that they used to make the call:


$client = new GuzzleHttp\Client();
$res = $client->request('GET', 'https://api.github.com/user', [
    'auth' => ['user', 'pass']
]);
echo $res->getStatusCode();
// "200"
echo $res->getHeader('content-type')[0];
// 'application/json; charset=utf8'
echo $res->getBody();
// {"type":"User"...'

// Send an asynchronous request.
$request = new \GuzzleHttp\Psr7\Request('GET', 'http://httpbin.org');
$promise = $client->sendAsync($request)->then(function ($response) {
    echo 'I completed! ' . $response->getBody();
});

But… Honestly… I don’t even know. I tried to find how they are closing connections and it’s the same way:

    public function release(EasyHandle $easy): void
    {
        $resource = $easy->handle;
        unset($easy->handle);

        if (\count($this->handles) >= $this->maxHandles) {
            \curl_close($resource);
        } else {
            // Remove all callback functions as they can hold onto references
            // and are not cleaned up by curl_reset. Using curl_setopt_array
            // does not work for some reason, so removing each one
            // individually.
            \curl_setopt($resource, \CURLOPT_HEADERFUNCTION, null);
            \curl_setopt($resource, \CURLOPT_READFUNCTION, null);
            \curl_setopt($resource, \CURLOPT_WRITEFUNCTION, null);
            \curl_setopt($resource, \CURLOPT_PROGRESSFUNCTION, null);
            \curl_reset($resource);
            $this->handles[] = $resource;
        }
    }

Maybe someone who is more fluent in PHP can chime in. Maybe it’s the manual reset they do each time inside the else statement? I did notice that it will typically cycle the connections so it could be that it’s one of those wonderful /* shouldn’t work but it does, so fuck it */ situations

There is a comment in the PHP docs that provides a parameter to prevent re-using a connection.

curl_setopt($curl, CURLOPT_FORBID_REUSE, TRUE);

true to force the connection to explicitly close when it has finished processing, and not be pooled for reuse.


Considering that curl is synchronous, I suppose you are correct.

Yes, I can confirm that the OpenAI API has a 5-concurrent-request limit.

Where in the documentation does it say this? I would be blown away if OpenAI only allows 5 “concurrent requests” and also NOT return any error codes as the OP has indicated.

Now, I’m being nitpicky, but I think it’s fair to say that if someone is opening new connections and not closing them properly, that can be an issue; still, concurrent/parallel requests are commonly done.

Still. I am genuinely interested in this as well because I have a chatbot using a serverless stack and don’t maintain connections when a different user sends a different message.

Just to clarify: I’m not saying this isn’t true (mainly because I think you meant to say concurrent connections). I just think saying “I confirmed it” is as meaningful as Michael Scott “declaring bankruptcy”

I can confirm OpenAI has a nice script here to do six million tokens a minute if that’s your rate limit and 25 requests a second.


This is a question of your script hosting situation, not a question of OpenAI.
Given that you use PHP, I expect you run this from a web page, and the web server or proxy that runs this page, probably has a timeout for how long it allows your script to run.
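A sketch of ruling out PHP's own limits first (the nginx/Apache directive names below are examples of common culprits, not a diagnosis of your setup):

```php
<?php
// Rule out PHP-side limits before blaming the web server or proxy.
set_time_limit(0);       // disable PHP's max_execution_time for this request
ignore_user_abort(true); // keep running even if the browser disconnects
// A server/proxy timeout (e.g. nginx fastcgi_read_timeout, Apache ProxyTimeout)
// can still kill the request silently; running the script from the CLI
// ("php generate_article.php") bypasses those entirely.
```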

The timeout occurs even when setting set_time_limit(0).
It is a server timeout, even on Azure.

I think that curl close really is a good hint.

The 'openai-php/client' library suggested above is what I am trying now, and if that fails too, I will try your suggestion and let you know how it works out. :)

I tried the library you suggested, and things definitely improved: it now does about 20 iterations, but it is still not completing the whole loop.

foreach ($cleanedPoints as $point) {
    $prompt = "Elaborate on the sub-point related to '$keyword': $point";
    $response = $client->completions()->create([
        'model' => 'gpt-3.5-turbo-instruct',
        'prompt' => $prompt,
        'max_tokens' => 4000,
        'temperature' => 0
    ]);
    foreach ($response->choices as $result) {
        echo $result->text;
    }
}
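One thing worth checking in that snippet: gpt-3.5-turbo-instruct has a context window of roughly 4,097 tokens shared between prompt and completion, so `max_tokens => 4000` leaves almost no room for the prompt and the API can reject the request outright. A rough budget check, as a sketch (`safeMaxTokens()` is a hypothetical helper; the 4-characters-per-token ratio is only a rule of thumb):

```php
<?php
// Clamp max_tokens so prompt + completion stay inside the context window.
function safeMaxTokens(string $prompt, int $contextWindow = 4097, int $cap = 1024): int
{
    $promptTokens = (int) ceil(strlen($prompt) / 4); // rough token estimate
    return max(1, min($cap, $contextWindow - $promptTokens));
}

echo safeMaxTokens('Elaborate on the sub-point: Definition of ADHD');
```

Passing the result as `'max_tokens'` in the create() call above would at least rule out context-length errors as the reason the loop dies.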