I need to use the GPT-4 model in PHP.

$curl = curl_init();
curl_setopt_array($curl, array(
    CURLOPT_URL => '././text-davinci-003/completions',
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_ENCODING => '',
    CURLOPT_MAXREDIRS => 10,
    CURLOPT_TIMEOUT => 0,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
    CURLOPT_CUSTOMREQUEST => 'POST',
    // Build the body with json_encode so quotes or newlines in $prompt
    // don't break the JSON (string concatenation does).
    CURLOPT_POSTFIELDS => json_encode(array(
        'prompt' => $prompt,
        'temperature' => 0.4,
        'max_tokens' => 3000,
        'top_p' => 1,
        'frequency_penalty' => 0,
        'presence_penalty' => 0,
    )),
    CURLOPT_HTTPHEADER => array(
        'Content-Type: application/json',
        'Authorization: Bearer ' . $api_key,
    ),
));

$response = curl_exec($curl);

curl_close($curl);

Here is my code. I need support. My problem is listed below:
When I use this URL: […/text-davinci-003/completions], it responds with up to 1000 words, or around 2500 tokens. If we need more than 10000 tokens, which URL do we need to use? We have an API key.
Can you please modify my code? Your early response in this matter is appreciated.

ChatGPT could’ve helped you with that!

But here we go:

Firstly, I personally wouldn't use text-davinci-003, as it is a legacy model and will probably be phased out in the future. It's also more expensive to use. For the same price you can use GPT-4-Turbo, which is much more powerful and more up-to-date.

But on to your issue: you're likely hitting the model's context window limit with the question plus response. In other words, the response is getting too big and is cut short.
Your solution is to use another model with a greater context length. There is gpt-4-1106-preview (aka "GPT-4-Turbo", with a 128k context length, cheaper, newer) and gpt-4-32k (the older GPT-4, with a 32k context length, more expensive). Take a look at the pricing, too, since it differs between the models! My tip would be to use GPT-4-Turbo, unless you somehow don't have access to it.
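Note that the GPT-4 family uses the chat completions endpoint (messages array) rather than the legacy completions endpoint (prompt string), so the request body changes shape too. Here's a sketch of what that could look like in PHP, assuming the standard https://api.openai.com/v1/chat/completions endpoint and that $api_key and $prompt are defined elsewhere in your script:

```php
<?php
// Sketch only: assumes the public OpenAI chat completions endpoint and
// the gpt-4-1106-preview model name mentioned above.

// Build the JSON body with json_encode so quotes/newlines in the prompt
// are escaped correctly (concatenating strings into JSON breaks easily).
function build_chat_payload(string $prompt): string {
    return json_encode(array(
        'model' => 'gpt-4-1106-preview', // GPT-4-Turbo, 128k context
        'messages' => array(
            array('role' => 'user', 'content' => $prompt),
        ),
        'temperature' => 0.4,
        // max_tokens caps the *response* length, not the whole context.
        'max_tokens' => 3000,
    ));
}

function ask_gpt4(string $api_key, string $prompt): ?string {
    $curl = curl_init();
    curl_setopt_array($curl, array(
        CURLOPT_URL => 'https://api.openai.com/v1/chat/completions',
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_CUSTOMREQUEST => 'POST',
        CURLOPT_POSTFIELDS => build_chat_payload($prompt),
        CURLOPT_HTTPHEADER => array(
            'Content-Type: application/json',
            'Authorization: Bearer ' . $api_key,
        ),
    ));
    $response = curl_exec($curl);
    curl_close($curl);

    // The reply text lives in choices[0].message.content,
    // not choices[0].text as with the old completions endpoint.
    $data = json_decode($response, true);
    return $data['choices'][0]['message']['content'] ?? null;
}
```

Usage would then simply be `$answer = ask_gpt4($api_key, $prompt);`. The main things to notice are the new URL, the `model` field in the body, and the `messages` array replacing `prompt`.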

Cheers!