So I began with a tutorial for running a program with Node.js and built the chatbot-cli. It is a short tutorial that basically lets you talk to GPT from your command prompt once you `npm start` the project. This works great: the responses in the terminal are full and complete, much like talking to GPT on the web as a regular user.
However, I took it a step further, as I wanted to use OpenAI in some PHP pages that I maintain, so I followed a small tutorial I found online. It worked, but I noticed my answers were getting cut short. After diagnosing a bit, I found that I hadn't set max_tokens, so I added it to the data I json_encode. That seemed to fix the issue and the answers were fuller. However, if I ask for a really long answer, like listing all the words in the dictionary that start with A, it still gets cut short.
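One thing worth checking when answers come back short: each choice in a completions response carries a `finish_reason` field, where `"length"` means the reply hit the max_tokens limit and `"stop"` means the model ended on its own. Here is a small sketch of that check; the response object is hand-written for illustration, not real API output.

```javascript
// Illustrative only: a hand-written object shaped like a completions
// API response (the real one comes back as JSON from the API).
const sampleResponse = {
  choices: [
    { text: "Aardvark, abacus, abandon", index: 0, finish_reason: "length" }
  ]
};

// "length" => the reply was cut off by max_tokens;
// "stop"   => the model finished naturally.
function wasTruncated(response) {
  return response.choices[0].finish_reason === "length";
}

console.log(wasTruncated(sampleResponse)); // true for this sample
```

If this prints `true` for your real responses even with max_tokens set, the limit is still the thing cutting you off.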
The chatbot-cli I created with Node does not seem to get cut short, though. It is almost as if its answers have no token limitation at all. So what is the difference? Here is the code I am using for my PHP test:
```php
<?php
$apiKey = "removed for example";

$data = array(
    "model"      => "text-davinci-002",
    "max_tokens" => 2000,
    "prompt"     => "A list of all the words in the dictionary that start with A"
);
$data_string = json_encode($data);

$ch = curl_init('https://api.openai.com/v1/completions');
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "POST");
curl_setopt($ch, CURLOPT_POSTFIELDS, $data_string);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Content-Type: application/json',
    'Content-Length: ' . strlen($data_string),
    'Authorization: Bearer ' . $apiKey
));

$result = curl_exec($ch);
$result = json_decode($result, true);
$generated_text = $result['choices'][0]['text'];
echo $generated_text;
?>
```

Why do I need to specify max_tokens here to get longer responses, when I didn't need to in the CLI? Is the CLI just set to some maximum number of tokens by default and I simply didn't realize it?