My prompt was: “make a table in html with a list of 10 dogs”.
The raw response from the API was this:
{
text: '\n' +
'\n' +
'<table> \n' +
' <tr> \n' +
' <th>Dog Breeds</th> \n' +
' </tr>\n' +
'\n' +
' <tr><td >Golden Retriever</td></tr> \n' +
'\n' +
' <tr><td >Labrador Retriever</td></tr>>\n' +
'\n' +
' <tR><Td >German Shepherd Dog </Td></tR>>\n' +
'\n' +
' \t<TR ><TD >Bulldog </TD></TR >> \t\t\t \n' +
'\n' +
' <<Tr ><Td >Beagle </Td></Tr >> \t \n' +
'\n' +
' << Tr >< T d & g t ; P o m e r a n i a n & l t ; / T d></ Tr >> . .
. .. ... .... ...... ........
.......... .............. .................. .................... ...................... ......................... ................................... ..................................... ....................................... ***************************************',
index: 0,
logprobs: null,
finish_reason: 'stop'
}
Why would the API send a response like this? I am using model: text-davinci-003
The "\n"s are to be expected because of the formatting, but the last part of this response does seem to be the issue. I tried a few requests, and setting temperature to 0.7 seemed to yield some pretty good results. In general, generative models don't always give the right answer on the first try; you may need to attempt it a couple of times to get a good result (you can also set n to be > 1 so that there are multiple options to choose from).
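If it helps, here is a rough sketch of that kind of call with the openai Node package (the v3-style Configuration/OpenAIApi client); the prompt and the getDogTable helper are placeholders, not your exact code:

const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function getDogTable() {
  const response = await openai.createCompletion({
    model: "text-davinci-003",
    prompt: "Make a table in HTML with a list of 10 dog breeds.",
    temperature: 0.7,
    max_tokens: 1000,
    n: 3, // request three completions and keep whichever looks best
  });
  // response.data.choices is an array with n entries to pick from
  return response.data.choices.map((choice) => choice.text);
}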
Hi, thanks for the response. I will try 0.7 and see what I get. Also, do you know why the model's response sends back "\t" for no reason?
I think it's just random noise from the training data. Trying a few more prompts should give better results.
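If the stray tabs keep showing up, one workaround is to normalize the completion text before you use it. This is only a sketch (cleanCompletion is a made-up helper, not part of the openai package):

// Collapse stray tabs and trailing whitespace in each line of a completion.
function cleanCompletion(text) {
  return text
    .split("\n")
    .map((line) => line.replace(/\t+/g, " ").trimEnd())
    .join("\n")
    .trim();
}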
Here is what I am getting with the temperature set to 1.0, along with the settings I am sending to the model. I am seeing a pattern: every prompt that asks for a list gets a response like this. The last few items of the list get "\t" inserted and no "\n".
temperature: 1, // Higher values mean the model will take more risks.
max_tokens: 1000, // The maximum number of tokens to generate in the completion. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
top_p: 0.5, // An alternative to sampling with temperature, called nucleus sampling.
frequency_penalty: 1.0, // Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
presence_penalty: 1.0, // Number between -2.0 and 2.0. Positive values penalize tokens that have already appeared in the text so far, increasing the model's likelihood to talk about new topics.
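Put together, the request looks roughly like the sketch below (the prompt and the listTeams helper here are just placeholders, not my exact code):

const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function listTeams() {
  const completion = await openai.createCompletion({
    model: "text-davinci-003",
    prompt: "List the 20 most popular sports teams.", // placeholder prompt
    temperature: 1,
    max_tokens: 1000,
    top_p: 0.5, // the docs generally suggest tuning this or temperature, not both
    frequency_penalty: 1.0,
    presence_penalty: 1.0,
  });
  return completion.data.choices[0].text;
}

This is the response that came back: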
{
text: '\n' +
'\n' +
'1. New York Yankees \n' +
'2. Los Angeles Lakers \n' +
'3. Dallas Cowboys \n' +
'4. Golden State Warriors \n' +
'5. Boston Red Sox \n' +
'6. Chicago Cubs \n' +
'7. New England Patriots \n' +
'8. Pittsburgh Steelers \n' +
'9. Manchester United \n' +
'10. Barcelona FC \n' +
'11. Real Madrid CF \n' +
'12 .Chicago Bulls \t\t\t\t\t\t 13 .Los Angeles Dodgers 14 .Philadelphia Eagles 15 .Houston Rockets 16 .Green Bay Packers 17 .New York Giants 18 .San Francisco 49ers 19 .Washington Redskins 20 Toronto Maple Leafs',
index: 0,
logprobs: null,
finish_reason: 'stop'
}