Every prompt asking for a list of "something" gives a non-formatted response at the end of the answer

I am using the text-davinci-003 model. For every list I ask for, the bottom of the list is not formatted like the first 5 items or so. Example:

{
  text: '\n' +
    '\n' +
    '1. London, UK \n' +
    '2. Paris, France \n' +
    '3. Madrid, Spain \n' +
    '4. Rome, Italy \n' +
    '5. Berlin, Germany \n' +
    '6. Vienna, Austria \n' +
    '7. Amsterdam, Netherlands  \n' +
    '8. Brussels, Belgium  \n' +
    '9. Prague, Czech Republic  \n' +
    '10. Copenhagen, Denmark  \n' +
    '11. Istanbul Turkey   \n' +
    '12. Helsinki Finland   \t\t\t\t\t     13 Barcelona Spain      14 St Petersburg Russia       15 Lisbon Portugal        16 Athens Greece          17 Dublin Ireland         18 Budapest Hungary        19 Stockholm Sweden       20 Warsaw Poland',
  index: 0,
  logprobs: null,
  finish_reason: 'stop'
}

It would be interesting to see your input, but that does look odd.

To overcome this, you might be explicit about how you want your output formatted. I have found the API to be very familiar with markdown, so you could ask for the output to use a markdown ordered list (see the sketch at the end of this reply).

Also, if your temperature is high, that might account for the odd output.
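
As a rough sketch of what an explicit formatting instruction might look like (the prompt wording here is only an illustration, not from the original post):

const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

// Spell out the desired format explicitly in the prompt
// (call this from within an async function)
const response = await openai.createCompletion({
  model: "text-davinci-003",
  prompt:
    "List 20 cities in the US as a markdown ordered list, " +
    "one city per line, in the form \"1. City, State\"\n\n",
  temperature: 0.7,
  max_tokens: 150,
});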

I am just asking the API to give me a list of 20 cities in the US, or 20 states in the US. If I ask for only 5, it's fine.

{
  text: '\n\n1. Ontario \n2. Quebec \n3. British Columbia \n4. Alberta \n5. Manitoba',
  index: 0,
  logprobs: null,
  finish_reason: 'stop'
}

All I can say is that, using the API via the playground, I cannot reproduce the issue you observe, even with a high temperature.

const { Configuration, OpenAIApi } = require("openai");

// Configure the client with the API key from the environment
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

// createCompletion returns a promise, so call it from an async context
const response = await openai.createCompletion({
  model: "text-davinci-003",
  prompt: "please list 20 cities in the United States\n\n",
  temperature: 1,
  max_tokens: 100,
  top_p: 1,
  frequency_penalty: 0.2,
  presence_penalty: 0,
});

// Output:

  1. New York City
  2. Los Angeles
  3. Chicago
  4. Houston
  5. Phoenix
  6. Philadelphia
  7. San Antonio
  8. San Diego
  9. Dallas
  10. San Jose
  11. Austin
  12. Jacksonville
  13. San Francisco
  14. Indianapolis
  15. Columbus
  16. Fort Worth
  17. Charlotte
  18. Detroit
  19. El Paso
  20. Seattle

Hi @rickaz23

Works fine for me (also); however, I have not yet updated my “OpenAI Lab Code” to account for line breaks in the API response / output:

[screenshot: OpenAI Lab output]

Here are the params, I used:

[screenshot: OpenAI Lab parameters]
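
For display in a web page, one way to account for those line breaks is to convert the \n characters to HTML <br> tags before rendering. A minimal sketch (the renderOutput helper is hypothetical, just for illustration):

// Hypothetical helper: convert the '\n' characters in the completion
// text to HTML line breaks before rendering it in a web page
function renderOutput(text) {
  return text.replace(/\n/g, "<br>");
}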

Hope this helps as a datapoint.

Cheers

I am not using the playground. I am using a Node.js server backend, and I am console.logging the response from the model.


That may be the cause of your problem, @rickaz23

You may be seeing the raw response in JSON format in the console:

{
  text: '\n\n1. Ontario \n2. Quebec \n3. British Columbia \n4. Alberta \n5. Manitoba',
  index: 0,
  logprobs: null,
  finish_reason: 'stop'
}
You need to extract the “text” key:

  text: '\n\n1. Ontario \n2. Quebec \n3. British Columbia \n4. Alberta \n5. Manitoba',

FWIW, I extract the “text” key in a completion response like this:

Ruby

output = response['choices'].map { |c| c['text'] }

Chatty converts this Ruby code to JS as follows:

// Assume that the variable 'response' contains the JSON response from the OpenAI API

// Get the array of choices from the response
let choices = response['choices'];

// Use the map method to get an array of the text from each choice
let output = choices.map(function (c) { return c['text']; });

console.log(output);
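
For what it's worth, once the string itself is logged (rather than the whole JSON object), the \n escapes render as real line breaks. A minimal sketch building on the code above, assuming output holds the array of extracted texts:

// Logging the extracted string (not the raw object) renders the
// '\n' escapes as actual line breaks in the console
console.log(output[0]);

// Output:
//
// 1. Ontario
// 2. Quebec
// ...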