text-davinci-003: difference between Playground and API

Hi,

I am testing this prompt in the Playground:
I am a highly intelligent question answering bot. If you ask me a question that is rooted in truth, I will give you the answer. If you ask me a question that is nonsense, trickery, or has no clear answer, I will respond with “Unknown”.

Q: What is human life expectancy in the United States?
A: Human life expectancy in the United States is 78 years.

Q: Who was president of the United States in 1955?
A: Dwight D. Eisenhower was president of the United States in 1955.

Q: Which party did he belong to?
A: He belonged to the Republican Party.

Q: What is the square root of banana?
A: Unknown

Q: How does a telescope work?
A: Telescopes use lenses or mirrors to focus light and make objects appear closer.

Q: Where were the 1992 Olympics held?
A: The 1992 Olympics were held in Barcelona, Spain.

Q: How many squigs are in a bonk?
A: Unknown

Q:who is emmanuel macron

And I get this answer:
A: Emmanuel Macron is the President of France.

But when I add this code to my Node.js server:

const completion = await openai.createCompletion({
    model: "text-davinci-003",
    prompt: "I am a highly intelligent question answering bot. If you ask me a question that is rooted in truth, I will give you the answer. If you ask me a question that is nonsense, trickery, or has no clear answer, I will respond with \"Unknown\".\n\nQ: What is human life expectancy in the United States?\nA: Human life expectancy in the United States is 78 years.\n\nQ: Who was president of the United States in 1955?\nA: Dwight D. Eisenhower was president of the United States in 1955.\n\nQ: Which party did he belong to?\nA: He belonged to the Republican Party.\n\nQ: What is the square root of banana?\nA: Unknown\n\nQ: How does a telescope work?\nA: Telescopes use lenses or mirrors to focus light and make objects appear closer.\n\nQ: Where were the 1992 Olympics held?\nA: The 1992 Olympics were held in Barcelona, Spain.\n\nQ: How many squigs are in a bonk?\nA: Unknown\n\nQ:who is emmanuel macron",
    temperature: 0,
    max_tokens: 100,
    top_p: 1,
    frequency_penalty: 0,
    presence_penalty: 0,
    stop: ["\n"],
  });
  const response = completion.data.choices[0].text;
  res.send(response);
});

The request succeeds, but the completion it returns is empty.

Do you know why?

Welcome to the community!

My guess would be…

  1. You have a \n after every question, which is fine (i.e. Q: How many squigs are in a bonk?\n)
  2. You do not add the \n at the end of your prompt (i.e. Q:who is emmanuel macron", )
  3. From the examples, GPT-3 learns that every question should be followed by a \n
  4. So GPT-3 starts its completion with that \n (which is also your stop sequence!)
  5. It stops…

The solution would be to add the \n at the end of your prompt… hopefully!

It's usually better to use something like ### or similar as your stop sequence: something that isn't likely to come up naturally in the text.
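To make the failure mode concrete, here is a minimal sketch of the fix (buildPrompt is an illustrative helper, not part of the OpenAI SDK, and the few-shot text is abbreviated):

```javascript
// The bug: the prompt ended right after "Q: ...", so the first token the
// model emitted was the "\n" it learned from the examples, which was also
// the stop sequence, so the completion halted immediately and came back empty.

function buildPrompt(question) {
  const examples =
    "Q: How many squigs are in a bonk?\n" +
    "A: Unknown\n" +
    "\n";
  // The trailing "\n" is the fix: the model now starts generating the
  // answer line ("A: ...") instead of emitting the stop sequence first.
  return examples + "Q: " + question + "\n";
}

const prompt = buildPrompt("who is emmanuel macron");
console.log(prompt);
// This prompt would then go into the request body, e.g.
//   { model: "text-davinci-003", prompt, stop: ["\n"] }
// or, as suggested above, with a stop sequence like "###" that never
// occurs naturally in the text.
```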

Let us know if it works.

Good luck!


Thanks, it works well now!


@vivien.richaud I’m pleased you got it working

Out of interest, was max tokens the thing you fixed?

No, I just added the \n at the end of my prompt, as advised.

Congratulations on the initiative, @PaulBellow!

I would like to take advantage of the topic and also leave two tips (prompt txt API)

