Remember previous response and use it in current text completion

Can I use the API with text-davinci-003 in the following way?

  1. Prompt asking for an outline
  2. Prompt asking to use the previous outline to generate a text.

IMPORTANT: I do not want to repeat the outline text in the second prompt, as I sometimes get bad request errors and it wastes tokens.

I tried several times with no luck.

The function I am using is this:

async function getBestText(input){
  const response = await openai.createCompletion({
    model: "text-davinci-003",
    prompt: input,
    temperature: 0.3,
    max_tokens: 3500,
    top_p: 1.0,
    frequency_penalty: 0.0,
    presence_penalty: 0.0,
    user: "user01"
  });

  console.log(response.data.choices[0].text);

  return response.data.choices[0].text;
}

Unfortunately not. Each request is unique and has no knowledge of what came before.

The only way is to include the previous answer as part of the next prompt if you need to refer to it in some way.

In the case of an outline, you could ask it to explain one point at a time instead of asking it to do everything at once.

It will take multiple requests, but the individual requests should not go over the token limit as easily.
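The point-by-point approach above could be sketched like this. This is only an illustration (the function name `buildPointPrompts` is made up for this example); the outline still has to be repeated in each prompt, since the API keeps no state between requests:

```javascript
// Build one prompt per outline point so each individual request stays small.
// The outline text itself must be included every time, because the API
// has no memory of previous requests.
function buildPointPrompts(outline, points) {
  return points.map((point, i) =>
    `Outline:\n${outline}\n\nExpand point ${i + 1} ("${point}") into a full paragraph.`
  );
}
```

Each of these prompts would then be sent through a completion call in turn, e.g. `for (const p of buildPointPrompts(outline, points)) { await getBestText(p); }`.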


Thank you. This is a big limitation then, as entering the outline text in the new prompt means being charged for it as well, and it also means risking an error 400 for reasons I have yet to understand.

The error 400 is probably because the prompt you are sending is larger than 4096 tokens (about 3,000 words).

When you send a request, you have to add up the tokens you use in the prompt part, AND add the tokens in the completion. You are billed for both parts and the completion will be limited if you go over 4096 tokens.

For this reason, it's a good idea to keep the prompt to 2,000 tokens or less (depending on your use case). This leaves GPT about 2,000 tokens for the completion.

If you use Ada, Babbage, or Curie, the limit is 2048 instead of 4096.
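A rough budget check along these lines can help avoid the error before sending a request. The 4-characters-per-token rule below is only a heuristic, not part of the API; a real tokenizer gives exact counts:

```javascript
// Rough token estimate: roughly 4 characters per token for English text.
// This is a heuristic only; use an actual tokenizer for exact counts.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Prompt tokens and completion tokens are billed and limited together,
// so both must fit under the model's context limit.
function fitsInBudget(prompt, maxTokens, limit = 4096) {
  return estimateTokens(prompt) + maxTokens <= limit;
}
```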


It turns out the "bad request" error was caused by how the newlines were formatted when taking the previous response and including it in the new prompt. I solved it with this line:

res.data.choices[0].text = res.data.choices[0].text.replace(/(\r\n|\n|\r)/gm, "");

before including res.data.choices[0].text (the outline) into the new prompt.
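For reference, that replacement collapses every newline variant (CRLF, LF, and CR). A minimal standalone sketch:

```javascript
// Strip CRLF, LF, and CR so the outline becomes a single line
// before it is embedded in the next prompt.
function stripNewlines(text) {
  return text.replace(/(\r\n|\n|\r)/gm, "");
}
```

Note that replacing with a space instead of an empty string avoids gluing the last word of one line to the first word of the next.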

Thank you


import openai

response = openai.Completion.create(
  model="text-davinci-003",
  prompt="This is a test",
  max_tokens=5,
  user="user123456"
)