GPT-3 chatbot and session prompt

Hello forum participants,
I apologize in advance if some of my questions seem basic, as I am new to using the GPT API.

I am trying to create a chatbot with GPT-3, but I have noticed that it only seems to follow the defined logic on the first request.

Example code in Node.js:

const { Configuration, OpenAIApi } = require("openai");
const configuration = new Configuration({
  apiKey: '************************************************************',
})

const openai = new OpenAIApi(configuration)
const sessionPrompt = "Classification: \"¿Climb or dive?\"\n\n" +
"human: I like mountains\n" +
"ai: climb\n\n" +
"human: I prefer blue sea\n" +
"ai: dive\n\n" +
"human: Reaching the peak is amazing\n" +
"ai: climb\n\n" +
"human: The deeper, the better\n" +
"ai: dive\n\n" +
"human: I like the ocean\n"

const go = async () => {
  // First request: includes the full few-shot session prompt.
  let completion = await openai.createCompletion({
    model: "text-davinci-003",
    prompt: sessionPrompt,
    temperature: 0,
    max_tokens: 150,
    top_p: 1,
    frequency_penalty: 0,
    presence_penalty: 0.6,
    stop: [" human:", " ai:"]
  })
  console.log(completion.data.choices[0].text)

  // Second request: the session prompt is not included, so the
  // classification logic is lost and the model answers free-form.
  completion = await openai.createCompletion({
    model: "text-davinci-003",
    prompt: "human: I like the ocean",
    max_tokens: 250,
  })
  console.log(completion.data.choices[0].text)
}

go()

Output:

ai: dive

Robot: Me too! I find it fascinating because it's full of so many mysteries and diverse creatures. What do you like most about the ocean?

Is it necessary to always include the chatbot logic (the session prompt) in every request after the initial one?

Also, do the input tokens incur a charge?

Thank you for your help.

If you want the bot to stay on course, yes, I would include as much of the prompt as you can. There are a ton of chatbots on GitHub now that you can peek at to get ideas of how others are doing it.
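In practice that means rebuilding the full prompt on every turn: the fixed session prompt, then the conversation so far, then the new user message. Here is a minimal sketch of that idea; buildPrompt and the history array are illustrative names, not part of the openai library, and the session prompt is abbreviated:

```javascript
// Fixed few-shot prefix that defines the bot's logic (abbreviated here).
const sessionPrompt =
  "Classification: \"¿Climb or dive?\"\n\n" +
  "human: I like mountains\n" +
  "ai: climb\n\n";

// Rebuild the whole prompt each turn: prefix + past turns + new message.
function buildPrompt(history, userMessage) {
  const turns = history
    .map(({ human, ai }) => `human: ${human}\nai: ${ai}`)
    .join("\n\n");
  return sessionPrompt + (turns ? turns + "\n\n" : "") + `human: ${userMessage}\n`;
}

const history = [{ human: "I prefer blue sea", ai: "dive" }];
const prompt = buildPrompt(history, "I like the ocean");
// `prompt` is what you would pass to openai.createCompletion on every turn,
// so the model always sees the classification logic.
```

After each reply you push the new { human, ai } pair onto history, trimming old turns if the prompt approaches the model's context limit.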

Yes, you’re charged for PROMPT + OUTPUT…
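The response itself tells you exactly what you were billed for: completion.data.usage carries prompt_tokens, completion_tokens, and total_tokens. A rough cost sketch, using an assumed example rate of $0.02 per 1K tokens (check the pricing page for your actual model):

```javascript
// Estimate the cost of one request from the usage block the API returns.
// The $0.02-per-1K rate is an assumed example price, not an official figure.
function estimateCostUSD(usage, pricePer1kTokens = 0.02) {
  const total = usage.prompt_tokens + usage.completion_tokens;
  return (total / 1000) * pricePer1kTokens;
}

// e.g. a 2,000-token session prompt plus a 150-token reply:
const cost = estimateCostUSD({ prompt_tokens: 2000, completion_tokens: 150 });
// ≈ 2150 / 1000 * 0.02 ≈ $0.043 for the single turn
```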

Hope this helps!

Thanks @PaulBellow.

But wow, what you say is very important!
If I create a bot with 2,000 tokens of logic, each user sentence costs its own tokens plus the 2,000 logic tokens!
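The overhead is easy to work out: because the fixed logic prefix is re-sent on every turn, it dominates the bill. A quick sketch with illustrative numbers:

```javascript
// Per-turn token bill when the full session prompt is re-sent each time.
// All numbers are illustrative, not measured.
const logicTokens = 2000;   // fixed session prompt
const messageTokens = 20;   // one user sentence
const replyTokens = 5;      // short classification answer

const tokensPerTurn = logicTokens + messageTokens + replyTokens; // 2025
const tokensFor1000Turns = tokensPerTurn * 1000;                 // 2,025,000
// The 2,000-token prefix accounts for ~99% of every turn's cost.
```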

Why hasn’t OpenAI created two communication channels, one to define the logic and another for the users? Easier, safer and cheaper… oh… cheaper… ok…

The language model is fantastic, but the API and business design around it are seriously lacking.
Either change this, or when Google, Meta, AWS, Microsoft, etc. release theirs, they will wipe it out commercially.
