Currently this is how I'm sending the prompt, but it looks like tokens are counted for the preset messages every time I send a request.
const response = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Who won the world series in 2020?" },
    {
      role: "assistant",
      content: "The Los Angeles Dodgers won the World Series in 2020.",
    },
    { role: "user", content: "Where was it played?" },
    { role: "user", content: "can you repeat the first question?" },
    { role: "user", content: req.body.message }, // was `req. body.message` (stray space)
  ],
});
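For context, the chat completions endpoint is stateless: the full `messages` array is re-tokenized and billed on every call, preset messages included. The usual workaround is to trim older turns before sending. A minimal sketch, assuming you always want to keep the system message; the helper name and `maxTurns` parameter are my own, not part of the OpenAI SDK:

```javascript
// Keep the system message(s) plus only the most recent N conversation turns,
// so older preset user/assistant messages stop adding to the prompt token count.
function trimHistory(messages, maxTurns = 4) {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  return [...system, ...rest.slice(-maxTurns)];
}
```

You would then pass `trimHistory([...presetMessages, { role: "user", content: req.body.message }])` as `messages`. Note this only reduces cost; it cannot make the preset messages free, since the API has no server-side conversation memory in this version.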