The OpenAI chat UI can follow the same conversation across multiple prompts while keeping the same context.
How can I do the same through the API, so that each new prompt can refer to what happened before and all of it is remembered as context, just like in the OpenAI chat?
This board has a few good threads on how people have tried to get some form of memory in place, but I don’t believe anyone has exactly reproduced the website’s functionality yet. Part of the challenge is that as conversations go on, so does the number of tokens sent with each request, eventually hitting the model’s limit.
One way I am partially getting around this in the code I’ve been dabbling in is by taking the last responses and making them part of the next prompt. Generally I can get about four levels deep before the AI seems to lose context.
Here is a JavaScript example:
var s = oJson.choices[0].text;
// Fall back to a canned reply when the API returns an empty string
if (s === "") {
  txtOutput.value += "AI: I'm sorry, can you please ask me in another way?";
} else {
  txtOutput.value += "AI: " + s.trim();
}
// Append this exchange to the running transcript and persist it,
// so it can be prepended to the next prompt as context
masterOutput += "\n" + txtOutput.value + "\n";
localStorage.setItem("masterOutput", masterOutput);
lastResponse = s;
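To keep the prompt from growing past the token limit, you can trim the transcript before prepending it. Here is a minimal sketch of that idea, assuming the past exchanges are stored as an array of strings; the function name `buildPrompt` and the character budget are my own, and character count is only a rough proxy for tokens (about 4 characters per token for English text):

```javascript
// Keep only the most recent turns so the combined prompt stays
// under a rough size budget, then append the new question.
function buildPrompt(turns, newQuestion, maxChars) {
  var kept = [];
  var total = newQuestion.length;
  // Walk backwards from the most recent turn, keeping as many as fit
  for (var i = turns.length - 1; i >= 0; i--) {
    if (total + turns[i].length > maxChars) break;
    total += turns[i].length;
    kept.unshift(turns[i]);
  }
  return kept.join("\n") + "\n" + newQuestion;
}
```

Older turns silently fall off the front once the budget is exceeded, which matches the behavior described above: after enough exchanges, the earliest context is gone.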
In the “messages” field when calling the API, you can pass the whole conversation instead of just the last message; that way the model will read the context. Let me know if you need a code snippet as an example.
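A minimal sketch of that approach against the chat completions endpoint, assuming Node 18+ (global `fetch`) and an API key in `OPENAI_API_KEY`; the model name here is a placeholder, and `buildRequest`/`recordReply`/`ask` are my own helper names:

```javascript
// Keep the whole conversation in an array and send it on every request,
// so the model always sees the full context.
const messages = [
  { role: "system", content: "You are a helpful assistant." }
];

// Push the user's message and build the request body with the
// complete history so far.
function buildRequest(userText) {
  messages.push({ role: "user", content: userText });
  return {
    model: "gpt-3.5-turbo", // placeholder; use whichever chat model you have access to
    messages: messages
  };
}

// Store the assistant's reply so the next request includes it too.
function recordReply(assistantText) {
  messages.push({ role: "assistant", content: assistantText });
}

async function ask(userText) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer " + process.env.OPENAI_API_KEY
    },
    body: JSON.stringify(buildRequest(userText))
  });
  const json = await res.json();
  const reply = json.choices[0].message.content;
  recordReply(reply);
  return reply;
}
```

Note this still has the token-limit problem mentioned above: the `messages` array grows with every turn, so for long conversations you eventually have to drop or summarize the oldest entries.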