How do you maintain historical context in repeat API calls?

How I would do it is to keep a running transcript, appending each prompt and output to the previous ones:

Session1:
Prompt1: text
GPT generates output

Session2:
Prompt1: text
Output1: text
Prompt2: text
GPT generates output

and so on… However, you will need to keep the token count in mind: you can't keep appending prompts and outputs indefinitely. So you could add some code that forgets the oldest lines once you're almost out of tokens.
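A minimal sketch of that idea in Python (all names here are hypothetical, and token counts are approximated as whitespace-separated words; a real implementation would use the model's actual tokenizer, e.g. tiktoken):

```python
MAX_TOKENS = 50  # small budget, just for demonstration

def estimate_tokens(text):
    # Crude approximation: one token per word (an assumption, not the real tokenizer)
    return len(text.split())

def trim_history(history, max_tokens=MAX_TOKENS):
    """Drop the oldest lines until the history fits the token budget."""
    total = sum(estimate_tokens(line) for line in history)
    while history and total > max_tokens:
        dropped = history.pop(0)  # forget the oldest line first
        total -= estimate_tokens(dropped)
    return history

def build_prompt(history, new_prompt):
    """Concatenate the retained history with the new prompt."""
    trim_history(history)
    return "\n".join(history + ["Prompt: " + new_prompt])

history = []
for turn in range(5):
    prompt = f"question number {turn} with some extra words"
    full_prompt = build_prompt(history, prompt)
    output = f"answer number {turn}"  # placeholder for the actual API call
    history.append("Prompt: " + prompt)
    history.append("Output: " + output)
```

Each call sends the trimmed transcript plus the new prompt, so the model sees as much recent context as the budget allows.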

That’s how I would do it, but maybe there are better methods.