Maintaining context within the 4096-token limit

Hi everyone,
I am playing around with the new functions and the chat markup language introduced for the gpt-3.5-turbo model.
I have seen many discussions here about how to manage the message history in order to maintain some sort of context.

Anyway, as you all know, one limitation is the maximum context length. I have tried managing the previous answers with a local .pickle file (probably not the best solution). Can you help me figure out a better way to manage them, for example a local DB or something similar, along with a way to summarize the previous answers (or a truncation function)?

I would be grateful if you could link me some code examples or a git repository.

Hi @mrsalzan

See, for example:
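
A common pattern is a rolling window: keep the system prompt, count tokens, and drop the oldest messages once the history exceeds a budget that leaves room for the model's reply. Here is a minimal sketch, assuming the `tiktoken` library for counting; the 4 tokens of per-message overhead is a rough approximation and the budget value is just illustrative:

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def count_tokens(messages):
    # Rough estimate: encoded content plus ~4 tokens of metadata per message.
    return sum(len(enc.encode(m["content"])) + 4 for m in messages)

def truncate_history(messages, budget=3000):
    """Drop the oldest non-system messages until the history fits the budget,
    leaving headroom below 4096 tokens for the model's reply."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and count_tokens(system + rest) > budget:
        rest.pop(0)  # discard the oldest exchange first
    return system + rest
```

You would run `truncate_history` over the full message list just before each API call, so the request never exceeds the context window.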

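For persistence, a local SQLite database is a step up from a pickle file, and you can compress older turns by asking the model itself for a summary instead of dropping them. A sketch along these lines, assuming the pre-1.0 `openai` Python library (`openai.ChatCompletion`); the table schema, `keep_last` value, and summarization prompt are just illustrative:

```python
import sqlite3
import openai

conn = sqlite3.connect("chat_history.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS messages ("
    "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
    "  role TEXT NOT NULL,"
    "  content TEXT NOT NULL)"
)

def save_message(role, content):
    # One row per message keeps the full history queryable later.
    conn.execute("INSERT INTO messages (role, content) VALUES (?, ?)",
                 (role, content))
    conn.commit()

def load_messages():
    rows = conn.execute(
        "SELECT role, content FROM messages ORDER BY id").fetchall()
    return [{"role": r, "content": c} for r, c in rows]

def summarize_old_turns(messages, keep_last=6):
    """Replace everything but the last few turns with a model-written summary."""
    old, recent = messages[:-keep_last], messages[-keep_last:]
    if not old:
        return messages
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in old)
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "Summarize this conversation briefly, keeping "
                              "any facts needed to continue it:\n" + transcript}],
    )
    summary = resp["choices"][0]["message"]["content"]
    # The summary stands in for the old turns as a single system message,
    # so the most recent context stays verbatim.
    return [{"role": "system",
             "content": "Summary of earlier conversation: " + summary}] + recent
```
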
🙂