A question about context, for everyone here:

Because I have to include all previous chat records in the context, each request consumes a significant number of tokens. Is there any way to maintain context while reducing token usage, ideally with techniques that shorten the context itself?
Or feel free to share your general opinions on this.

I’ve just described something that answers your question here:

Please have a look.

You can use those techniques to lower your token usage as well.

You need to pass whatever context you think is relevant to the model to be processed. There is no “old memory” within the GPT models, so you either send everything or filter your context in some intelligent way. That filtering usually requires some other form of AI, so in the end you still need to process your data, and that takes tokens.
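
To make the “send only what fits” idea concrete, here is a minimal sketch of sliding-window truncation: count tokens with the tiktoken library and keep the system prompt plus only the newest messages that fit a fixed budget. The `trim_history` helper name, the budget value, and the encoding choice are illustrative assumptions, not a prescribed implementation.

```python
# A minimal sketch of sliding-window context truncation, assuming the
# tiktoken library and the Chat Completions message format.
import tiktoken

def trim_history(messages, budget=3000):
    """Keep system messages plus the newest messages that fit the budget.

    `budget` is an illustrative per-request token allowance; tune it to
    your model's context window and expected completion length.
    """
    enc = tiktoken.get_encoding("cl100k_base")  # GPT-3.5/GPT-4 encoding
    count = lambda m: len(enc.encode(m["content"]))

    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    used = sum(count(m) for m in system)
    kept = []
    # Walk backwards from the newest message; stop once over budget.
    for m in reversed(rest):
        used += count(m)
        if used > budget:
            break
        kept.append(m)
    return system + kept[::-1]  # restore chronological order
```

A common variant replaces the hard cutoff with a rolling summary: once old turns fall outside the window, have the model compress them into one short summary message, so the gist survives at a fraction of the tokens.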
