Cost increases if I send previous messages to OpenAI

If I pass all my previous chats, then the cost of every new message keeps increasing. How can I solve that?

For example:

{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who won the world series in 2020?"},
{"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
{"role": "user", "content": "Where was it played?"}

Having a chat history is an essential part of a chatbot. It must know what you were talking about recently if you ask a question that continues the conversation, such as “what about the other one?”

You must, of course, pay for all the tokens sent with each request, including the repeated history.

This requires management, because eventually the conversation will grow too long to fit within the AI model's context length (measured in tokens).

Here is an example code snippet where we store each conversation turn as a list entry and then send only the most recent messages:

import openai  # openai-python pre-1.0 interface, matching ChatCompletion.create

system = [{"role": "system",
           "content": "You are a chatbot."}]
user = [{"role": "user", "content": "Hello"}]
chat = []
while True:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=system + chat[-10:] + user,
    )

system and user each hold a single message, while chat holds a growing list of such messages (the code that records each exchange and obtains the next user prompt is not pictured).
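The recording step that isn't pictured above can be sketched roughly like this (the function name and structure are illustrative, not from the original):

```python
def record_exchange(chat, user_message, assistant_reply):
    """Append the latest user message and the assistant's reply to the history."""
    chat.append(user_message)
    chat.append({"role": "assistant", "content": assistant_reply})

chat = []
record_exchange(chat,
                {"role": "user", "content": "Who won the world series in 2020?"},
                "The Los Angeles Dodgers won the World Series in 2020.")
# chat now holds one user/assistant exchange, ready to prepend to the next request
```

Each loop iteration would call something like this after receiving the API response, so the history grows by one exchange per turn.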

The list slice chat[-10:] sends only the last ten messages, i.e., the five most recent user/assistant exchanges.
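A handy property of this slice is that it never raises an error early in the conversation; a quick standalone demonstration:

```python
# With fewer than ten messages, the slice simply returns the whole list
chat = [{"role": "user", "content": f"msg {i}"} for i in range(3)]
print(len(chat[-10:]))   # 3

# Once the history is longer, only the ten most recent messages survive
chat = [{"role": "user", "content": f"msg {i}"} for i in range(25)]
trimmed = chat[-10:]
print(len(trimmed))               # 10
print(trimmed[0]["content"])      # "msg 15"
```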

One can move to more advanced techniques, such as counting and recording the tokens in each message in order to limit the data sent to a particular token budget, instead of a fixed number of turns. This can be performed by a token-counting library such as tiktoken, or the count can simply be estimated if you are staying well below the limit.
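A minimal sketch of budget-based trimming, using a crude characters-per-token estimate in place of tiktoken (the function names and the +4 per-message overhead are assumptions for illustration; swap in tiktoken for exact counts against a specific model):

```python
def rough_token_count(message):
    # Crude estimate: ~4 characters per token for English text,
    # plus a few tokens of per-message formatting overhead.
    return len(message["content"]) // 4 + 4

def trim_to_budget(chat, max_tokens):
    """Keep the most recent messages whose estimated tokens fit the budget."""
    kept, total = [], 0
    for message in reversed(chat):        # walk newest-to-oldest
        cost = rough_token_count(message)
        if total + cost > max_tokens:
            break                          # budget exhausted; drop older messages
        kept.append(message)
        total += cost
    return list(reversed(kept))            # restore chronological order

# Usage: replace chat[-10:] in the API call with trim_to_budget(chat, 3000)
chat = [{"role": "user", "content": "x" * 40} for _ in range(10)]
print(len(trim_to_budget(chat, 50)))   # 3 messages of ~14 estimated tokens each
```

Because it walks the history newest-to-oldest, the oldest messages are the ones discarded, which matches how the turn-based slice behaves.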