Best way to add info - Assistant / Threads

We are using OpenAI in the StarNet Assist messaging system, and each user has their own thread, with daily messages.
I am wondering what the best way is (we have tried different approaches so far) to feed old messages into the thread, so the AI can reply to each user more appropriately.
I have a dedicated prompt for each user in each thread, but I noticed that if I don't supply the old messages, the replies are not personalized.

So the question is: what is the best way, in your experience, to give a user's old messages to the thread, so I get more appropriate replies?

  1. I store old messages as JSON
  2. I could give the AI direct access to the DB (but I don't think it can connect to and read it)
  3. I don't want the prompt to grow bigger and bigger (and more expensive). Imagine 20, 50, or 100 messages in a day

Hope this is clear.
The messaging system is free, aimed at elderly people, and based on WhatsApp.

Hi,

There is no way around passing old messages to the API each time: AI models are “stateless”, so they have no memory of past events.

You can try passing only the last 5 or 10 messages as a way to manage the token count, or you could ask the model to summarise the chat so far and store that; the summary would, of course, require an extra API call.
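
For illustration, a minimal sketch of the sliding-window approach, assuming the Chat Completions API and a per-user `history` list; the model name and window size are placeholders, not recommendations:

```python
# Minimal sliding-window sketch: keep only the last N messages per user.
# Assumes OPENAI_API_KEY is set; model and WINDOW are placeholders.
from openai import OpenAI

client = OpenAI()

WINDOW = 10  # keep only the last N messages to cap token usage


def reply(system_prompt: str, history: list[dict], user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # System prompt always goes first; only the most recent messages follow.
    messages = [{"role": "system", "content": system_prompt}] + history[-WINDOW:]
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```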

There is no free lunch with current-generation models; there is no way to get behind the curtain and secretly pass the model extra information without incurring a cost, unfortunately.


I have also tried passing the last xxx messages, even though they have a token cost, but I think you had a very good idea: I can summarise a group of messages (for example, a day's messages, via a nightly cron job against the DB) and keep the summary ready for the next message. This is a good way not only to condense the history, but also to keep only the relevant information about the user in the summary.

I mean, I can trash all messages about the weather, but keep and summarise everything related to the user's sentiment, feelings, and so on.

I will try it, so I can see how big the generated summary is, on average, in tokens.
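
For the measurement itself, something like tiktoken should work (assuming the o200k_base encoding, which is what the gpt-4o family uses):

```python
# Count the tokens in a stored summary with tiktoken.
# o200k_base is the encoding used by the gpt-4o model family
# (an assumption about which model generates the summaries).
import tiktoken


def summary_tokens(summary_text: str) -> int:
    enc = tiktoken.get_encoding("o200k_base")
    return len(enc.encode(summary_text))


print(summary_tokens("Maria felt lonely today but cheered up after lunch."))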

We use FileMaker, MongoDB, and a little MySQL. I'll see how to generate the best lightweight summary with a nightly OpenAI API call.
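
Roughly what I have in mind for the nightly job; a sketch only, assuming the messages live in MongoDB, and with the collection names, fields, database name, model, and prompt all placeholders:

```python
# Nightly cron sketch: summarise one user's day and store the result,
# so the next conversation can start from the summary instead of the
# full history. All names here are hypothetical.
import datetime

from openai import OpenAI
from pymongo import MongoClient

client = OpenAI()
db = MongoClient()["starnet"]  # hypothetical database name

SUMMARY_PROMPT = (
    "Summarise the following conversation in a few sentences. "
    "Keep only what matters for future replies: the user's sentiment, "
    "feelings, health, and personal details. Drop small talk (weather, etc.)."
)


def summarise_user_day(user_id: str, day: datetime.date) -> str:
    start = datetime.datetime.combine(day, datetime.time.min)
    end = start + datetime.timedelta(days=1)
    msgs = db.messages.find(
        {"user_id": user_id, "ts": {"$gte": start, "$lt": end}}
    ).sort("ts", 1)
    transcript = "\n".join(f"{m['role']}: {m['text']}" for m in msgs)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SUMMARY_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    summary = resp.choices[0].message.content
    # Store the summary so the next day's first message can prepend it.
    db.summaries.update_one(
        {"user_id": user_id, "day": day.isoformat()},
        {"$set": {"summary": summary}},
        upsert=True,
    )
    return summary
```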
Thanks for the suggestion.
