In my web app I use chat completions in several different tools. When the user interacts with these tools, I want to take the response and send it to a main assistant, which powers a chatbot, so the assistant understands the user's history and the responses from the other tools.
For example, if they are interacting with a map tool and searching for addresses, I want my main agent to know that they're using the map tool and which addresses they searched for.
Each user has an assistant, and I have one main thread for the agent, so I let OpenAI manage that memory.
Can I just add system messages in the background so that, when the user next interacts with the agent, they end up in the run?
Any other ideas?
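For context, here's roughly what I mean by adding messages in the background (a minimal sketch with the OpenAI Python SDK; the helper name, thread ID, and message text are placeholders from my app, not anything official):

```python
from openai import OpenAI

client = OpenAI()

def log_tool_activity(thread_id: str, summary: str) -> None:
    """Append a background context note to the user's main thread so the next run sees it."""
    client.beta.threads.messages.create(
        thread_id=thread_id,
        role="user",  # app-generated context note, added without the user typing anything
        content=f"[Tool context] {summary}",
    )

# e.g. called after a map-tool interaction:
# log_tool_activity(thread_id, "User searched the map tool for: 123 Main St, Springfield")
```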
Memory is complicated, mate.
I think you have a good idea with using the Assistant as an administrator and Completions as workers.
From there, my suggestion is to have your administrator Assistant make function calls out to the Completion workers. This brings all the results back to a single thread, and you can assign that thread to a user. The thread will then show all the results in its run steps.
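A rough sketch of that wiring with the OpenAI Python SDK (assuming a recent SDK with the run-polling helpers; the tool name, models, and worker prompt here are just placeholders, not anything prescribed):

```python
import json
from openai import OpenAI

client = OpenAI()

# The "worker": a plain chat completion that handles one tool's job.
def map_search_worker(query: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any completions-capable model works here
        messages=[
            {"role": "system", "content": "You resolve address searches for a map tool."},
            {"role": "user", "content": query},
        ],
    )
    return resp.choices[0].message.content

# The "administrator": an Assistant that can call the worker via a function tool.
assistant = client.beta.assistants.create(
    model="gpt-4o",
    instructions="You are the main assistant. Call map_search when the user needs address lookups.",
    tools=[{
        "type": "function",
        "function": {
            "name": "map_search",
            "description": "Search for addresses via the map tool.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }],
)

thread = client.beta.threads.create()  # one thread per user
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Find 123 Main St, Springfield"
)

run = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id=assistant.id)

# When the assistant decides to call the worker, feed the result back into the same thread.
if run.status == "requires_action":
    outputs = []
    for call in run.required_action.submit_tool_outputs.tool_calls:
        if call.function.name == "map_search":
            args = json.loads(call.function.arguments)
            outputs.append({"tool_call_id": call.id, "output": map_search_worker(args["query"])})
    run = client.beta.threads.runs.submit_tool_outputs_and_poll(
        thread_id=thread.id, run_id=run.id, tool_outputs=outputs
    )
```

Everything the workers return comes back through the tool outputs, so it all lands in the one thread you've tied to that user.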
Be warned: that thread's costs will balloon the more "memory" (user context) it accumulates, since the growing context gets fed into every subsequent run.
Otherwise: on the one hand, there's no need to update the system message when you're using an Assistant and a single thread. On the other hand, you could programmatically update the assistant's instructions with key ideas over time; I think that's more or less how ChatGPT's memory feature works.
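If you go that route, it could look something like this (sketch only; since each of your users has their own assistant, updating its instructions acts as per-user memory, and what counts as a "key idea" is up to you):

```python
from openai import OpenAI

client = OpenAI()

BASE_INSTRUCTIONS = "You are the main assistant for this user."

def remember(assistant_id: str, memories: list[str], new_fact: str) -> list[str]:
    """Fold a new key fact into the assistant's instructions (rough memory sketch)."""
    memories = memories + [new_fact]
    client.beta.assistants.update(
        assistant_id=assistant_id,
        instructions=BASE_INSTRUCTIONS
        + "\n\nKnown about this user:\n"
        + "\n".join(f"- {m}" for m in memories),
    )
    return memories

# e.g.
# memories = remember(assistant_id, memories, "Frequently searches the map tool for Springfield addresses")
```

Keep that memory list short, though, since whatever you put in the instructions rides along with every run anyway.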