I’m working on a project where a user has a conversation (Thread A), and then another conversation about that conversation (Thread B). Thread A and Thread B should be separate threads, because they’re different conversations and displayed separately. But when Thread B runs in an Assistant, it needs Thread A as additional context.
Is there any easy way to accomplish this?
(One idea I’ve had is to export Thread A into a file that gets passed as context for Thread B’s conversations. I’m wondering if there’s a more direct way.)
Hello, I think the most straightforward yet legitimate solution is a RAG (Retrieval-Augmented Generation) process. There are popular libraries/frameworks for this, like LlamaIndex (LlamaIndex 🦙 0.9.40).
But I believe the so-called “memory” feature that OpenAI has launched for some selected Plus users, as discussed in this forum (Your gpt can now learn from your chat), will cover that need officially on the platform, when and if it is released, of course.
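In the meantime, the simpler approach from the question (feeding Thread A in as context for Thread B) can be sketched directly. This is a minimal, hypothetical sketch: the `thread_to_context` helper is illustrative, not part of any SDK, and the commented-out Assistants API calls assume the official `openai` Python library with `thread_a_id` already known.

```python
# Hypothetical helper: flatten Thread A's messages into a single context
# block that can be prepended to Thread B. Not part of the OpenAI SDK.

def thread_to_context(messages):
    """Render a list of {role, content} dicts as a readable transcript."""
    lines = [f"{m['role'].upper()}: {m['content']}" for m in messages]
    return "Context from a previous conversation:\n" + "\n".join(lines)

# With the official openai SDK, Thread A's messages could be fetched and
# injected into a new Thread B roughly like this (assumption: you have an
# API key configured and a valid thread_a_id):
#
#   from openai import OpenAI
#   client = OpenAI()
#   msgs = client.beta.threads.messages.list(thread_id=thread_a_id)
#   context = thread_to_context(
#       [{"role": m.role, "content": m.content[0].text.value} for m in msgs]
#   )
#   thread_b = client.beta.threads.create(
#       messages=[{"role": "user", "content": context}]
#   )

if __name__ == "__main__":
    demo = [
        {"role": "user", "content": "What is RAG?"},
        {"role": "assistant", "content": "Retrieval Augmented Generation."},
    ]
    print(thread_to_context(demo))
```

The trade-off versus full RAG is that this injects the entire transcript, which only works while Thread A fits in the context window; RAG retrieval scales to longer histories.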