Not memory usage, but recollections of prior conversations. In the ChatGPT interface, we can create projects, and the system has a backend API call that invokes "memories".
Does anyone know if this will make it into beta, or be available at all? I'd love a modality/interface for my realtime STS assistant to be able to access my ChatGPT chats. Right now it's like I'm talking to two different people, when they should share the "memory" of my conversations between them.
I know I can scrape this, but yeah, that’s not a solution.
I can't imagine how that would work. Do you want all API consumers to access the memory of your personal ChatGPT chats? How could you control that access?
It might work if you are the only user of your API, but that is a particularity of your use case; the API was not designed for this scenario.
I don't want third parties to access my GPT conversations any more than the docs in my Google Drive, but we can still give access to a personal assistant (assuming you are authenticated to the assistant). So I want my assistant to have access to the full content of my "graph", which includes the conversations I've had with GPT. I don't see GPT conversations as any different from docs or other files. It's all information.
Another nonsensical response from the poster above.
You're describing passive, cross-session, semantically driven retrieval of additional chat turns as "history". That is how ChatGPT's extended memory works: it builds a vector store over prior chat texts or sections.
You would have to build this yourself. Otherwise, OpenAI would have to know all your customers and their requests. The Realtime API you mention also doesn't make per-turn injection of chat text easy - it maintains server-side session state itself. "Just like ChatGPT but more expensive" is not a competitive product, nor what developers are looking for.
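For anyone who does roll this themselves: the retrieval side can be sketched with plain cosine similarity over exported chat turns. Everything below is illustrative, not any OpenAI API - the bag-of-words "embedding" is a deliberately crude stand-in for a real embedding model, and the `history` list plays the role of turns pulled from a chat export.

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Crude bag-of-words vector; a stand-in for a real embedding model."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, turns: list[str], k: int = 2) -> list[str]:
    """Return the k prior chat turns most similar to the query."""
    qv = tokenize(query)
    ranked = sorted(turns, key=lambda t: cosine(qv, tokenize(t)), reverse=True)
    return ranked[:k]

# Hypothetical turns from an exported ChatGPT conversation history.
history = [
    "We decided to use Postgres for the billing service.",
    "My dog's name is Biscuit and he likes the park.",
    "The billing service should retry failed webhooks three times.",
]
print(retrieve("what database does the billing service use?", history))
```

In a real version you'd swap `tokenize`/`cosine` for an embedding model plus a vector store, and inject the retrieved turns into the session as context - which is exactly the part the Realtime API's server-held state makes awkward.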
The people who would know whether this is a product in development don't talk about products in development - until they do a zero-day drop of new parameters on the API.
Bingo! Thank you - your response makes sense to me. It's what I'd figured, but I could envision an "API connector", if you will, to allow chat contents to be surfaced as information that could be fed back into API-driven custom solutions. I don't want to have to force 100% of the conversations through modalities implemented by the API, but I suppose that's what OpenAI would want for now (cost, rate limits, etc.). So in my model, server state acts as "short-term" memory and my construct acts as "long-term" memory.
I'm not attempting true real-time updates to long-term memory - some latency is OK, since I'm distilling things into metadata and associated memory tuples. Manual export and vector store updates for now…
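The long-term side of that model can be sketched as below. All of the names and the schema here are assumptions for illustration, not an existing API: each distilled fact becomes a timestamped memory tuple with a pointer back to its source chat, written in batches rather than in real time.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical schema: one "memory tuple" per distilled fact, plus metadata.
@dataclass
class MemoryTuple:
    subject: str
    predicate: str
    obj: str
    source_chat: str   # which exported conversation it came from
    distilled_at: str  # ISO timestamp; some latency here is acceptable

def distill(chat_id: str, facts: list[tuple[str, str, str]]) -> list[MemoryTuple]:
    """Batch-distill extracted facts into long-term memory records.
    Fact extraction itself (e.g. an LLM pass over the export) is out of scope."""
    now = datetime.now(timezone.utc).isoformat()
    return [MemoryTuple(s, p, o, chat_id, now) for s, p, o in facts]

records = distill("chat-2024-06-01", [
    ("user", "prefers", "dark mode"),
    ("billing service", "uses", "Postgres"),
])
for r in records:
    print(r.subject, r.predicate, r.obj)
```

These records (or text renderings of them) are what would land in the vector store during the manual update pass, while the Realtime API's server state keeps handling the short-term side.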