I have a question about GPTs (customizable versions of ChatGPT).
I know that they retain memory of the conversation. Suppose I have a conversation about some topics; let's call them “concepts”. Can I assume that it will remember all the concepts we talked about, or is it better to wrap it programmatically and save them in some persistent storage?
In other words: is the knowledge fully retained, or is it not guaranteed that everything will be saved, due to embedding/compression?
By design it does see everything you say during the chat. But when you have a very long conversation, the AI starts falling back on its own general knowledge instead.
To work around this you could change the temperature (if you're using the API). Or, in chat, you could ask the model to create a summary of everything you have talked about, save it in a .txt file, and use it as external storage for the conversation.
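The summary-as-external-storage idea can be sketched like this. This is a minimal sketch, not a definitive implementation: it assumes the `openai` Python package (v1+) with an `OPENAI_API_KEY` in the environment, and the file name, model name, and system prompt are illustrative placeholders.

```python
# Sketch: ask the model to summarize the chat so far, persist the summary
# to a .txt file, and reload it at the start of the next session.
# Assumption: openai Python package v1+ and OPENAI_API_KEY are available.

from pathlib import Path


def save_summary(summary: str, path: str = "conversation_summary.txt") -> None:
    """Persist the model-written summary to a plain-text file."""
    Path(path).write_text(summary, encoding="utf-8")


def load_summary(path: str = "conversation_summary.txt") -> str:
    """Load a previously saved summary, or return '' if none exists yet."""
    p = Path(path)
    return p.read_text(encoding="utf-8") if p.exists() else ""


def summarize_conversation(messages: list[dict]) -> str:
    """Ask the model to condense the chat history into a short summary."""
    from openai import OpenAI  # lazy import so the file helpers work offline

    client = OpenAI()
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",  # placeholder; any chat model works
        messages=[
            {"role": "system", "content": "Summarize the key concepts discussed."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```

On the next session you would prepend `load_summary()` as a system message, so the model "remembers" the earlier concepts without replaying the whole chat.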
In my experience:
a) the entire chat history for each chat is saved (at least since I started using ChatGPT a year ago)
b) each new chat with the GPT starts fresh. It does NOT remember anything from previous chats
c) if you have an extra-long chat thread that covers multiple topics (or concepts), the context from much earlier messages is lost fairly quickly (within a few messages), though this appears to be getting better over time, with the model able to recall from much farther back in the chat session
The ability to quickly recall information from previous chat threads, or across chat threads, is something I have been thinking about, but I don't think there is a solution yet.
I'm fairly new to ChatGPT. In my limited time using the web ChatGPT interface and the OpenAI API, the AI in most models soon forgets what you tell it. For example, if I tell it my name is Cristian and I'm 33 from Australia, then converse with it for a short time on other topics and ask "What do you know about me?", it will almost always say words to the effect of "nothing". This created a problem for me, as I needed it to remember what I told it for a project I'm doing.
I experimented with feeding it information from our previous conversations to constantly remind it of what it had been told. This worked most of the time, but I soon hit my token limits.
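One way to keep that re-fed history under the token limit is to drop the oldest turns first. A minimal sketch, assuming a rough heuristic of about 4 characters per token (a tokenizer such as tiktoken would give exact counts):

```python
# Sketch: trim re-sent conversation history to a token budget by keeping
# the most recent messages and dropping the oldest ones.
# Assumption: ~4 characters per token, a rough English-text heuristic.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (about 4 characters per token)."""
    return max(1, len(text) // 4)


def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Return the most recent messages whose combined estimate fits the budget."""
    kept: list[dict] = []
    used = 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = estimate_tokens(msg["content"])
        if used + cost > budget:
            break  # this message (and everything older) no longer fits
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

Pairing this with a running summary of the dropped turns (as suggested earlier in the thread) keeps the reminder short while preserving the old concepts.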
But I've found that using the Assistants API with gpt-4-1106-preview gives an excellent memory. As far as I know, it already does the work of reminding the AI of previous info behind the scenes, so there's no need to code it yourself. I don't think its latency is as low as 3.5 Turbo's, but the memory and quality of responses have impressed me.