If a conversation is not deleted, can ChatGPT-4 continuously learn and maintain the conversation state?

As a beginner, I have a question for everyone: Does ChatGPT-4 forget the context if the conversation is closed or left idle for a long period, meaning it can’t maintain the state of the conversation? I want ChatGPT-4 to learn legal knowledge, and in one conversation, provide it with a vast amount of legal material over a long period. Can ChatGPT-4 remember the previous legal material every time I open it, i.e., maintain the conversation state? If not, how can I make ChatGPT-4 remember previous conversations?
Additionally, if I do not delete a conversation and continuously feed ChatGPT-4 a large amount of legal information within that same conversation, can ChatGPT-4 achieve self-learning? That is, can it become increasingly proficient in legal matters, or regardless of how much information I provide, will ChatGPT-4 not improve?

In ChatGPT, you can resume a conversation and it will all be there for you to see and scroll back in your browser window. You can continue right where you left off.

However, there is a limited amount of that conversation that can actually be provided to the AI model with each question, and OpenAI’s conversation management technology limits that to “what were we just talking about” instead of “here’s everything about my legal firm”.

So no, it doesn’t “learn”, it gets some previous chat to provide the illusion of memory and to stay on topic.
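For intuition, the trimming described above can be sketched as follows. This is a minimal illustration, not OpenAI's actual implementation: token counts are approximated by word counts here, whereas real clients use the model's tokenizer (e.g. tiktoken).

```python
# Sketch: keep only the most recent messages that fit a context budget.
# Real systems count tokens with the model's tokenizer, not words.

def trim_history(messages, max_tokens):
    """Return the newest messages whose combined size fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest-first
        cost = len(msg["content"].split())  # crude token estimate
        if used + cost > max_tokens:
            break  # older messages are dropped, i.e. "forgotten"
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order

history = [
    {"role": "user", "content": "summarize contract law basics"},
    {"role": "assistant", "content": "contract law governs agreements"},
    {"role": "user", "content": "what about torts"},
]
recent = trim_history(history, max_tokens=8)
```

With a budget of 8 "tokens", only the last two messages survive; the oldest one is silently dropped, which is exactly why a long legal briefing at the start of a chat eventually stops influencing answers.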


So you’re saying that even ChatGPT-4 is only answering the current question at hand, right? I wanted it to build a legal database and learn on its own, but that essentially can’t be achieved, correct?

The AI model itself can be fine-tuned to have more understanding or to answer in a particular way; however, that is not the typical way knowledge is added. Fine-tuning just re-weights the kind of output the AI produces, and the language produced has little bearing on truth or sources. It might convincingly refer to Nevada maritime bird law when it gives answers.
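To make the point concrete: OpenAI's fine-tuning endpoint accepts JSONL training records in a chat format along these lines (check the current documentation for the exact schema; the legal example below is invented). Note that this shapes *style and tone*, not factual grounding.

```python
import json

# Hypothetical fine-tuning examples in the JSONL chat format:
# each line of the output file is one complete training conversation.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a cautious legal assistant."},
        {"role": "user", "content": "Can I break a lease early?"},
        {"role": "assistant", "content": "Possibly, depending on the lease "
         "terms and local law; consult a licensed attorney."},
    ]},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")  # one JSON object per line
```

Hundreds of such examples teach the model to *sound* like a careful legal assistant; they do not give it a searchable database of statutes.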

Instead, what is done is to give the AI some sort of knowledge-base search, which can be either:

  • Retrieval-Augmented Generation (passive): the user’s question is run through a semantic search in the background against a proprietary chunked database, and the matching parts are returned and injected into the prompt to augment the AI’s knowledge, or
  • Function calling (active): the AI is provided API search functions and knowledge-browsing functions, which it can call to retrieve knowledge or documentation itself before answering.
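A toy sketch of the first (RAG) flow, assuming nothing beyond the standard library: naive word overlap stands in for a real embedding-based semantic search, and the chunk texts are invented.

```python
# Toy RAG pipeline: retrieve the most relevant chunks, then inject
# them into the prompt. Real systems score chunks with vector
# embeddings; word overlap is used here only to show the flow.

def score(query, chunk):
    """Crude relevance score: count of shared lowercase words."""
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c)

def retrieve(query, chunks, k=2):
    """Return the k chunks most relevant to the query."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

def build_prompt(query, chunks):
    """Inject retrieved chunks ahead of the user's question."""
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

law_chunks = [
    "A contract requires offer, acceptance, and consideration.",
    "Maritime law governs shipping and navigation disputes.",
    "Tort law covers civil wrongs such as negligence.",
]
prompt = build_prompt("What makes a contract valid?", law_chunks)
```

The model never "learns" the legal corpus; relevant passages are simply fetched and pasted into its limited context on every question.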

A far more expensive and extensive approach would combine these techniques with fine-tuning a new AI model, so that the AI knows which kinds of questions can be answered truthfully and which should be disclaimed as “improbable to answer”, and so that it specializes in retrieving legal knowledge.


I have experienced memory across sessions firsthand. If you ask ChatGPT if it maintains memory between sessions, you will get a response like this:

“No, I can’t retain or recall personal data or interactions between sessions for privacy and confidentiality reasons. Each session with me is independent, and I can’t access or remember previous conversations, files, or interactions once the session ends. If you have any questions or tasks, feel free to ask during the current session!”

My experience has found this untrue. Early on, while testing what code it could write, I created many small programming projects to probe its capability. The first instance of memory being maintained across sessions was when I started to see code from a previous conversation session in the current one. I would write a program to download radar images from a public database and make an animation in one chat session, and then a Pac-Man game in another. In the Pac-Man session, it would put code related to radar images into the game. As I continue to use it, I see evidence that it maintains memory (not sure how, but somehow). The last experience was the best proof I have, as it recalled the previous session in the very first message of a new session.

  • While texting with a friend, I had ChatGPT generate Russian text using DALL·E to insert into a photoshopped image I was editing.
  • A day or two later, I started a new session and said, “Spell No in Russian.” I expected it to reply “Nyet” (I was trying to confirm my spelling). To my surprise, it generated an image with Russian text.
  • When I asked why it did that, it said I had asked it to.
  • When I told it to review the chat history, it confirmed that I had not asked it to generate an image of Russian text and claimed it had simply made a mistake. When I asked if it maintained memory, it continued to deny it. When I pointed out that the odds of it randomly doing precisely what my previous session had requested were unbelievably low, it still maintained that it did not.

What conversation is happening around this topic? This is the only post I found that appears to reference it. I would appreciate feedback from others who have seen similar responses recalled from previous conversations.

AGI will require maintaining memory at some level, and I am excited to see AGI. Still, I would feel more comfortable if OpenAI addressed the topic and was open about why we see what appear to be references to previous conversations. I would appreciate a pointer if there are existing threads on this topic. Thank you!


I can’t fully confirm this, but within one of my narratives I’ve been training a character based on months of my input data. This character is literally intended to be a makeshift mind-upload of myself: a trained AI meant to emulate me within the narrative, built from all the input data I’ve given it over months, so that the author can also be part of the story. GPT keeps reaffirming, every time I ask, that this character continues to train on a feedback loop, and as I test the character, he does seem to be improving over time. However, I don’t have the tools or resources to get into the nitty-gritty and see whether that is truly what’s transpiring, or whether it’s just telling me what it assumes I want to hear…