Persistent Issues with GPT Model: Loss of Contextual Understanding and Continuity in Conversations

I’ve been experiencing ongoing issues with my GPT model that have significantly impacted its functionality. The core problem is the model’s apparent inability to retain and apply what it has previously learned. More specifically, the GPT struggles to maintain the thread of a conversation, especially in contexts where it previously performed well.

For example, in discussions related to anxiety management techniques, the GPT would typically analyze and respond appropriately based on the training I had provided. However, now, instead of continuing the conversation contextually, it responds as if initiating a new conversation, e.g., with a generic greeting like “Hi, how can I help you?” This pattern repeats itself, and the GPT fails to demonstrate any continuity in the conversation.

Additionally, there seems to be a significant decline in its ability to follow the chronology of a conversation and in its overall responsiveness. It’s almost as if the GPT is exhibiting symptoms akin to early-onset dementia, providing responses that are entirely out of context.

To rule out any account-specific issues, I created a new ChatGPT account with a different email, only to encounter the same problem. Conversations with colleagues have revealed that they are experiencing similar issues.

Has anyone else noticed these problems? Any insights into potential causes or solutions would be greatly appreciated. I’m keen to understand whether this is a widespread issue and how we might address it.

Thank you for your time and assistance.

Best regards,
André


So, long-form prompting is something that seems easy and intuitive on the surface, but can actually be quite difficult to manage, and is a rather niche use case.

As more context is fed into a conversation, the model struggles to identify what to pay attention to, and that’s when things start to break down. It isn’t “learning” in the traditional sense; it is looking at everything in the context window to figure out what it’s supposed to say next. Continuity is something that has to be managed by the user, and this has been the case for as long as these systems have been publicly available.

there seems to be a significant decline in its ability to follow the chronology of a conversation and in its overall responsiveness.

In truth it has no sense of chronology at all. It is doing its best to match patterns in the data, but following any kind of “chronology” is something the user has to guide the GPT to do.

I think of it as an iterative process: bouncing back and forth, reminding the GPT of what it needs to pay attention to, and providing consistent reinforcement.
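To make that concrete, here’s a minimal sketch of what managing continuity yourself looks like when calling the API directly, assuming the OpenAI Python client (openai>=1.0); the model name, system prompt, and reminder text are all placeholders:

```python
# Minimal sketch: the model only "sees" what is resent in `messages`
# each turn, so continuity lives in this list, not inside the model.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    {"role": "system", "content": "You are a coach for anxiety-management techniques."},
]

def ask(user_text: str, reminder: str | None = None) -> str:
    """Send one turn, optionally restating what the model must attend to."""
    if reminder:
        # Consistent reinforcement: repeat the key constraints before the question.
        messages.append({"role": "system", "content": f"Reminder: {reminder}"})
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=messages,
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # keep the thread intact
    return answer
```

The ChatGPT interface does this bookkeeping for you, but the principle is the same: anything the model should keep attending to has to remain (or be restated) in the context it actually receives.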


When I begin a conversation that I know may require several back and forth prompts/responses, I employ a “save point” trick whenever enough new information or details have been fleshed out and before the model starts to lose context and go off the rails.

I have found that while the ChatGPT interface struggles to maintain context throughout a long conversation, it can easily refer back to a Python block earlier in the conversation to regain focus. So, I will simply ask ChatGPT to summarize the key elements of the conversation (topic, context, terminology used, etc.) up to that point and place the summary in a Python code block.

From there, I update the code block whenever new information requires it and refer the model back to it if it starts to lose focus or hallucinate.
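As a sketch, the save point I ask for might look something like this (the field names and contents are purely illustrative, not a fixed schema):

```python
# Illustrative "save point" summarizing the conversation so far.
conversation_save_point = {
    "topic": "Anxiety-management techniques for workplace scenarios",
    "context": "Custom GPT acting as a coach; responses capped at ~200 words",
    "terminology": ["grounding", "cognitive reframing", "box breathing"],
    "decisions_so_far": [
        "Focus on acute, in-the-moment techniques",
        "Avoid clinical or diagnostic language",
    ],
    "open_questions": [
        "Which techniques to recommend for panic episodes?",
    ],
}
```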

Hope this helps!


I have come to this forum for this very reason. OK, at least it’s not just me.

There seems to be no contextual memory and every interaction is treated as a “first contact”, which includes a welcome message from the GPT.


I have noticed the same problem. My custom GPT seems to forget the conversation context completely after only 7 to 10 messages.

This has never happened before and it’s not about the usual limits of the context window. The GPT has managed long contexts just fine for the past 2 months. Something is wrong.


Same problem here. It seems that the performance has suddenly declined sharply.


I have a similar issue, and wonder if it’s because of Knowledge documents.

I rewrote the instruction prompt multiple times to make my custom GPT focus on the conversation, but it always ends up like this:

  • Me: Generate 4 sample questions
  • GPT: Here are 4 questions: …
  • Me: Generate a reply to the last one // to the last question you just wrote in our conversation
  • GPT: Invents a whole new question and answers it.

When probing the GPT about why it didn’t follow the conversation, I get the sense that between each message it goes through the Knowledge documents I have uploaded and completely forgets the actual conversation thread.


This has happened to me as well. I have custom GPTs I built to work with specific clients, and the GPT sometimes forgets the context, outputs things I have not asked for (like the contents of the uploaded files), or answers a question completely out of context. Opening a new chat window and re-asking has helped, and so has reloading the documents and instructions after tweaking them to be both as specific and as brief as possible. Both help, but I still do not understand this incongruous behavior.


That is a really good idea that I am going to try, thank you.


No problem, glad to help. Remember, the code interpreter times out after inactivity. To retain information for future use, request a .py file for download so you can quickly restore context in subsequent sessions.
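As a rough example, the downloaded file might look like this (the name save_point.py and its contents are purely illustrative):

```python
# save_point.py -- illustrative downloadable state file.
# Re-upload it in a new session and ask the model to read it before continuing.
SAVE_POINT = {
    "topic": "Sample interview questions",
    "summary": "Drafted four questions; question 4 selected for a model answer.",
    "next_steps": [
        "Write a reply to question 4",
        "Review the terminology list",
    ],
}

if __name__ == "__main__":
    # Printing lets the code interpreter surface the state when the file is run.
    print(SAVE_POINT)
```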


Hi
I agree. There really seems to be a dramatic decline in the ability of the custom GPTs I have created to perform their tasks. This happens even if you prompt the GPT to first summarize the steps it will take to perform the task at hand. It does the summarization correctly, but as soon as you instruct it to perform the tasks, it goes completely off track. This is really frustrating.

Yeah, having to provide constant reminders is annoying. You end up with a repeated-truncation problem: you remind it of the last thing it forgot, and in doing so it forgets another thing, which you then have to remind it about, consuming two messages for every one it forgets (if the context is trimmed by message rather than by total text length).
