Chat Gpt 4 has lost context of my whole conversation

ChatGPT-4 has lost my conversation context and is now giving strange responses disconnected from the background of the conversation.


This could possibly happen in ChatGPT (you say ChatGPT, which is the web interface, but selected the API category) for a reason others have seen when trying to recall past conversations: corruption in the conversation database that prevents message retrieval.

You can check whether the conversation can be fully read and recovered with this technique:

  • Press the “share conversation” button in the left sidebar (the middle button). It copies a link to the clipboard.
  • Paste the link into a new browser tab. That should show a shareable, complete copy of the conversation, or reveal that it is corrupted.
  • Press the “continue this conversation” button at the bottom. That will launch a new ChatGPT session and a new conversation based on the shared version.

Of course, you could just be experiencing the system’s generally poor memory. ChatGPT only receives some of the most recent conversation, enough to know what your current topic is; it can’t recall everything in a lengthy chat.
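To illustrate that sliding-window behavior, here is a rough sketch of how a chat interface might drop old turns (a hypothetical helper, assuming ~4 characters per token, which is only a crude estimate of the real tokenizer):

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def trim_to_window(messages: list[dict], max_tokens: int) -> list[dict]:
    """Keep only the most recent messages that fit inside max_tokens.

    Older turns are silently dropped, which is why the model can no
    longer 'remember' the start of a very long conversation.
    """
    kept: list[dict] = []
    used = 0
    for msg in reversed(messages):       # walk newest-first
        cost = estimate_tokens(msg["content"])
        if used + cost > max_tokens:
            break                         # everything older is cut off
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order
```

Anything before the cut-off simply never reaches the model, so from its point of view those early messages never happened.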

Welcome to the forum!

Sounds like you have a long conversation or one that has filled the token limit.

That is expected and is not a bug.

I will be removing the bug category from this topic.

If you do feel it is a bug, then provide a minimal working example and post it with the bug category.

I agree with Eric’s assessment.
You can use the following prompt every 4–8 thousand tokens to keep your context inside the context window:

write coherent independent summary entire conversation append "let's continue"

It’s not a perfect solution, but it’s better than nothing.
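One way to automate that idea is to watch the running token estimate and collapse older turns once it crosses a threshold. This is only a sketch: the placeholder summary text here would, in real use, come from sending the prompt above to the model, and the ~4 chars/token estimate is an assumption:

```python
SUMMARY_PROMPT = (
    'write coherent independent summary entire conversation '
    'append "let\'s continue"'
)

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def compress_history(history: list[dict], threshold: int) -> list[dict]:
    """If history exceeds `threshold` estimated tokens, replace all but
    the last two turns with a single summary message.

    The summary content here is a placeholder; in practice you would
    obtain it by sending SUMMARY_PROMPT to the model first.
    """
    total = sum(estimate_tokens(m["content"]) for m in history)
    if total <= threshold:
        return history
    summary = {
        "role": "assistant",
        "content": "[summary of earlier conversation] let's continue",
    }
    return [summary] + history[-2:]
```

The trade-off is that detail from the summarized turns is lost, but the recent turns and the gist of the rest stay inside the window.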


It’s possible that a conversation reaches the token limit, and thus the first sentences of the conversation are no longer available for the model to consider in its replies.
But that also means that everything else is still inside the context window and should be considered.
When you experience a situation where a reply suddenly lacks all context of the previous discussion, you are not in a good spot. I suggest either hitting the thumbs-down button and checking whether the next reply has the context, or rewriting the last prompt to specifically include the whole previous conversation.
This also means it’s not worth the time or the nerves to try to correct the model by reminding it of the context. The “bad” reply is now in your conversation history, and once inside the context window it will degrade the whole conversation going forward.

It is always better to keep a conversation tightly focused on the topic you want to discuss, and the interface offers the two options described above to manage the situation.

Either way, if you can’t get the context back, it’s likely “game over” and you should start a new conversation. In this case you can limit the setback by going back one more reply in the conversation history and rewriting your prompt to create a complete summary of the good parts of the conversation, which you can in turn use to kick-start the next conversation.

Hope this helps!

And then there is this brand new feature which may also help you, and me just as well:

How can we use custom instructions through the API?

It’s the system prompt, most likely.
Note that you named the topic “Chat GPT4 has lost…”, so that’s a completely different learning curve now. It’s a fun ride though. Enjoy!
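For anyone wondering what that looks like on the API side: custom instructions roughly correspond to the `system` message in a Chat Completions request. A minimal sketch of the request body (the instruction text is just an example):

```python
import json

# Custom-instruction-style text goes into the "system" role; the user's
# actual question follows as a "user" message.
payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "system",
         "content": "You are a concise assistant. Answer in bullet points."},
        {"role": "user",
         "content": "Summarize our options for handling long conversations."},
    ],
}

# This JSON string is what gets POSTed to the chat completions endpoint.
body = json.dumps(payload)
```

The system message rides along with every request, which is what makes it behave like persistent custom instructions.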


Yes, this is really a problem, and the other problem is the following: “The message you submitted was too long, please reload the conversation and submit something shorter.” It is just one document of only 31,000 characters. For a paid subscription, I feel that is far too little.

Yes, and at the same time these restrictions are completely normal and expected. In that sense, you can definitely join the other developers on this forum in learning to implement solutions.

I’m the village idiot when it comes to development and coding, but from an end-user perspective I can shed light and suggest solutions. I, for one, do not mind paying a bit more for extra capacity. In my view the technology itself is going very well but can do better. I don’t know if my 2 cents will be worth much.

Welcome to the developer forum!

31,000 characters is approximately 6,000 words, or roughly 8,000 tokens. The standard GPT-4 model has a context limit of 8,192 tokens (8k), shared between your input and the reply, so your input text is too large. You could split the text into two blocks and work on each half, or you could try the GPT-3.5-Turbo-16K model, which would be able to process your entire document.
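If you want to split the document yourself, a rough sketch of the two-block approach (splitting near the middle but preferring a paragraph boundary so neither half starts mid-sentence; character counts only approximate tokens, so keep each half well under the limit):

```python
def split_in_half(text: str) -> tuple[str, str]:
    """Split a document into two blocks near its midpoint.

    Prefers a blank-line (paragraph) boundary before the midpoint,
    falls back to a space, and as a last resort cuts at the midpoint.
    """
    mid = len(text) // 2
    cut = text.rfind("\n\n", 0, mid)     # nearest paragraph break
    if cut == -1:
        cut = text.rfind(" ", 0, mid)    # nearest word break
    if cut == -1:
        cut = mid                        # hard cut as a last resort
    return text[:cut].rstrip(), text[cut:].lstrip()
```

You would then send each half in its own request (or its own message) and work on them separately.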


I’ve been using it for a while now, and while I’ve had some fruitful interactions, lately there seems to be an issue with context maintenance in both new and old chats.

In the initial stages, when I started interacting with ChatGPT, the system was adept at maintaining context throughout the conversation. This was true for both my free and paid accounts.

However, over the past few weeks, it has become increasingly evident that ChatGPT struggles to remember the entirety of our chat. Most times, it appears as though the system only acknowledges my most recent input, causing confusion and misunderstandings. Is the length of my previous input causing this?

When considering the utility and efficiency of a service, especially one that is subscription-based, it is paramount that the service delivers what’s promised. I’ve been a paying customer for three months, and my initial experience was great. However, with the current inconsistency, it feels like the quality of service has been watered down.

This isn’t just about having casual conversations for me. My business, to some extent, relies on ChatGPT. Thus, ensuring a consistent user experience isn’t just a matter of convenience; it’s about professional reliability.

I sincerely hope that the team behind this technology takes this feedback seriously and works towards rectifying the issue.


I have the same experience, and the only thing that helped a little bit was to get rid of custom instructions.


I think this has gotten worse. I’m not sure it’s capable of remembering old instances in previous conversations anymore tbh. If anyone knows differently, please advise!

I ran into this with my own GPT that I created to record all my vacation costs. My workaround was to copy and paste all relevant chats from the original session into a new one. This allowed the new chat session to continue recording my vacation costs while remembering the previously recorded ones. I also edited the new chat to prompt me if bandwidth reaches 75%, at which point it would create a summary of the chat which I could paste into the next new chat.
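That 75% trigger can also be approximated outside the chat itself. A sketch under stated assumptions: the ~4 chars/token estimate and the 8,192-token window are both assumptions, not exact values:

```python
def context_usage(history: list[str], window_tokens: int = 8192) -> float:
    """Estimated fraction of the context window in use (~4 chars/token)."""
    used = sum(max(1, len(text) // 4) for text in history)
    return used / window_tokens

def should_summarize(history: list[str], limit: float = 0.75) -> bool:
    """True once estimated usage crosses the summarize-and-restart point."""
    return context_usage(history) >= limit
```

Once `should_summarize` trips, that is the moment to ask for a summary and carry it into a fresh chat, before older turns start falling out of the window.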

I just started having this issue out of nowhere. I first noticed it in one of my conversations within a project (the new feature). So I tried having a completely separate conversation from scratch outside of the project, and basically from the 2nd prompt onwards it started to lose track of the context, so I don’t think this has anything to do with the tokens. Could this be due to a bug?