GPT-4 is not able to follow a continuous conversation

I can’t find the pattern, but at some point ChatGPT becomes unable to follow the conversation and starts assuming things on the fly.
It mostly happens when many messages have been exchanged within the conversation; it doesn’t seem to be triggered by anything more specific than that.
For example, it happens when several messages are sent and the last one makes a request based on the previous ones.
Another example would be messages containing large amounts of code.
It’s a waste of time (and money) explaining the same thing over and over again every X messages.

The models can only process a finite amount of text at once, measured in tokens (a token is roughly three-quarters of an English word). For example, the GPT-4 “8k” model can handle at most 8,192 tokens, which works out to roughly 5,500–6,000 words, and that budget covers input + output combined.
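To make the budget concrete, here is a minimal sketch of a word-based size check. The names and the 0.75 words-per-token ratio are illustrative rules of thumb; accurate counts require a real tokenizer such as tiktoken.

```python
WORDS_PER_TOKEN = 0.75          # common rule of thumb for English text
CONTEXT_LIMIT_TOKENS = 8192     # GPT-4 "8k" model

def estimated_tokens(text: str) -> int:
    """Estimate the token count from the word count."""
    return round(len(text.split()) / WORDS_PER_TOKEN)

def fits_in_context(prompt: str, reserved_for_output: int = 1000) -> bool:
    """Check whether a prompt leaves room for the reply within the limit."""
    return estimated_tokens(prompt) + reserved_for_output <= CONTEXT_LIMIT_TOKENS
```

Note that `reserved_for_output` matters: a prompt that fills the entire window leaves no room for the model to answer at all.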

So the developer’s trick is to bring relevant past words back into the conversation to make things seem continuous. Developers, including OpenAI developers, can only do so much, and sometimes things like this happen with large amounts of content.
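One common version of that trick is a sliding window: keep only the most recent turns that fit a token budget. This is just a sketch of the general idea, not ChatGPT’s actual (undocumented) mechanism, and the function name is my own.

```python
def truncated_history(turns: list[str], budget_tokens: int) -> list[str]:
    """Keep the newest turns whose combined estimated size fits the budget."""
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):                 # walk from newest to oldest
        cost = round(len(turn.split()) / 0.75)   # rough words-to-tokens estimate
        if used + cost > budget_tokens:
            break                                # everything older is dropped
        kept.append(turn)
        used += cost
    return list(reversed(kept))                  # restore chronological order
```

Whatever falls outside the window is simply gone as far as the model is concerned, which is exactly the “forgetting” you are seeing.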

The good news is that the trend in the industry is to keep raising the amount of text a model can process at once. Soon it will feel virtually unlimited as the models expand their capabilities.


Thanks for your time!
For example, if I send two messages of 2k words each, and GPT-4 responds with 2k words each, and then in a third message I ask for some code based on those first two, GPT-4 won’t be able to follow?

Thanks in advance

There is no set formula, and ChatGPT doesn’t strictly go by the capabilities of the underlying model.

ChatGPT has its own conversation management system with undocumented operation, but it seems both to retrieve only the past conversation that is relevant (allowing it to remember you named it “Elvis” 20 turns ago, while at the same time being unable to revert to prior generated code after two failed attempts) and to minimize that conversation to reduce computing costs.
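The retrieval half of that behavior could be imagined as something like the following sketch, which scores past turns by word overlap with the new message and keeps only the top few. This is purely speculative; ChatGPT’s real mechanism is undocumented and certainly more sophisticated.

```python
def most_relevant_turns(turns: list[str], query: str, k: int = 3) -> list[str]:
    """Score each past turn by word overlap with the new query; keep the top k."""
    query_words = set(query.lower().split())
    scored = sorted(
        turns,
        key=lambda t: len(query_words & set(t.lower().split())),
        reverse=True,
    )
    return scored[:k]
```

A scheme like this would explain the symptom: a short fact (“my name is Elvis”) scores well against a direct question, but a long block of code rarely overlaps neatly with a follow-up request, so it silently drops out.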


Oh. I see. So, it won’t work as a “code companion”. Can you point me in the direction to find a tool like that?
Thanks in advance!

When you use the API and your own software to interface with the AI model, you can choose your own conversation-history management strategy: for example, one where the history is completely lossless, and you decide which conversation turns are no longer relevant and can be disabled or deleted. You can directly edit the code the AI already wrote, make operations permanent, etc.
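As a minimal sketch of that kind of lossless, client-side history manager: every turn is kept forever, but you control which ones are actually sent to the API. The class and method names here are my own invention, not part of the OpenAI SDK.

```python
class Conversation:
    """Lossless conversation history; turns can be hidden without being lost."""

    def __init__(self, system_prompt: str):
        self.turns = [{"role": "system", "content": system_prompt, "enabled": True}]

    def add(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content, "enabled": True})

    def disable(self, index: int) -> None:
        """Hide a turn from the model; it stays in local history."""
        self.turns[index]["enabled"] = False

    def as_messages(self) -> list[dict]:
        """The messages list actually sent with each API request."""
        return [
            {"role": t["role"], "content": t["content"]}
            for t in self.turns
            if t["enabled"]
        ]
```

For example, after two failed code attempts, you could `disable()` both bad answers so the model never sees them again, something ChatGPT’s built-in interface won’t let you do.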

You pay more for sending that awesome memory with each question, though.


So, I should pay to build my own prompt because the current one does not work as intended, and then pay more to use it?

You can just understand that ChatGPT doesn’t have a flawless memory of large messages you sent and large responses it wrote. And neither can the AI model behind it support an endless conversation.

Your programming “companion” when using ChatGPT? A text editor like Notepad++, where you can compose a single long input containing all the information required to form a new answer. Then paste.

Tried that, several times.
As I explained in my first message, there is no pattern. Sometimes it works correctly and follows the conversation, and sometimes its responses are completely outside the context; it even makes assumptions about data I had already explained in previous messages.
Now I understand, from your explanation, that there’s no way to handle that kind of fluid activity with ChatGPT except by developing my own prompt.

Thanks for your time!