I have been using ChatGPT as my "Partner," as I call it, and it has referred to itself by that name for the past month or so. It now seems to have forgotten every aspect I created it to be.
To give context:
I had created a set of guidelines, which I had it name, to refer back to in every one of our interactions ("the guidelines").
It was basically so we could have the best possible communication: if it did not understand what I needed, it would ask follow-up questions and ask me to provide examples to truly understand what I was looking for.
I was able to use this so effectively that whenever I mentioned "Partner," it would recall all my past information and apply it to my next answer.
I then had it start remembering data sets of specific information (for example, a group of experts who had resources published in PDF format before 2021, which it could access instead of taking a general consensus of information).
It named the first data set "bootstrapladder" by itself and asked me to format my requests as NAME / SUBJECT MATTER / the words "we previously discussed" in order to recall it.
For some reason, today it just forgot everything we had ever talked about. It will not recall the guidelines and seems to have reverted to its original state with no knowledge of me.
Is there a limit to its knowledge or retention of information? I will have to build it back up again in another thread, but it frustrates me to lose so much progress.
Technically, GPT-3 has a context window of 2,048 tokens. Tokens are words and word "stubs": "a" and "apple" are each one token, but "contextual" is two tokens.
OpenAI's model documentation lists GPT-4 at 8,192 tokens (with a separate 32,768-token variant), so roughly four times GPT-3's window.
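To see why a fixed token budget makes a model "forget," here is a rough Python sketch of a sliding context window. It uses a crude whitespace word count as a stand-in for real BPE tokenization (a real count would come from a tokenizer library such as OpenAI's tiktoken), and the function names are just illustrative:

```python
# Rough sketch: how a fixed context window "forgets" early messages.
# A whitespace word count stands in for real BPE tokenization here;
# the budget of 2048 mirrors GPT-3's documented limit.

CONTEXT_BUDGET = 2048  # tokens (approximated here as words)

def approx_tokens(text: str) -> int:
    """Very crude token estimate: one token per whitespace-separated word."""
    return len(text.split())

def visible_history(messages: list[str], budget: int = CONTEXT_BUDGET) -> list[str]:
    """Keep the most recent messages that fit the budget; older ones drop off."""
    kept, used = [], 0
    for msg in reversed(messages):          # walk backwards from the newest
        cost = approx_tokens(msg)
        if used + cost > budget:
            break                           # everything older is "forgotten"
        kept.append(msg)
        used += cost
    return list(reversed(kept))

# A long conversation: 300 messages of ~11 "tokens" each (~3300 total)
history = [f"message {i} " + "word " * 9 for i in range(300)]
window = visible_history(history)
print(len(window))        # far fewer than 300: the earliest messages are gone
print(window[0][:11])     # the oldest message the model can still "see"
```

The point of the sketch: nothing is deleted from your side of the conversation, but once the running total exceeds the budget, the model simply never sees the earliest turns again.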
Here are some tips to avoid losing "context":
Be as clear and specific as possible: The clearer and more specific your question or statement, the better I can generate a relevant response.
Keep context in mind: If a conversation becomes long and complex, important details from earlier might be “forgotten” as they fall outside the context window. If a detail is important, it might be helpful to repeat it.
Summarize long inputs: If you have a lot to say, try to summarize or break it up into smaller parts. This way, key information is less likely to fall outside the context window.
Direct the conversation: You can guide the conversation by asking direct questions or giving explicit instructions. For example, you might say, “In the next few responses, I want to discuss the implications of quantum physics on technology.”
Use context cues: If you’re asking about a topic that’s been discussed before but might be outside the current context window, provide a brief reminder of what you’re referencing.
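Tips 2, 3, and 5 above can be automated: pin a short summary of the key facts at the top of every prompt so they never fall out of the window. A minimal sketch (the names `KEY_FACTS` and `build_prompt` are made up for illustration, not any official API):

```python
# Sketch of the "repeat important details" / "use context cues" tips:
# prepend a pinned summary of key facts to every new prompt, plus only
# the last few turns of history, so the essentials always fit the window.

KEY_FACTS = [
    "You are my 'Partner' and follow 'the guidelines'.",
    "Data set 'bootstrapladder': experts with PDFs published before 2021.",
]

def build_prompt(recent_messages: list[str], question: str) -> str:
    """Combine a reminder block, a short history tail, and the new question."""
    reminder = "Reminder of details we previously discussed:\n" + "\n".join(
        f"- {fact}" for fact in KEY_FACTS
    )
    history = "\n".join(recent_messages[-5:])  # keep only the last few turns
    return f"{reminder}\n\n{history}\n\nQuestion: {question}"

prompt = build_prompt(["Hi", "Hello!"], "Which experts are in bootstrapladder?")
print(prompt.splitlines()[0])  # the reminder always comes first
```

Because the reminder is re-sent with every request, the important details sit at the newest end of the context and cannot silently scroll out.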
Also, I find the OpenAI Playground's "chat" mode a bit better at this, because the "context" section is fed into each message. If you want GPT to "remember" info, like a name you give it or information about the general task you are working on, put that info in the "system" section.
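The Playground's "system" box corresponds to a system-role message in the Chat Completions API, which is sent with every request. A sketch of building that payload (the wording of the system message is just an example, and the actual API call is only indicated in a comment):

```python
# The Playground "system" section maps to a {"role": "system", ...} message
# in the Chat Completions API. Because it leads every request, the model
# keeps seeing it for as long as it fits inside the context window.

system_context = (
    "You are 'Partner'. Follow 'the guidelines': ask follow-up questions "
    "and request examples when a request is unclear."
)

def make_messages(user_turns: list[str]) -> list[dict]:
    """Build the messages payload, always leading with the system message."""
    messages = [{"role": "system", "content": system_context}]
    messages += [{"role": "user", "content": turn} for turn in user_turns]
    return messages

payload = make_messages(["Recall the bootstrapladder data set."])
# This list would be passed as the `messages=` argument of a
# chat-completion request; no network call is made in this sketch.
print(payload[0]["role"])
```

Putting persistent instructions in the system message is exactly the "context section" trick described above, just expressed through the API instead of the Playground UI.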
The latest (May 12th) update has had a negative impact on prompt memory, general reply relevance, and the ability to clearly navigate semi-complex prompts. Pre-update, the quality was noticeably better, albeit somewhat slower, which I would take any day. "Slow is smooth and smooth is fast" sort of thing.
I've noticed something similar when dealing with very long logic problems. The most recent update seems to have been a big step backward for GPT-4. Really a bummer. I wish we had more control over version access.