Per-chat memory configuration

OpenAI recently expanded the memory feature to allow ChatGPT to recall past conversations. While this is an immensely useful feature, it also comes with some potential drawbacks.

In my experience, when asking certain programming questions there are many instances where starting a brand-new chat leads to a higher-quality conversation. The reasons range from the model having provided incorrect or inaccurate information earlier in the chat, to simply wanting a fresh context.

There may be instances where we do not want the model to refer to previous conversations. However, according to the article, the only ways to prevent this are disabling memory, deleting past conversations, archiving them, or using a temporary chat. None of these is ideal:

- Disabling memory is annoying if we want to keep personalized memory enabled.
- Deleting chats isn't ideal if we want to keep the conversation history, especially to refer back to it or continue it later.
- Archiving isn't ideal if we wish to continue the conversation or easily refer back to it without navigating extra menus. Not to mention that chats need to be unarchived, and then archived again, each time one wishes to continue them.
- A temporary chat disables memory, but it disables the entirety of memory, not just previous-chat memory. Temporary chats also do not persist in the chat list and sometimes end up being deleted on OpenAI's servers while the chat is still in use.

There appears to be no solution that covers the use case I mentioned. Therefore, I request a per-chat context menu that lets us toggle whether the model may refer to previous chats during that conversation. This strikes me as a simple solution that gives users flexibility.


An example from my recent interactions: I had an error in my code and asked the model for help debugging it. The conversation wasn't very productive, and ChatGPT was giving me incorrect information. We ended up delving deeper into specific related topics, which fortunately led to solving my issue, but then I hit a new error. By this point the conversation contained multiple topics with a bunch of wrong information mixed in. While I could have gone up a few messages and edited one to create a new branch in the conversation, I decided I would likely get better results solving this new issue in a fresh conversation. Since this was a new error, there was a chance the model would avoid producing incorrect information this time around, and that is what happened. I now have two separate but closely related conversations, and I can easily continue the topics discussed in both.

While it hasn't happened yet (as far as I can tell, at least), this new chat-history memory feature raises the chances of such interactions no longer being as smooth. I started a new chat specifically to avoid carrying over the context of the previous conversation. Now there is a possibility that the model will refer back to that very conversation, which is exactly what I want to avoid. This could degrade conversation quality for some users.

I think that ultimately, giving users control over their individual conversations will lead to a much better user experience. If anything, this feature could be exclusive to paying customers as an added incentive to go Plus.