However, GPT-4 with Code Interpreter has an 8K token limit; you can check this yourself. Here is the proof.
The question is: why doesn't OpenAI use any more advanced type of conversation memory instead of relying only on the context window? What other conversation memory approaches have you found useful, and how do you personally chat with GPT when you need it to remember many details about you and your case?
For example, I ask it to summarize the conversation in YAML format when I see that our conversation has grown longer than the context limit.
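To make the idea concrete, here is a minimal sketch of that "fold older turns into a YAML summary" strategy. Everything here is an assumption for illustration: the 8K limit constant, the crude 4-characters-per-token estimate (a real tokenizer like tiktoken would be more accurate), and the fact that the summary is built mechanically from first lines, whereas in practice you would ask the model itself to write the YAML summary.

```python
TOKEN_LIMIT = 8000  # assumed context window size


def approx_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return len(text) // 4


def compress_history(messages: list[dict], limit: int = TOKEN_LIMIT) -> list[dict]:
    """If the conversation exceeds the token limit, keep the most recent
    turns verbatim and fold everything older into one YAML-style summary
    message placed at the front of the history."""
    total = sum(approx_tokens(m["content"]) for m in messages)
    if total <= limit:
        return messages

    budget = limit // 2  # reserve half the window for recent turns (assumption)
    recent: list[dict] = []
    used = 0
    i = len(messages)
    # Walk backwards, keeping recent messages until the budget is spent.
    while i > 0 and used + approx_tokens(messages[i - 1]["content"]) <= budget:
        i -= 1
        used += approx_tokens(messages[i]["content"])
        recent.insert(0, messages[i])

    # Everything before index i gets condensed into a YAML summary.
    # Here we just take each turn's first line; with a real chat you would
    # prompt the model to produce this summary instead.
    summary_lines = ["summary:"]
    for m in messages[:i]:
        first_line = (m["content"].splitlines() or [""])[0][:60]
        summary_lines.append(f"  - {m['role']}: {first_line}")

    summary = {"role": "system", "content": "\n".join(summary_lines)}
    return [summary] + recent
```

On each new turn you would run the full history through `compress_history` before sending it, so the prompt always fits while the YAML summary preserves the gist of what was dropped.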