I’m not sure if I’m using the right phrase, “context window”, but I’ve seen it mentioned in interviews, and from what I can understand it’s basically the amount of previous chat messages the model can hold in memory before GPT forgets.
If you use it for a long time (4-5 hours) you start to sense when it’s beginning to forget things, or roughly how far back in the conversation it no longer remembers.
I find you can sort of manage this by continually reminding it of key information, so it doesn’t get forgotten as the conversation goes on.
Also, a side question: is the “context window” related to the tokens the model has? Would a 4k token model have a smaller “context window” compared to a 32k model? Would a 1m token model be able to keep an extremely good memory of a very long conversation?
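For what it’s worth, my mental model of why old messages fall out of the window looks something like the sketch below. This is just an illustration, not how OpenAI actually does it: the `fit_to_window` function and the word-count approximation are my own made-up stand-ins (real models count subword tokens, not words), but it shows why a bigger token budget means the model can “remember” further back.

```python
# Rough sketch: a fixed token budget forces the oldest messages out.
# Tokens are approximated here as whitespace-separated words; real
# models use subword tokenizers, so actual counts differ.

def approx_tokens(text):
    return len(text.split())

def fit_to_window(messages, max_tokens):
    """Keep the most recent messages whose total (approximate) token
    count fits in max_tokens; older ones are dropped, i.e. 'forgotten'."""
    kept, total = [], 0
    for msg in reversed(messages):          # walk backwards from newest
        cost = approx_tokens(msg)
        if total + cost > max_tokens:
            break                           # budget exhausted: older history is lost
        kept.append(msg)
        total += cost
    return list(reversed(kept))

chat = ["my name is Alice", "tell me a story", "make it longer", "what is my name?"]
print(fit_to_window(chat, 8))
# → ['make it longer', 'what is my name?']  (the message with the name got dropped)
```

Under this picture, “reminding it of key information” works because the reminder re-enters the window as a recent message, and a 4k vs 32k vs 1m budget just changes how much history survives the cut.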