Token limit at chat.openai.com doubled?

As you may know, GPT-4's token limit is 8K tokens, but did you know that the token limit for GPT-4 at chat.openai.com is only 4K?

You can verify it yourself with a token counter plugin like this one.
Here is a conversation with the proof.

However, GPT-4 with Code Interpreter has an 8K token limit; you can check this yourself as well. Here is the proof.

The question is: why does OpenAI not use any more advanced type of conversation memory, relying only on the context window? What other conversation memory approaches have you found useful, and how do you personally chat with GPT when you need it to remember many details about you and your case?

For example, I ask it to summarize the conversation in YAML format when I see that our conversation is getting longer than the limit.
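That summarize-and-continue trick can also be automated outside the chat UI. Here is a minimal sketch of rolling-summary memory: it keeps the most recent messages verbatim and folds older ones into a summary once a token budget is exceeded. The `summarize()` function and the 4-characters-per-token estimate are my own assumptions for illustration, not anything OpenAI actually does; in practice `summarize()` would be an LLM call.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # A real implementation would use a proper tokenizer.
    return max(1, len(text) // 4)

def summarize(messages):
    # Hypothetical stand-in for an LLM call such as
    # "Summarize the conversation below in YAML format."
    return "summary of %d earlier messages" % len(messages)

def compact_history(messages, budget=4000, keep_recent=4):
    """Fold older messages into one summary once the token budget is exceeded."""
    total = sum(estimate_tokens(m) for m in messages)
    if total <= budget or len(messages) <= keep_recent:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    return [summarize(older)] + recent
```

With a 1000-token budget, a 20-message history collapses to one summary plus the last four messages, so the model still sees recent context verbatim while older detail survives only in compressed form.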
