I asked ChatGPT what its token size is, and it replied 4k. It says its latest data cut-off is April 2023, so it's the new version.
I ran some experiments: I gave it a 6k-token text to remember, then had various conversations with it for about 20k tokens (combined input/output, if that matters).
I then asked it to repeat my original 6k-token text back to me, and it said it couldn't. I asked if it could try again, and then it did.
The full text I gave it was repeatable (although it was too long for one message).
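To run an experiment like this you need at least a rough token count for your test text. A minimal sketch using OpenAI's published rule of thumb of roughly 4 characters per token (exact counts would require a tokenizer library such as tiktoken; the helper name here is my own):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters/token rule of thumb.
    For exact counts, use a real tokenizer (e.g. tiktoken) instead."""
    return max(1, len(text) // 4)

# A "6k token" test document would be roughly 24,000 characters long:
doc = "x" * 24_000
print(estimate_tokens(doc))
```

This is only good for sizing a test document; actual token counts can differ noticeably depending on the language and content of the text.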
So I guess this means the 128k tokens are there, but ChatGPT doesn't quite know it, and fights back a bit.
I was wondering whether this hinders its ability to reply, if it doesn't believe it can look further back into the context of a conversation. If it were me and I "knew" I could not access memories older than a day, why would I even try to?
Any insights to this?
ChatGPT uses whatever model ChatGPT uses.
GPT-4 in ChatGPT is the full-release standard 8k model; otherwise, the promise of ChatGPT Enterprise getting an enhanced 32k context would be rather shallow.
OpenAI may also be testing models on select users, operating within the same prescribed limits.
OpenAI can also use conversation management techniques that allow retrieval of old chat data, while aggressively pruning that chat which is not relevant.
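One such conversation-management technique can be sketched as follows: keep the system prompt, then add messages from newest to oldest until a token budget is exhausted, silently dropping older middle turns. This is an assumption about how such pruning might work, not OpenAI's actual implementation; the function and the crude 4-chars-per-token counter are illustrative only:

```python
def prune_history(messages, budget, count_tokens):
    """Keep the first (system) message plus the most recent messages that
    fit within `budget` tokens; older middle turns are dropped."""
    system, rest = messages[0], messages[1:]
    kept, used = [], count_tokens(system)
    for msg in reversed(rest):          # walk newest to oldest
        cost = count_tokens(msg)
        if used + cost > budget:
            break                       # everything older is pruned
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))

count = lambda m: len(m) // 4           # crude ~4 chars/token estimate
history = ["SYSTEM", "old turn " * 50, "recent turn", "latest turn"]
print(prune_history(history, budget=20, count_tokens=count))
```

A scheme like this would explain the behaviour in the original post: the old 6k-token text may be pruned from the active window by default, yet re-retrievable when the user explicitly asks for it.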
The AI won’t be able to answer about itself regardless.
I listened to the OpenAI conference again, and at 19:15 Sam Altman says:

> A small one: ChatGPT now uses GPT-4 Turbo, with all the latest improvements, including the latest knowledge cutoff, which will continue to update. That's all live today.
Which, to my mind, means that ChatGPT should use 128k context. However, I did some similar experiments to yours, and it looks like my ChatGPT only uses 4k context.
So is the 128k context only for API calls, and only if you reach some tier by paying enough money?
It appears that every ChatGPT Plus user has access to GPT-4 Turbo with a 128k-token context, but only via the API or the Playground.
I looked into it a little more closely, and it turns out the matter is not so simple. We are dealing with different usage tiers here, and the API is limited to a maximum of 4,096 output tokens.
- Tier 0: 10,000 TPM
- Tier 1 ($5 paid): 20,000 TPM
- Tier 2 ($50 paid and 7+ days since first successful payment): 40,000 TPM
- Tier 3 ($100 paid and 7+ days since first successful payment): 80,000 TPM
- Tier 4 ($250 paid and 14+ days since first successful payment): 300,000 TPM
- Tier 5 ($1,000 paid and 30+ days since first successful payment): 300,000 TPM
Apparently to be able to use the full window of 128k tokens (max GPT4 Turbo) you need to be Tier 4+.
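The tier gate follows directly from the arithmetic: a single full-window request alone consumes 128k tokens, so it can only succeed on tiers whose per-minute allowance is at least that large. A quick check, with the TPM values copied from the list above (assumed accurate at the time of this thread):

```python
# Per-tier TPM (tokens-per-minute) limits, as listed in this thread
TIER_TPM = {0: 10_000, 1: 20_000, 2: 40_000, 3: 80_000, 4: 300_000, 5: 300_000}
CONTEXT_WINDOW = 128_000  # GPT-4 Turbo's advertised context

# A request using the full window needs at least that many tokens per minute.
tiers_that_fit = [t for t, tpm in TIER_TPM.items() if tpm >= CONTEXT_WINDOW]
print(tiers_that_fit)
```

Only Tiers 4 and 5 clear the 128k threshold, which matches the "Tier 4+" observation.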
So it looks like the 128k window will be available only in a very limited way, for some users; Sam Altman seems to have forgotten to mention that during the conference.
On closer inspection, the update is not nearly as revolutionary as it looked at first.