ChatGPT-4: Why No New Features?

Is there any official announcement regarding why GPT-4 has the same character limit as GPT-3? And why doesn’t it support images, as shown in the demo video?

GPT-4 has 8k and 32k context window sizes, which are 2x to 8x longer than GPT-3’s. In the FAQ on their Discord server, they mention that for images, “This feature is not yet available to the public” – so expect it sometime later.

The bot on Discord can already interpret images?

“GPT-4 has 8k and 32k context window sizes, which are 2x to 8x longer than GPT-3’s”

What do the 8k and 32k window sizes mean? Tokens? Are tokens counted by words and punctuation?

ChatGPT 3 had 4096 tokens, right? That’s for the question and answer combined.

Several questions and answers it gave me used far fewer than 1000 tokens, and it still cut off in the middle of the answer.

Tokens, yes. They break words into tokens, and some words are represented by multiple tokens. A rough rule-of-thumb is that X tokens is 0.75*X English words.
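
If you want an exact count for your own text, here is a minimal sketch using OpenAI’s tiktoken library (the sample sentence is just an illustration):

```python
# pip install tiktoken
import tiktoken

# cl100k_base is the encoding used by gpt-4 and gpt-3.5-turbo
enc = tiktoken.get_encoding("cl100k_base")

text = "The quick brown fox jumps over the lazy dog."
tokens = enc.encode(text)

print(f"{len(text.split())} words -> {len(tokens)} tokens")
```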

Yes, ChatGPT through the API has 4096 tokens. The other big one, davinci, is at 4k tokens as well.

It can stop short of your max tokens if it thinks it has answered the prompt. There is no way to force it to answer up to your max tokens, but there are tricks you can use in the prompt to increase the length of the output. You can search the forums here for those tricks.
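
For API users, here is a minimal sketch with the 2023-era openai Python library (the v0.x interface; the model name, key placeholder, and prompt are just examples). Note that max_tokens only caps the completion, and the model may still stop earlier with finish_reason of “stop”:

```python
import openai  # 2023-era library (openai.ChatCompletion is the v0.x interface)

openai.api_key = "sk-..."  # your key here

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Write a long essay on tokenization."}],
    max_tokens=1000,  # caps the *output* only; the model may still stop early
)

# "stop" = the model chose to end; "length" = it hit the max_tokens cap
print(response["choices"][0]["finish_reason"])
print(response["choices"][0]["message"]["content"])
```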

Apparently, the tokens are not just the answers, but questions + answers + context needed.

So in a conversation, it will use many more tokens than in a single question and answer.

It seems one tip is to EDIT your first question, summarize details from the rest of the conversation in it, and then save and resubmit.
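
To see how the conversation itself eats the budget, here is a rough sketch that counts tokens across a growing messages array (the ~4 tokens of per-message overhead is an approximation borrowed from OpenAI’s cookbook examples, not an exact spec):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# A hypothetical conversation: every turn stays in the prompt on each call
messages = [
    {"role": "user", "content": "What is a token?"},
    {"role": "assistant", "content": "A token is a chunk of text, roughly 3/4 of a word."},
    {"role": "user", "content": "How many tokens fit in GPT-4?"},
]

# Rough count: content tokens plus ~4 tokens of per-message overhead
total = sum(len(enc.encode(m["content"])) + 4 for m in messages)
print(f"~{total} prompt tokens before the model even starts answering")
```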

Normally, you are billed on total tokens (Input + Output). But in the API, the max_tokens parameter only applies to the output. Also, new with GPT-4, the price of output tokens is 2x the price of input tokens. But in any case, you cannot exceed the max tokens allocated to the model, which is always defined as the Input + Output tokens. This is the 4k, 8k, or 32k, depending on the model.
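
Putting that budget arithmetic into a sketch (the window sizes are the ones discussed in this thread; the function name is just illustrative):

```python
# Context window = input tokens + output tokens, per model
CONTEXT_WINDOWS = {"gpt-3.5-turbo": 4096, "gpt-4": 8192, "gpt-4-32k": 32768}

def max_completion_budget(model: str, prompt_tokens: int) -> int:
    """Largest max_tokens you can request without exceeding the window."""
    return CONTEXT_WINDOWS[model] - prompt_tokens

print(max_completion_budget("gpt-4", prompt_tokens=6000))      # 2192 tokens left
print(max_completion_budget("gpt-4-32k", prompt_tokens=6000))  # 26768 tokens left
```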

Yes, this is powerful and token-saving, and it’s basically what ChatGPT does behind the scenes, so do it in your API calls as well if you want to mimic this behavior.
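
Here is a minimal sketch of that pattern, assuming a summarize() helper of your own (it could itself be another, cheaper completion call): keep the latest turns verbatim and fold everything older into one summary message:

```python
def compress_history(messages, summarize, keep_last=2):
    """Fold all but the last few turns into a single summary message.

    `summarize` is a hypothetical callable you supply, e.g. another
    (cheaper) completion call that condenses the old turns into a paragraph.
    """
    old, recent = messages[:-keep_last], messages[-keep_last:]
    if not old:
        return messages  # nothing to compress yet
    summary = summarize(old)
    return [{"role": "system",
             "content": f"Summary of the conversation so far: {summary}"}] + recent
```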

Dumb question but how do you join that Discord?

Here is how you join. Click this link:

I don’t use the API, however. I am talking about using the chat mode.

It was recently revealed that ChatGPT is really GPT-4 with 8k tokens. I think it’s confusing, but here is the source of that comment:

ChatGPT still allows you to select the model. If you select the GPT-4 model, it will have the 8k tokens. There is no “apparent” limit, nor is it charged if you use more.

It’s just that it can PROCESS, at a single time, 4096 tokens in ChatGPT 3 and 8k using the ChatGPT 4 model.

@rogerpenna Agreed, it is confusing. The ChatGPT API is limited to 4k tokens. But Microsoft is also saying ChatGPT is really GPT-4, presumably with 8k tokens. So it looks like the API version really is the GPT-3.5 vintage, and to get “ChatGPT” you need to use the GPT-4 version with 8k tokens.

Clear as mud, right? :rofl:

Hi Curt,

I think you wrote the above formula in a bit of a confusing way (even though I know for a fact you know better), because you say tokens ~= 0.75 words. Tokens can be thought of as pieces of words (one token is roughly 3/4 of a word), so there will always be more tokens than words.

I know what you meant to say, but you used the * (multiplication) symbol, and you then accidentally multiply 0.75 * the number of words to get tokens.

In your “rough rule-of-thumb” above, if taken literally, there will always be more words than tokens.

So, instead of tokens ~= 0.75 * words, it should be: tokens ~= words / 0.75, OR words ~= tokens * 0.75.

Reference:

What are tokens?

Tokens can be thought of as pieces of words. Before the API processes the prompts, the input is broken down into tokens. These tokens are not cut up exactly where the words start or end - tokens can include trailing spaces and even sub-words. Here are some helpful rules of thumb for understanding tokens in terms of lengths:

  • 1 token ~= 4 chars in English

  • 1 token ~= ¾ words

  • 100 tokens ~= 75 words

HTH

:slight_smile:

@ruby_coder

How about:

Words ~ 0.75*Tokens, where Words are in English.

I come from a math background, and the coefficient leads the variable.

Good news is that multiplication over the reals is commutative, so

Words ~ 0.75*Tokens = Tokens*0.75

But if you want to know the truth, I really like:

Words = 3*Tokens/4 … but I thought that was overkill :rofl:

Yes.

This also helps folks understand that with the expected 32K tokens of gpt-4-32k, the approximate word count is 24k (give or take a lot, of course).

So, when someone tries to send a 24K-word prompt (messages array), it will likely fail, because the total token count covers both the messages array and the completion (response), not to mention the user’s max_tokens setting when they submit the chat completion.
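
To make that concrete with the rule of thumb above: 24,000 English words ≈ 24,000 / 0.75 = 32,000 tokens, which nearly fills the 32,768-token window and leaves only about 768 tokens for the completion.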

I have been thinking for the past week about starting a wiki page here (everyone can edit the page) so we can hopefully consolidate these facts, which seem to confuse so many here.

:slight_smile:

@ruby_coder

But personally, I like:

[Screenshot of a formula, 2023-03-16 at 8:02 PM]

It’s actually better in that it errors slightly on the side of caution :sunglasses:

Yeah, and it looks more scientific, haha

:slight_smile:

Overall, I’d say it’s important to know that the tokens-to-words parity in the documentation is an approximation: tokenization is not based on words, but on character sequences.

It’s true that these character sequences are usually full words, but in some cases they are not. For example, “876” is one token while “878” is two tokens (“8” as the first token and “78” as the second).
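
You can check this yourself with tiktoken; note that the exact split depends on the encoding, so the “876”/“878” example above may only hold for the GPT-3-era tokenizer:

```python
import tiktoken

# p50k_base is a GPT-3-era encoding; cl100k_base is used by gpt-4 and
# gpt-3.5-turbo. The same string can split differently between them.
for name in ("p50k_base", "cl100k_base"):
    enc = tiktoken.get_encoding(name)
    for s in ("876", "878"):
        token_ids = enc.encode(s)
        pieces = [enc.decode([t]) for t in token_ids]
        print(name, repr(s), "->", len(token_ids), "token(s):", pieces)
```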