What is the OpenAI algorithm to calculate tokens?

Yes, it does; these are algorithms, per the document. They are rules to estimate token count:

  • 1 token ~= 4 chars in English
  • 1 token ~= ¾ words
  • 100 tokens ~= 75 words

In the method I posted above (to help you, @polterguy), I only used two criteria:

  • 1 token ~= 4 chars in English
  • 1 token ~= ¾ words

You can modify as you like; a sketch follows below.
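Since the method itself is not reproduced in this thread, here is a minimal sketch of what such an estimator might look like. The function name estimate_tokens and the "min" / "average" / "max" switch are my assumptions, based on the usage shown later in the thread:

def estimate_tokens(text, method="average"):
    # Rough token estimator built from the OpenAI rules of thumb above.
    # NOTE: a reconstruction, not necessarily the original poster's exact code.
    tokens_by_chars = len(text) / 4.0           # 1 token ~= 4 chars in English
    tokens_by_words = len(text.split()) / 0.75  # 1 token ~= 3/4 words
    if method == "average":
        return int((tokens_by_chars + tokens_by_words) / 2)
    if method == "min":
        return int(min(tokens_by_chars, tokens_by_words))
    if method == "max":
        return int(max(tokens_by_chars, tokens_by_words))
    raise ValueError("method must be 'average', 'min' or 'max'")

With int() truncation, this reproduces the 11 / 10 / 12 results quoted for the Gretzky example below.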

HTH

Note:

“An algorithm is a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.”

These are OpenAI GPT “rules” to estimate token count, per the doc provided above, so they are a type of algorithm, and they are what I used in the code I wrote during breakfast to assist you.

  • 1 token ~= 4 chars in English
  • 1 token ~= ¾ words

There is a much more accurate way using the GPT-2 tokenizer library. I’ll find the link. Then you will get close to the same count OpenAI uses. (No need to estimate.)
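While waiting for that link: one common way to run a GPT-2 tokenizer locally is the Hugging Face transformers package. This is an assumption on my part; it may not be the exact library meant above:

from transformers import GPT2TokenizerFast

# downloads the GPT-2 vocabulary and merge rules on first use
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

text = "You miss 100% of the shots you don't take"
# exact GPT-2 BPE token count (still only an approximation for GPT-3 models)
print(len(tokenizer(text)["input_ids"]))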


Also note, if you take the examples in the OpenAI docs:

  • Wayne Gretzky’s quote “You miss 100% of the shots you don’t take” contains 11 tokens.

So, let:

text = "You miss 100% of the shots you don't take"

estimate_tokens(text, "average")
=> 11

estimate_tokens(text, "min")
=> 10

estimate_tokens(text, "max")
=> 12

So one could think, for this very small sample of text, that “average” is closer to how OpenAI gets its final count.

However, more trials and tests are needed; but I think for many developers “average” is a good guess, and if you want to be conservative, use “max”.

Hope this helps.

Hopefully, the link you plan to provide has code which developers can download / copy and use in their projects.

If so, I will add it and test.

Last time I checked, the GPT-2 tokenizer was not “perfectly accurate” either; but that was using a third-party online form, which I cannot include in code as a developer.


I also reverse-engineered the tokenizer on the Playground.

It is written in JavaScript. I converted it to C#.


Why not post it and share?

That would be helpful, I think.

Sharing code in a developer community helps everyone in the community.

🙂

Thank you, it surely helps, but I need exact counts 😕

But thank you, you’ve been a champ … 🙂

THAT would be SUPER interesting, as in getting the C# version, assuming it’s 100% “perfect” in its calculations 🙂

I’m happy to release the C# version.

There are differences that I have not nailed down. I am using the exact same token libraries. Another member gave me a suggestion that I never got time to follow through on.

I’ll extract it from my main app and package it up for others to use soon. (It’s embedded in my larger app at the moment)

Thank you!

We need a new OpenAI community rule, which I plan to propose to @logankilpatrick: all Q&A should take place in the community, not via email or private, direct messages. We should also encourage everyone to post code, because this is a developer community and the language of developers is code.

This kind of rule, that all Q&A should occur in the public forums, is normal for tech communities, because the purpose of having a public community is to share code and work on developer solutions in public, creating a knowledge base for current and future users.


I don’t think you will find any tokenizer which is “100%” perfect at counting tokens all the time, @polterguy. I write a lot of code, and I find the token approximations work fine, especially because you cannot control exactly how the GPT model will reply or exactly how many tokens it will use in a reply.

Of course, I would love to see code which does not use an API to estimate GPT-2 token count; I prefer to estimate tokens without making another network API call, because the more network API calls made, the greater the chance of errors, costs, and delays.

According to the GPT-3 docs I have read, using a GPT-2 tokenizer is also an approximation for GPT-3 (not exact), and the most recommended way to get an exact GPT-3 token count is to submit an API call to a GPT-3 endpoint.
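For example, with the legacy (pre-v1) openai Python package, the completion response reports the token usage counted server-side, so it is exact for the model called. This is a sketch; the model name and key are placeholders:

import openai

openai.api_key = "sk-..."  # placeholder; use your own API key

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="You miss 100% of the shots you don't take",
    max_tokens=5,
)

# token counts computed by the API itself
print(response["usage"]["prompt_tokens"])
print(response["usage"]["total_tokens"])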

Hence, that is why OpenAI offers the “rules to estimate” token count, I think at least.

HTH

Here is the correct link.

Previous post deleted to avoid confusion.

I couldn’t agree more. I’ve got 10K commits to GitHub over the last 5 years or so, and everything we’re doing in the AI space is 100% Open Sauce ==> GitHub - polterguy/magic: Generate a web app in seconds with Low-Code and AI

So I totally agree!

OpenAI tokenizer

Respect!

I am very biased toward coders and wish more people here would “speak” through code and working examples!

After all, this is supposed to be a community of developers / coders …

Say we put a sample /etc/hosts file into the tokenizer.

127.0.0.1	localhost
127.0.1.1	ha-laptop

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

It says this would parse to 75 tokens. The sample above has 189 characters, so if we take their estimate of tokens = chars / 4 we would get 47 tokens. If we count the words we find it has 22, so 22 / 0.75 ≈ 29 tokens.
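Here is that arithmetic as a quick sketch, in case I have miscounted:

# reproducing the estimates above (the web tokenizer reported 75 tokens)
hosts = """127.0.0.1\tlocalhost
127.0.1.1\tha-laptop

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters"""

print(len(hosts))                        # character count (~189)
print(len(hosts) // 4)                   # char-based estimate: ~47 tokens
print(round(len(hosts.split()) / 0.75))  # word-based estimate: ~29 tokens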

Can anyone please help explain why this is?

Special characters, such as “<>,.-”, often take one token each. So the more special characters you have, the more tokens you get compared to plain alphabetic text.
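You can see this directly with tiktoken (a sketch; the exact counts depend on the encoding used):

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# plain words usually cost few tokens; IP addresses and punctuation-heavy
# strings split into many more short tokens
for s in ["localhost", "127.0.0.1", "ff02::1", "ip6-loopback"]:
    print(s, "->", len(enc.encode(s)), "tokens")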

Well, you are picking a “corner-case sample to quibble about”, @smahm.

You can see that in your example the words are not typical words you find in text, so you are “picking” on a special corner case.

Not sure what your point is. If you need an accurate token count, you should use the Python tiktoken library and get the exact number of tokens.

You are using a generalized rough-guess method, applying it to a corner case, and then commenting on the lack of accuracy. Not sure why, to be honest.

Here is a “preferred method” to get tokens (chat completion API example using “turbo”):

import tiktoken
import sys

def tik(words):
    # cl100k_base is the encoding used by the chat models (e.g. gpt-3.5-turbo)
    encoding = tiktoken.get_encoding("cl100k_base")
    # or look the encoding up by model name instead:
    # encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
    tokens = encoding.encode(words)
    return tokens

# text to tokenize comes from the first command-line argument
tmp = str(sys.argv[1])
# output the exact token count
print(len(tik(tmp)))
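For example, if the script is saved as tik.py (a hypothetical filename), running python tik.py "You miss 100% of the shots you don't take" prints the exact token count for the cl100k_base encoding.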

Yeah, apologies, I’ll edit my post to be less quibbly. Thank you for the example; I will study the tiktoken library.