Token usage when using openai.chat.completions.create stream: true

Hello,

Similar questions may have been asked before, but I did not find an answer for our case.

We have an application using ReactJS and NodeJS. We send GPT-3.5 and GPT-4 requests to OpenAI with openai.chat.completions.create and stream: true. We would like to count the tokens of prompts and completions as accurately as possible so that we can bill precisely.

Does anyone know how to get or calculate this token usage? Which libraries can be used?

Thank you

Ensure you use the cl100k_base token encoder for chat models.

The official, authoritative OpenAI tiktoken library is in Python.
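If you want to count tokens directly in your Node backend, there are JavaScript ports as well. Here is a rough sketch using js-tiktoken; the +3 per-message overhead and +3 reply-priming tokens follow the OpenAI cookbook counting recipe for gpt-3.5-turbo and gpt-4, so treat the result as a close approximation rather than an exact guarantee:

```ts
import { getEncoding } from "js-tiktoken";

type ChatMessage = { role: string; content: string; name?: string };

// Approximate prompt token count for gpt-3.5-turbo / gpt-4 chat requests,
// following the OpenAI cookbook recipe: ~3 overhead tokens per message,
// +1 if a name is present, +3 for priming the assistant's reply.
function countPromptTokens(messages: ChatMessage[]): number {
  const enc = getEncoding("cl100k_base");
  let total = 3; // every reply is primed with <|start|>assistant<|message|>
  for (const m of messages) {
    total += 3;
    total += enc.encode(m.role).length;
    total += enc.encode(m.content).length;
    if (m.name) total += enc.encode(m.name).length + 1;
  }
  return total;
}

// Example:
// countPromptTokens([{ role: "user", content: "Hello, how are you?" }]);
```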

Thank you.

It seems that js-tiktoken also points to github.com/dqbd/tiktoken. Are @dqbd/tiktoken and js-tiktoken the same project?

Looks like one just has a few different commits, and both have dependencies. They are from the same developer.
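As far as I know, the practical difference is that js-tiktoken is a pure JS/TS port, while @dqbd/tiktoken uses WASM bindings and expects you to free the encoder when you are done. A minimal comparison sketch:

```ts
import { getEncoding } from "js-tiktoken";      // pure JS/TS port, nothing to free
import { get_encoding } from "@dqbd/tiktoken";  // WASM bindings, requires free()

const jsEnc = getEncoding("cl100k_base");
console.log(jsEnc.encode("hello world").length);

const wasmEnc = get_encoding("cl100k_base");
console.log(wasmEnc.encode("hello world").length);
wasmEnc.free(); // release the WASM-side memory
```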

So I should try js-tiktoken first, right?

Personally, if you have a server that can run it, I'd set up an API for tiktoken in Python and just make authenticated or internal calls to it.
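From the Node side, the call could look something like this. The host, route, payload shape, and auth header here are made up for illustration; match whatever your Python tiktoken service actually exposes:

```ts
// Hypothetical call from the Node backend to an internal Python tiktoken
// service. Endpoint, payload, and auth are assumptions, not a real API.
async function countTokensRemote(text: string): Promise<number> {
  const res = await fetch("http://tokenizer.internal:8000/count", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.TOKENIZER_API_KEY}`,
    },
    body: JSON.stringify({ text, encoding: "cl100k_base" }),
  });
  if (!res.ok) throw new Error(`Tokenizer service error: ${res.status}`);
  const { count } = await res.json();
  return count;
}
```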


Do you mean setting up a server that hosts openai/tiktoken in Python and exposes an API that my backend can call?

Yes, that would go along with the server that forwards your client application's requests to OpenAI, so you don't do silly things like putting API keys in your app, where they can be stolen and your account emptied.
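Concretely, that backend could look something like the sketch below. It assumes the openai Node SDK v4 and js-tiktoken; since the streamed chunks don't carry a usage field, the completion text is accumulated and counted after the stream finishes, which is again an approximation:

```ts
import OpenAI from "openai";
import { getEncoding } from "js-tiktoken";

// The API key stays on the server; the ReactJS client never sees it.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const enc = getEncoding("cl100k_base");

async function streamAndCount(
  messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[]
) {
  const stream = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages,
    stream: true,
  });

  let completionText = "";
  for await (const chunk of stream) {
    const delta = chunk.choices[0]?.delta?.content ?? "";
    completionText += delta;
    // forward `delta` to the ReactJS client here (SSE, WebSocket, ...)
  }

  // Count completion tokens once the stream has finished.
  const completionTokens = enc.encode(completionText).length;
  return { completionText, completionTokens };
}
```

Prompt tokens can be counted with a recipe like the countPromptTokens sketch earlier in this thread (or via the Python service) and added to this to get the per-request total for billing.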