The response looks like this:
{
  "choices": [
    {
      "delta": {
        "content": "Hab"
      },
      "finish_reason": null,
      "index": 0
    }
  ],
  "created": 1680676704,
  "id": "chatcmpl-.....",
  "model": "gpt-3.5-turbo-0301",
  "object": "chat.completion.chunk"
}
{
  "choices": [
    {
      "delta": {},
      "finish_reason": "stop",
      "index": 0
    }
  ],
  "created": 1680676704,
  "id": "chatcmpl-.....",
  "model": "gpt-3.5-turbo-0301",
  "object": "chat.completion.chunk"
}
When I send an API call with stream=False, token usage information is included in the response body. But when I set stream=True, there is nothing like that in the response. How can I calculate the number of tokens used myself?
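One approach is to reconstruct the completion text from the streamed chunks yourself and tokenize it afterwards. A minimal Python sketch, using hand-made stand-in chunks shaped like the JSON above rather than a live API response:

```python
# Reconstruct the assistant's reply from streamed chunks so it can be
# tokenized afterwards.  The data below mirrors the JSON structure shown
# in the post; it is stand-in data, not a live API response.

def join_stream(chunks):
    """Concatenate the delta contents of chat.completion.chunk objects."""
    parts = []
    for chunk in chunks:
        choice = chunk["choices"][0]
        if choice.get("finish_reason") == "stop":
            break
        content = choice["delta"].get("content")
        if content:
            parts.append(content)
    return "".join(parts)

chunks = [
    {"choices": [{"delta": {"content": "Hab"}, "finish_reason": None, "index": 0}]},
    {"choices": [{"delta": {"content": "en"}, "finish_reason": None, "index": 0}]},
    {"choices": [{"delta": {}, "finish_reason": "stop", "index": 0}]},
]
print(join_stream(chunks))  # prints "Haben"
```

Once you have the full text you can run it through a tokenizer to get the completion-token count; the prompt-side count has to be computed separately from the messages you sent.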
sajo
I have the same request. Please provide token usage in stream mode, the way you provide "finish_reason" in the last chunk.
Genuinely trying to be helpful here…
Use tiktoken (GitHub - openai/tiktoken: tiktoken is a fast BPE tokeniser for use with OpenAI's models). Accumulate all the tokens and calculate the count yourself.
wolf
I use the API from JavaScript, not Python, so how can I calculate the request tokens?
By the way, in my opinion the missing token information is an open issue.
wolf
Looks good, but when I include it in my page:
const { encode, decode, encodeChat } = GPTTokenizer_cl100k_base
const chatTokens = encodeChat(chat,'gpt-3.5-turbo')
I get the following error:
gpt-tokenizer:1 Uncaught TypeError: Cannot read properties of undefined (reading 'encodeChatGenerator')
at encodeChat (gpt-tokenizer:1:2093142)
at Object.javascriptFunction (home?session=8369786809309:842:20)
at da.doAction (desktop_all.min.js?v=22.2.4:24:5629)
So can you help me with how to use it?
I’m sorry, I don’t have any personal experience using that package.
I’m a bit busy today, but there’s a chance I can take a look at it tomorrow if you’re still having issues.
wb
Hi, did you find a way of using it in a normal website?