Which output is correct? Playground // PyCharm


Exactly the same problem. Using the Playground, 9142 tokens are consumed, while the statistics from my PyCharm run show 1013. Which one is the real token consumption?
Python code:
# Log the usage reported by the API and the model's reply
logger.info(response.usage)
logger.info(response.choices[0].message)

Use tiktoken (https://github.com/openai/tiktoken, a fast BPE tokeniser for use with OpenAI's models) to count the tokens yourself, and make sure you're actually sending the same request. If you use completions, the </> (show code) button in the Playground shows the exact request being sent. Images probably make everything more complicated.
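Roughly something like this, as a sketch rather than a drop-in solution (it assumes the openai Python client v1+, the gpt-4o model, and a tiktoken version that ships the o200k_base encoding gpt-4o uses; swap in your own model, encoding, and prompt):

import tiktoken
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Explain what a BPE tokenizer does in one paragraph."

# Count the prompt tokens locally, before calling the API.
# gpt-4o uses the o200k_base encoding; older chat models use cl100k_base.
encoding = tiktoken.get_encoding("o200k_base")
local_count = len(encoding.encode(prompt))

# Send the same text to the API and read back the usage it reports.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

print("local tiktoken count:", local_count)
print("prompt_tokens:", response.usage.prompt_tokens)
print("completion_tokens:", response.usage.completion_tokens)
print("total_tokens:", response.usage.total_tokens)

Expect prompt_tokens to come back slightly higher than the local count, since the chat format adds a few formatting tokens per message.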

Thank you very much!!
I'm a beginner at programming. I read in GitHub's introduction that tiktoken can not only improve transmission efficiency but also reduce token consumption. Am I understanding that correctly?

Tiktoken just tokenizes your prompt on your machine, allowing you to count tokens without sending anything to the inference API; it doesn't change the request itself, so it won't reduce your token consumption on its own. It's the same as this tool: https://platform.openai.com/tokenizer, but that page is a little outdated and doesn't include the gpt-4o tokenizer.
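For example, a minimal sketch (the model name is an assumption, and encoding_for_model("gpt-4o") needs a reasonably recent tiktoken release):

import tiktoken

text = "Hello, how many tokens is this sentence?"

# Everything below runs locally; nothing is sent to OpenAI.
encoding = tiktoken.encoding_for_model("gpt-4o")
tokens = encoding.encode(text)

print(len(tokens))              # number of tokens
print(encoding.decode(tokens))  # decodes back to the original text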