How do you get token count with tools input and tool_calls output when streaming

How do you get a token count with the gpt-3.5-turbo-1106 ‘tools’ param?
How do you calculate the tokens consumed by the tools parameter of the latest gpt-3.5-turbo-1106 model? No corresponding calculation example can be found in the documentation.

It would depend on what effect “tools” actually has, but that could be explored.

If you are using function calls, I don’t see a reason why the injected language or the AI generation would be different; they would not retrain an existing AI.

For future use, or for use that might be employed by GPTs or agents: “code interpreter” has been accompanied by its own prompting language in ChatGPT, before and after # tools, rather than a rigid specification, and that prompt language could be fluid. It emits Python as a function, with code not encapsulated in JSON.

Thanks! Maybe there’s something wrong with my presentation. What I want to ask is how to calculate the tokens in function-call mode. The old parameter ‘functions’ has been deprecated; the new parameter is ‘tools’. Regardless of what the called function actually does, I want to know how to calculate the tokens of the input and output in ‘stream’ mode. There seems to be no official developer documentation describing a calculation scheme for this mode.

There has been no documentation on functions, period. You can probably put “namespace” and “@_j” into a forum search and find some of the first revelations.

You can obtain token counts by simply turning the tool specification on and off and taking the difference in reported input tokens. It will likely be the same as a function specified the same way; there are only rare cases where it will vary by a token, depending on your system prompt or a role’s “name” parameter you could vary, and their joinable encoding sequences.
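A minimal sketch of that on/off comparison, assuming the openai v1 Python SDK; the get_weather tool here is a placeholder, not anything from your application:

```python
from openai import OpenAI

client = OpenAI()

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# Placeholder tool specification; substitute your own definitions.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Non-streaming calls report usage, so the prompt_tokens difference
# isolates what the tool specification adds to the input.
with_tools = client.chat.completions.create(
    model="gpt-3.5-turbo-1106", messages=messages, tools=tools, max_tokens=1
)
without_tools = client.chat.completions.create(
    model="gpt-3.5-turbo-1106", messages=messages, max_tokens=1
)

overhead = with_tools.usage.prompt_tokens - without_tools.usage.prompt_tokens
print(f"tool specification input overhead: {overhead} tokens")
```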

The overhead of output is relatively easy to estimate: put the function name and JSON return into a tokenizer, and compare against the usage reported for the invocation that produced that output.
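For example, a sketch of that count with tiktoken (the name and arguments shown are hypothetical values you would read off a non-streamed response):

```python
import json
import tiktoken

# gpt-3.5-turbo models use the cl100k_base encoding.
enc = tiktoken.get_encoding("cl100k_base")

# Hypothetical values, as read from a non-streamed response:
#   call = response.choices[0].message.tool_calls[0]
function_name = "get_weather"              # call.function.name
arguments = json.dumps({"city": "Paris"})  # call.function.arguments

counted = len(enc.encode(function_name)) + len(enc.encode(arguments))
print(f"visible tool-call tokens: {counted}")

# Compare `counted` to usage.completion_tokens from the same non-streamed
# call; the difference is the fixed per-invocation overhead to reuse later.
```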

Then stream. Calculate with your figures.
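For instance, a sketch of the stream-mode bookkeeping under those assumptions; INPUT_OVERHEAD and OUTPUT_OVERHEAD are the figures you calibrated above, not anything the API reports:

```python
from openai import OpenAI
import tiktoken

client = OpenAI()
enc = tiktoken.get_encoding("cl100k_base")

# Your own calibrated figures, not API values.
INPUT_OVERHEAD = 31
OUTPUT_OVERHEAD = 11

# The same placeholder tool specification as in the calibration call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

stream = client.chat.completions.create(
    model="gpt-3.5-turbo-1106", messages=messages, tools=tools, stream=True
)

# Accumulate the pieces the stream actually delivers: plain content plus
# the function name and JSON argument fragments of any tool_calls.
parts = []
for chunk in stream:
    delta = chunk.choices[0].delta
    if delta.content:
        parts.append(delta.content)
    for call in delta.tool_calls or []:
        if call.function.name:
            parts.append(call.function.name)
        if call.function.arguments:
            parts.append(call.function.arguments)

output_estimate = len(enc.encode("".join(parts))) + OUTPUT_OVERHEAD
print(f"estimated completion tokens: {output_estimate}")
# For the input side, count your messages as usual and add INPUT_OVERHEAD
# for the tool specification.
```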


Thanks for the guidance. I will follow this idea and test its effect.