How do we get token usage when streaming responses that include tool calls?

I wanted to know if it's possible to get the token usage for the tool-call request while streaming. What I mean is: when we first send a streaming request, the model processes the prompt and returns a tool call, so that first stream ends there; a second stream then continues after the tool has been executed. With the "include_usage" parameter on the second stream, I can get the usage tokens (they arrive in the last chunk of the stream), but what about the first request, the one used to decide whether the current prompt needs a tool call at all?
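
For reference, here is a rough sketch of the flow I mean, using the Python OpenAI SDK. The model name, the tool definition, and the hard-coded tool result are just placeholders for illustration, not my actual setup:

```python
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # placeholder tool, for illustration only
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# First stream: the model reads the prompt and streams back tool_call deltas,
# then the stream ends. This is the request whose token usage I can't see.
first_stream = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=messages,
    tools=tools,
    stream=True,
)

tool_call = {"id": "", "name": "", "arguments": ""}
for chunk in first_stream:
    if not chunk.choices:
        continue
    for tc in chunk.choices[0].delta.tool_calls or []:
        if tc.id:
            tool_call["id"] = tc.id
        if tc.function.name:
            tool_call["name"] = tc.function.name
        if tc.function.arguments:
            tool_call["arguments"] += tc.function.arguments

# Append the assistant tool call and a stand-in tool result.
messages.append({
    "role": "assistant",
    "tool_calls": [{
        "id": tool_call["id"],
        "type": "function",
        "function": {"name": tool_call["name"], "arguments": tool_call["arguments"]},
    }],
})
messages.append({
    "role": "tool",
    "tool_call_id": tool_call["id"],
    "content": json.dumps({"temperature_c": 21}),  # stand-in for the real tool output
})

# Second stream: with include_usage, the usage arrives in the final chunk,
# which has an empty `choices` list and a populated `usage` field.
second_stream = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=messages,
    tools=tools,
    stream=True,
    stream_options={"include_usage": True},
)

for chunk in second_stream:
    if chunk.usage is not None:
        print("second stream usage:", chunk.usage)  # this part works for me
# ...but how do I get the equivalent usage for the first stream above?
```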