Getting different count of tokens for the same prompt and response

I am using the Assistants API with GPT-4 Turbo. For the same prompt and the same output, I am getting different token counts (both prompt and total tokens).
Four different counts:
Run 1: Prompt tokens: 85561, Total tokens: 85679
Run 2: Prompt tokens: 40910, Total tokens: 41143
Run 3: Prompt tokens: 6792, Total tokens: 7033
Run 4: Prompt tokens: 96149, Total tokens: 96259


Are you using the Assistants API or just GPT-4?

The Assistants API is powered by a non-deterministic AI and exposes no parameters for typical sampling controls such as temperature or top_p, so you can expect each AI generation to differ.

That difference can determine whether the AI emits a function-call token or starts a response to the user, or whether the Python it writes succeeds or produces an error.

Since the Assistant has the agency to perform multi-step function calling until it is satisfied, and other elements such as document injection are out of your control, it is understandable that the number of tokens consumed may differ between runs.

You can see what was actually performed by analyzing run steps, which are available through API calls. This can help you refine the clarity of a task so the goal is reached efficiently.
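One way to compare runs is to pull each run's steps and sum their reported usage. A minimal sketch, assuming the official `openai` Python SDK (where steps come from `client.beta.threads.runs.steps.list(thread_id=..., run_id=...)` and each step's `usage` reports prompt, completion, and total tokens); the step data below is fabricated for illustration:

```python
# Sketch: aggregating token usage across Assistants API run steps.
# The step dicts below stand in for what
# client.beta.threads.runs.steps.list(thread_id=..., run_id=...)
# would return in a live session.

def summarize_usage(steps):
    """Sum token usage across run steps."""
    totals = {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0}
    for step in steps:
        usage = step["usage"]
        for key in totals:
            totals[key] += usage[key]
    return totals

# Illustrative steps: a tool-call step followed by a message-creation step.
steps = [
    {"type": "tool_calls",
     "usage": {"prompt_tokens": 4100, "completion_tokens": 120, "total_tokens": 4220}},
    {"type": "message_creation",
     "usage": {"prompt_tokens": 2692, "completion_tokens": 121, "total_tokens": 2813}},
]

print(summarize_usage(steps))
```

A run that happens to take extra tool-call steps will show a higher prompt-token total, which is one way the same user prompt can consume very different token counts between runs.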


Hello there,

Thanks for your reply.

I am using Assistants API.



The Assistants API makes judgement calls about what data should be retrieved; it is still in beta and will often make different decisions on the same input data.

The system is not yet ready for production environments and should be treated as such; there are up and coming updates to this system that will hopefully improve things.

@Foxabilo :
Can you share with us the timeline for these “up and coming updates”?
When could we expect a production-ready version of the Assistants API?


I wish I knew. Unfortunately, all I do know is that it is being worked on and updates should land this year; I would imagine in the first half of the year, but beyond that I can’t say.