There isn’t clear information on assistant token usage yet. Not in the documentation and not on this forum, I’m afraid.
Still, it would be natural that the pre-set instructions consume tokens on every message, and that the more tools you enable (DALL·E, retrieval, code interpreter) the longer those instructions get. You can try starting a new assistant on gpt-3.5 with all the tools and no additional instructions, and ask it to give you the instructions it has verbatim. That can give you an estimate.
Alternatively, or additionally, you can use your usage page to monitor your billable tokens before and after your interaction, to evaluate the final cost and compare that with the estimate you’d get from the other method. This way you can work out a correlation between how many tokens you input in instructions and how much you’re billed for the 1st, 2nd, Nth message within a thread.
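To make that kind of correlation concrete, here is a rough back-of-envelope sketch. Everything in it is an assumption: the ~4 characters per token ratio, the price constant, and the premise that instructions plus the growing thread history are re-sent as input on every message. Calibrate all of those against your own usage page rather than trusting the numbers here.

```python
# Hypothetical cost estimator for an assistant thread. Assumes the pre-set
# instructions and the full message history are re-billed as input tokens on
# each new message -- that behavior is NOT documented, it's the hypothesis
# we're trying to test against the usage page.

PRICE_PER_1K_INPUT_TOKENS = 0.001  # placeholder gpt-3.5 input rate, USD


def estimate_tokens(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)


def estimate_thread_cost(instructions: str, messages: list[str]) -> float:
    """Estimated input cost if instructions + history are resent per message."""
    total_input_tokens = 0
    history = ""
    for msg in messages:
        history += msg
        total_input_tokens += estimate_tokens(instructions) + estimate_tokens(history)
    return total_input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS


if __name__ == "__main__":
    instructions = "You are a helpful assistant. " * 20  # stand-in system prompt
    msgs = ["First question?", "A follow-up.", "Another follow-up."]
    print(f"estimated input cost: ${estimate_thread_cost(instructions, msgs):.6f}")
```

Comparing this estimate against the actual billed tokens for the 1st, 2nd, and Nth message should quickly tell you whether the "everything is resent each turn" assumption holds, and how far off the chars-per-token heuristic is for your prompts.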
You can see other threads where colleagues are trying to get to the same predictability that you and I are also looking for, without success: