Official tokenizer has huge count difference from OpenAI tokenizer

Are you using software you didn't write yourself, with a built-in token counter that rejects your input as too long before anything is even sent to the API?

The official tokenizer site, where you can select the gpt-3.5-turbo model, gives the correct count for the raw text.
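
You can reproduce that site's count locally with tiktoken; a minimal sketch, assuming the usual mapping of gpt-3.5-turbo to the cl100k_base encoding (the sample text is made up):

```python
import tiktoken

# encoding_for_model resolves gpt-3.5-turbo to cl100k_base,
# the same encoding the web tokenizer uses
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

text = "Hello, how can I help you today?"  # hypothetical sample text
print(len(enc.encode(text)))  # token count for this text alone
```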

Are you only counting your own data? The token limit also covers the system message, the past conversation turns, and any function specifications you include in the request. A rough counting sketch follows.
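
Here is a rough sketch of counting an entire chat payload, assuming the per-message overhead figures from OpenAI's cookbook example (about 3 tokens per message plus 3 for priming the assistant's reply); the exact overhead can vary by model snapshot, and function specifications add further tokens that this sketch does not cover:

```python
import tiktoken

def count_chat_tokens(messages, model="gpt-3.5-turbo"):
    """Approximate the tokens a chat completion request will consume."""
    enc = tiktoken.encoding_for_model(model)
    total = 0
    for message in messages:
        total += 3  # assumed per-message formatting overhead (role, separators)
        for key, value in message.items():
            total += len(enc.encode(value))
            if key == "name":
                total += 1  # assumed extra token when a name field is present
    total += 3  # assumed overhead for priming the assistant's reply
    return total

# Hypothetical conversation: system prompt and earlier turns count too
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Earlier question from the conversation."},
    {"role": "assistant", "content": "Earlier answer from the conversation."},
    {"role": "user", "content": "The new question being sent now."},
]
print(count_chat_tokens(messages))
```

If your counter only looks at the newest user message, it will land well below what the API actually bills for the full request.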