I'm sending requests to an Azure OpenAI GPT-4-32k instance. I want to reduce prompt token counts by removing dead space, minifying the JSON, etc.
When I paste a raw JSON prompt into the OpenAI Tokenizer, I see ~17.5k tokens:
When I paste in a semi-processed JSON prompt (spaces stripped via `replace(' ', '')`, i.e. replaced with nothing), I see ~5.5k tokens:
<can't paste the images here due to new member restrictions>
Pretty substantial diff.
However, when I submit the raw JSON prompt to my Azure OpenAI resource, I see the token usage come back as ~5.5k.
So the question is: does Azure OpenAI optimize the prompt on its end? How is the 17.5k-token prompt being reduced to ~5.5k tokens?
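For reference, the "remove dead spaces" step I'm describing is just re-serializing the JSON compactly. A minimal sketch using Python's stdlib `json` module (the payload keys here are illustrative, not my actual prompt):

```python
import json

# Hypothetical raw prompt: pretty-printed JSON, full of indentation and newlines.
raw_prompt = json.dumps(
    {"messages": [{"role": "user", "content": "Summarize the attached report."}]},
    indent=4,
)

# Minify: re-serialize with no whitespace after separators. This strips the
# dead space (indentation, newlines) without changing the JSON's meaning.
minified_prompt = json.dumps(json.loads(raw_prompt), separators=(",", ":"))

print(len(raw_prompt), len(minified_prompt))
```

The minified string parses back to the exact same object, so the model sees the same content with fewer characters (and, usually, fewer tokens).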