Does the OpenAI fine-tuning platform add additional preprocessing tokens during fine-tuning?

Additional tokens may be added during training, which can cause a slight increase in token count.
What factors contribute to the increase in token count during fine-tuning?
Does OpenAI perform additional token processing, such as inserting separator tokens between individual entries or appending end-of-sequence tokens, that might explain this increase?

Hi there.

There are no additional tokens added during the fine-tuning process. The total number of trained tokens is simply the number of tokens in your training file multiplied by the number of epochs. See also my response to your other question.
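
If you want to sanity-check that formula against the `trained_tokens` figure reported for your job, here is a minimal sketch. It assumes a chat-format JSONL training file (the filename `train.jsonl` and the `epochs` value are placeholders) and uses tiktoken's `cl100k_base` encoding; the count is approximate, since per-message formatting tokens vary by model.

```python
import json
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def count_file_tokens(path: str) -> int:
    """Roughly count the content tokens in a chat-format JSONL training file."""
    total = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            example = json.loads(line)
            for message in example["messages"]:
                total += len(enc.encode(message["content"]))
    return total

epochs = 3  # placeholder; use the n_epochs value from your fine-tuning job
file_tokens = count_file_tokens("train.jsonl")
print(f"~{file_tokens * epochs} trained tokens across {epochs} epochs")
```

If this estimate lands close to the reported total, the difference you're seeing is down to token-counting approximations rather than extra tokens being injected during training.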