Limited max tokens in chat/completions

How can I train gpt-3.5-turbo on my long text? Until now I have always sent the context data before each question, but I am now running into the maximum token limit if I keep doing that. Please help me resolve this problem.

You can only fine-tune base models; there is currently no mechanism to fine-tune gpt-3.5-turbo or gpt-4.
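In the meantime, a common workaround for the token-limit error is to trim the prepended context so that context + question + expected reply fit inside the model's context window. Below is a minimal sketch of that idea; the 4-characters-per-token heuristic, the function names, and the default budgets are my assumptions, not anything from the OpenAI API (for exact counts you would use a real tokenizer such as tiktoken).

```python
def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # This is an assumption; use a real tokenizer for exact counts.
    return max(1, len(text) // 4)

def trim_context(context: str, question: str,
                 context_window: int = 4096, reply_budget: int = 500) -> str:
    """Drop the oldest part of the prepended context so that
    context + question + the expected reply fit in the window."""
    budget = context_window - reply_budget - approx_tokens(question)
    if approx_tokens(context) <= budget:
        return context  # already fits, send unchanged
    # Keep only the most recent characters (the end of the context).
    keep_chars = budget * 4
    return context[-keep_chars:]

# Usage: trim before building the chat/completions messages list.
context = "x" * 100_000          # stand-in for a very long document
question = "What is the summary?"
trimmed = trim_context(context, question)
```

Whether to keep the start or the end of the context (or a retrieved subset of it) depends on your data; for very long documents, splitting the text into chunks and sending only the most relevant ones per question scales better than trimming.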