Classification of the sentences into the categories provided

I have a dataframe with columns Question and Comments, containing 1 lakh (100,000) rows. I also have around 350 categories into which the comments need to be classified. The prompt alone takes around 3,300 tokens.
I tried sending one Question and its Comment per API call, but that means resending the ~3,300-token prompt for every one of the 100,000 rows. Even on a small test dataset, the cost and latency were too high.
I also tried sending multiple comments in a single prompt, but the output is not in a structured format every time.
I would appreciate some help with this problem.
I am using the GPT-3.5 Turbo Azure OpenAI API.

You should learn what embeddings are and how to use them with a vector DB for cosine-similarity search. It might work in this scenario: instead of paying for the 3,300-token prompt on every call, you embed each of your ~350 category descriptions once, embed each comment once, and assign the comment to the nearest category vector. Theoretically this sounds like a semantic-nearness problem, but I might be wrong.
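To make the idea concrete, here is a minimal sketch of nearest-category classification by cosine similarity. The `embed` function below is a toy hashing stand-in so the sketch runs without credentials; in practice you would replace it with a call to an embeddings model (e.g. `text-embedding-ada-002` on your Azure OpenAI resource) and cache the ~350 category vectors. The category names and descriptions are made up for illustration.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 512) -> np.ndarray:
    # TOY stand-in for an embeddings API call -- it just hashes
    # tokens into buckets so this example runs offline. Swap in
    # your real embeddings client here.
    vec = np.zeros(dim)
    for token in text.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Hypothetical category descriptions (your real list has ~350).
# Each is embedded ONCE, up front, not per comment.
categories = {
    "billing": "questions about invoices payments and refunds",
    "login": "problems signing in password reset account access",
    "shipping": "delivery tracking and shipping delays",
}
category_vectors = {name: embed(desc) for name, desc in categories.items()}

def classify(comment: str) -> str:
    # Vectors are unit-normalized, so the dot product IS the
    # cosine similarity; pick the most similar category.
    v = embed(comment)
    return max(category_vectors, key=lambda c: float(category_vectors[c] @ v))

print(classify("I cannot reset my password to sign in"))   # -> login
print(classify("delivery tracking shows delays"))          # -> shipping
```

With real embeddings, each comment costs only one short embedding call instead of a 3,300-token chat prompt, and a vector DB (or even a plain NumPy matrix for 350 categories) handles the similarity search. You could also keep the LLM only for low-confidence cases where the top similarity score falls below a threshold.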