Zero-shot learning on GPT-4 vs. fine-tuning GPT-3.5 for chat?

Is it better to fine-tune GPT-3.5, or to use zero-shot learning with embeddings?

This is for a customer service chatbot.

You cannot fine-tune a GPT-3.5 or GPT-4 model (at least not without direct assistance from OpenAI).

Embeddings are the best approach for a chatbot that answers from your own content.

For your case, few-shot learning combined with embeddings is probably the best option.
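To make the embeddings part concrete, here is a minimal sketch of retrieval over embedded support documents. The vectors and document texts below are toy stand-ins; in practice you would call OpenAI's embeddings endpoint to embed each document and each incoming question, then drop the best match into the chat prompt as context.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for embedding vectors. Real embeddings would come
# from the OpenAI embeddings API and have far more dimensions.
docs = {
    "How do I reset my password?": [0.9, 0.1, 0.0],
    "What is your refund policy?": [0.1, 0.9, 0.1],
    "How do I contact support?":   [0.2, 0.2, 0.9],
}

def top_match(query_embedding):
    # Return the document whose embedding is most similar to the query.
    return max(docs, key=lambda d: cosine_similarity(docs[d], query_embedding))

# A query embedding close to the "refund" document.
query = [0.15, 0.85, 0.05]
best = top_match(query)
# `best` is then injected into the prompt, alongside a few example
# Q&A pairs (the few-shot part), before the user's question.
```

The few-shot examples teach the model your bot's tone and format; the retrieved document supplies the facts it should answer from.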

If you have enough data, I would even suggest fine-tuning a davinci model and using it for your purposes alongside few-shot learning. This would make the model adept at returning responses for your bot-specific scenarios.
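If you do go the davinci fine-tuning route, the training data is a JSONL file of prompt/completion pairs. A small sketch, with hypothetical customer-service examples (the separator and stop-token conventions here follow OpenAI's general fine-tuning guidance, but adjust them to your own setup):

```python
import json

# Hypothetical training examples. Each record needs a "prompt" and a
# "completion"; the "\n\n###\n\n" separator and the " END" stop token
# are conventions you define consistently across all examples.
examples = [
    {"prompt": "Customer: How do I reset my password?\n\n###\n\n",
     "completion": " Go to Settings > Account > Reset Password. END"},
    {"prompt": "Customer: Can I get a refund?\n\n###\n\n",
     "completion": " Refunds are available within 30 days of purchase. END"},
]

# One JSON object per line -- this is the JSONL file you upload
# when creating the fine-tune job against the davinci base model.
jsonl = "\n".join(json.dumps(e) for e in examples)
```

In practice you would want hundreds of examples covering your bot-specific scenarios for the fine-tune to be worthwhile.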

Best video on the subject I have found so far: OpenAI Q&A: Finetuning GPT-3 vs Semantic Search - which to use, when, and why? - YouTube