Choosing Between Embeddings and Fine-Tuning in OpenAI's API for Character-Specific Dialogue Generation: A Case Study of Star Wars' Obi-Wan Kenobi

In a hypothetical scenario, suppose we want to use an OpenAI model to generate dialogue that closely resembles Obi-Wan Kenobi's from Star Wars. Which approach should we focus on: embeddings or fine-tuning? Alternatively, might neither be necessary, and could we explore feasibility with LangChain and a few-shot prompt instead?
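
For context, here is a minimal sketch of what I mean by the few-shot route. It assumes the older langchain 0.0.x prompt classes (FewShotPromptTemplate, PromptTemplate) and the OpenAI LLM wrapper, and the example exchanges are invented placeholders rather than canon lines:

```python
from langchain.llms import OpenAI
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

# Hand-written example exchanges in Obi-Wan's voice
# (illustrative placeholders, not actual quotes from the films).
examples = [
    {"user": "Should I trust my instincts?",
     "obiwan": "Your instincts serve you well, but patience serves you better."},
    {"user": "I'm afraid of failing.",
     "obiwan": "Fear of failure is often the surest path to it. Focus on the present moment."},
]

# How each individual example is rendered inside the prompt.
example_prompt = PromptTemplate(
    input_variables=["user", "obiwan"],
    template="User: {user}\nObi-Wan: {obiwan}",
)

# Combine a style instruction, the examples, and the new query.
few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix=("You are Obi-Wan Kenobi. Answer in his calm, measured, "
            "slightly wry speaking style."),
    suffix="User: {query}\nObi-Wan:",
    input_variables=["query"],
)

llm = OpenAI(temperature=0.7)  # requires OPENAI_API_KEY in the environment
print(llm(few_shot_prompt.format(query="What should I do when I feel lost?")))
```

My understanding is that the embeddings route would instead retrieve the most relevant real Obi-Wan lines at query time and inject them into the prompt as context, while fine-tuning would try to bake the speaking style into the model weights themselves.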

I understand this is a challenging task, but I'm eager to discuss it with anyone who is undertaking or considering a similar initiative.