Use case: Performance review assistant. Which of embeddings, fine-tuning, or in-line prompt engineering is the best way to provide context?

Hello!

A bit of context:
I’m working on a “Performance review assistant” as a way to understand OpenAI’s capabilities. The assistant needs to:

  • synthesize feedback from a large number of individuals with a variety of relationships to the subject, e.g. upward feedback to managers, peer feedback, manager feedback, etc.
  • summarize the feedback for the user. For example: “What are areas of improvement that engineers have identified for their managers?” or “What aspects of their jobs are support analysts most satisfied with?”
  • generate content for the user based on specific frameworks. For example: “summarize the feedback to the CFO using the start, stop, continue framework.” (a rough prompt sketch follows below)
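To make that last capability concrete, here is the kind of prompt template I have in mind for framework-based generation. This is just my own sketch; the section wording and the placeholder names are mine, not from any OpenAI example:

```python
# Illustrative only: a prompt template for framework-based generation.
# The framework section wording and placeholders are my own assumptions.
FRAMEWORK_PROMPT = """\
You are a performance-review assistant.
Summarize the feedback below for the {audience} using the start, stop, continue framework:
- Start: behaviours the feedback suggests beginning
- Stop: behaviours the feedback suggests ending
- Continue: behaviours the feedback praises

Feedback:
{feedback}
"""

# The collected feedback would be substituted in at query time.
prompt = FRAMEWORK_PROMPT.format(audience="CFO", feedback="<retrieved feedback snippets>")
```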

Creating this assistant would allow me to:

  • Learn how to provide the model with very specific domain information
  • Understand how to leverage the underlying model’s language generation capabilities to operate on my domain specific data
  • Apply my intuition to evaluate whether the responses from the model “sound right”

My questions:

What is the best way for me to supply the domain-specific data? The best guidance I’ve seen so far is the following cookbook notebook, which recommends embeddings over fine-tuning: openai-cookbook/Question_answering_using_embeddings.ipynb at main · openai/openai-cookbook · GitHub

I’m still early in my learning here, and wanted to clarify whether this approach would still allow me to use the Completions service to generate content, or whether I would be limited to the Answers service and simple retrieval applications.
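To make the question concrete, here is my rough mental model of the cookbook’s approach: embeddings are used only to retrieve the most relevant feedback snippets, which are then pasted into an ordinary Completions prompt, so generation wouldn’t be limited to retrieval-style answers. A minimal sketch, assuming the pre-v1 `openai` Python library; the toy corpus, model choices, and helper functions are my own guesses:

```python
import os

import numpy as np
import openai

# Reads the API key from the environment (pre-v1 openai-python style).
openai.api_key = os.environ["OPENAI_API_KEY"]

# Toy stand-in for my real corpus: (relationship, feedback text) pairs.
feedback = [
    ("upward", "My manager could delegate more instead of reviewing every change."),
    ("peer", "Great collaborator, but our meetings often run long."),
    ("manager", "Consistently ships on time; could mentor juniors more."),
]

def embed(text):
    """Embed one string with the same model the cookbook uses."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

# Pre-compute an embedding per snippet (in practice these would be stored).
doc_vectors = [(text, embed(text)) for _, text in feedback]

def top_k(question, k=2):
    """Return the k snippets most similar to the question (cosine similarity)."""
    q = embed(question)
    scored = sorted(
        doc_vectors,
        key=lambda tv: float(np.dot(q, tv[1]) / (np.linalg.norm(q) * np.linalg.norm(tv[1]))),
        reverse=True,
    )
    return [text for text, _ in scored[:k]]

question = "What are areas of improvement that engineers have identified for their managers?"
context = "\n".join(top_k(question))

# The retrieved snippets become in-line context for an ordinary Completions call,
# so the same setup could summarize or apply frameworks, not just retrieve.
prompt = (
    "Use the feedback below to answer the question.\n\n"
    f"Feedback:\n{context}\n\n"
    f"Question: {question}\nAnswer:"
)
completion = openai.Completion.create(model="text-davinci-003", prompt=prompt, max_tokens=200)
print(completion["choices"][0]["text"].strip())
```

If I’m reading the cookbook right, fine-tuning would instead bake the knowledge into the model weights, which seems harder to keep current as new review cycles add feedback. Is that the right way to think about it?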

Thanks in advance for any insight on this, crew!!