One vs two shot prompting for search integration

I see — so in this case I need to call the embedding service first for the prompt? (So it may take 3 calls to OpenAI to produce a reply.)

```mermaid
graph LR
  A[Get embedding for prompt via Ada] --> B[Check distance<br>via cosine similarity]
  B -- Close --> C[Get command from GPT-3.5]
  B -- Not close --> D[Simply reply]
  C --> F[Run command locally to get context]
  F --> E[Feed data plus context to GPT-3.5]
  D --> E
```
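The routing step in the diagram (embed the prompt, then branch on cosine similarity) could be sketched roughly like this. The function names, the command set, and the `threshold` value are all hypothetical placeholders — in practice the embeddings would come from an API call (e.g. to `text-embedding-ada-002`) rather than being hard-coded:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def route(prompt_embedding, command_embeddings, threshold=0.8):
    """Return the closest command name if it clears the threshold,
    otherwise None (i.e. fall through to 'simply reply')."""
    best_name, best_score = None, -1.0
    for name, emb in command_embeddings.items():
        score = cosine_similarity(prompt_embedding, emb)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Toy 2-D "embeddings" just to show the branch logic:
commands = {"search": [1.0, 0.1], "weather": [0.0, 1.0]}
print(route([1.0, 0.0], commands, threshold=0.9))   # close -> "search"
print(route([0.5, 0.5], commands, threshold=0.99))  # not close -> None
```

The threshold is the knob that decides how often you take the expensive path (command generation + local execution + final completion) versus replying directly, so it would need tuning against real embeddings.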