How do you format a QnA-type JSONL for GPT-3 fine-tuning?

I want to come up with a QnA-type format for GPT-3 fine-tuning. However, I'm having a hard time solving it. It seems that prompt design is very different from fine-tuning design. Could somebody share an example of how you format your GPT-3 fine-tuning JSONL? Here's my current format:

{"prompt":"Human: Can you help with a technical problem\nAI:","completion":"I understand that you need support. Would you like self service or do you need to report an issue? Please choose a button or just type in your answer."}
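As a side note, a minimal sketch of how such training lines could be generated programmatically. The Q&A pair below is just the one from this post; the leading space on the completion and the trailing `\n` stop sequence follow OpenAI's fine-tuning data preparation guidance (assumptions worth double-checking against the current docs):

```python
import json

# Hypothetical training pairs; replace with your own Q&A data.
pairs = [
    ("Human: Can you help with a technical problem\nAI:",
     "I understand that you need support. Would you like self service "
     "or do you need to report an issue?"),
]

def to_jsonl_line(prompt, completion):
    # Fixed separator at the end of the prompt ("AI:"), a leading space
    # on the completion, and "\n" as a stop sequence marking its end.
    return json.dumps({"prompt": prompt, "completion": " " + completion + "\n"})

lines = [to_jsonl_line(p, c) for p, c in pairs]
print(lines[0])
```

If the model was trained with `\n` as the stop marker, pass the same value as `stop` when you later call the completions endpoint.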

Here’s how I call it on the completion API:

curl https://api.openai.com/v1/completions \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"model": "<FINE_TUNE_MODEL>", "prompt": "Human: Can you help with a technical problem\nAI:"}'

The response of this curl call does not give me the completion that I defined. Has anybody else encountered this problem?

What do you mean by QnA? What is the actual domain?

Hi @daveshapautomator , I mean a simple question and answer.

I have a similar issue with my fine-tuned model.

Before: model=text-davinci-003
After: model=my-fine-tuned-model

Q&A should be accomplished using embeddings, not fine-tuning. It's much more efficient and easier to use.
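To make the embeddings suggestion concrete, a minimal sketch of the retrieval step: embed your knowledge-base snippets once, embed the user's question, and pick the snippet with the highest cosine similarity. The vectors and snippets here are toy stand-ins; in practice you would get the vectors from an embeddings endpoint:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for embedding vectors of knowledge-base snippets.
kb = {
    "reset password": [0.9, 0.1, 0.0],
    "report an outage": [0.1, 0.8, 0.2],
}
query_vec = [0.85, 0.15, 0.05]  # toy embedding of the user's question

best = max(kb, key=lambda snippet: cosine(query_vec, kb[snippet]))
print(best)  # -> "reset password"
```

The retrieved snippet is then pasted into the prompt as context, and the base model answers from it; no fine-tuning needed.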

how many samples are in your JSONL?

I have 33 samples in my JSONL file.

Isn't embeddings for web-crawling stuff? Anyway, thanks for the direction, I will check out embeddings too.

don’t even bother fine-tuning with so few samples.

and like others have said, fine-tuning is not ideal for “adding knowledge”

What is, then? I have to train GPT-3 on specific data … so does fine-tuning work?

This has been asked and discussed a million times; use the search.
Short answer: fine-tuning works fine, just not for that use case. For using a knowledge base, you should use embeddings.
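For completeness, a sketch of the last step of the embeddings workflow: once the most similar snippet has been retrieved, it is assembled into a prompt for the completions endpoint. The context snippet and wording below are hypothetical examples, not a prescribed template:

```python
# Snippet retrieved via embedding similarity (hypothetical example).
context = "To report an issue, choose the 'report an issue' button."
question = "Can you help with a technical problem"

# Retrieval-augmented prompt: the model is asked to answer only from
# the retrieved context, which is how a knowledge base gets "added"
# without any fine-tuning.
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context: {context}\n\n"
    f"Q: {question}\nA:"
)
print(prompt)
```

This prompt string is what you would send as `"prompt"` in the completions request, using a base model rather than a fine-tuned one.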