Fine-tuning doesn't return relevant completions

I am new to fine-tuning and decided to try it out using simple examples. I uploaded 10 questions and answers similar to the following:

{"prompt":"What is PICT's goal?->","completion":" PICT's goal is to become the top company of Pakistan and it continues its journey towards achieving this objective.->END"}
{"prompt":"Has PICT received any awards for its performance?->","completion":" Yes, PICT has won several awards for its performance in various sectors.->END"}
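As a sanity check on data like the above, here is a minimal sketch (the function name is my own) that verifies each JSONL training line follows the conventions used in these examples: the prompt ends with the "->" separator, and the completion starts with a space and ends with the "->END" stop sequence.

```python
import json

def validate_line(raw: str) -> list[str]:
    """Return a list of formatting problems found in one JSONL training line."""
    record = json.loads(raw)
    prompt, completion = record["prompt"], record["completion"]
    problems = []
    if not prompt.endswith("->"):
        problems.append("prompt should end with the '->' separator")
    if not completion.startswith(" "):
        problems.append("completion should start with a space")
    if not completion.endswith("->END"):
        problems.append("completion should end with the '->END' stop sequence")
    return problems

line = '{"prompt":"What is PICT\'s goal?->","completion":" PICT\'s goal is to become the top company of Pakistan.->END"}'
print(validate_line(line))  # an empty list means the line is well-formed
```

When you then query the fine-tuned model, the same conventions apply at inference time: append "->" to your question and pass "->END" as the stop sequence.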

While experimenting in the Playground, I receive irrelevant results, even when my question is 'What is the goal of PICT?', and even when it matches the training prompt exactly!

I am having the exact same problem. I have followed numerous tutorials and get totally inaccurate responses, even when copying the exact “Prompt”.

{"prompt":"What is the 'M02' table used for?","completion":"The 'M02' table is used for storing Current, History and Deleted Jobs###"}

When asked “What is the ‘M02’ table used for?” it replies
“the ‘m02’ table is used for storing jobs diary attachments/images”

I don’t know if something has changed recently for the training process, but it certainly does not appear to work correctly now.

Welcome to the Forum!

Fine-tuning is best used for teaching the model new ways to think and new patterns to recognise; it is not an ideal solution for teaching the model new data or facts. You may get better results using embeddings for that task. You can find the documentation for that on the OpenAI Platform site.
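To illustrate the embeddings approach, here is a minimal sketch. It assumes the vectors have already been obtained from an embeddings endpoint; the toy 3-dimensional vectors below are hand-written stand-ins for real embeddings. The idea: embed your documents once, embed the incoming question, then return the document with the highest cosine similarity (or feed it to the model as context for answering).

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def most_similar(query_vec, documents):
    """Return the text of the document whose embedding is closest to query_vec.

    `documents` is a list of (embedding, text) pairs; in practice the
    embeddings would come from an embeddings API, not be hand-written.
    """
    return max(documents, key=lambda doc: cosine_similarity(query_vec, doc[0]))[1]

# Toy stand-ins for real embedding vectors:
docs = [
    ([0.9, 0.1, 0.0], "PICT's goal is to become the top company of Pakistan."),
    ([0.1, 0.9, 0.0], "PICT has won several awards for its performance."),
]
query = [0.8, 0.2, 0.1]  # pretend this is the embedding of "What is PICT's goal?"
print(most_similar(query, docs))
```

Because retrieval looks up the stored answer rather than relying on the model's weights, an exact or near-exact question reliably surfaces the matching fact, which is exactly where fine-tuning was falling short above.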

Thanks for your reply,
I actually came to the same conclusion while doing some research this afternoon.
😀 👍
