Best practice around fine tuning?

Hello! I own the site, which has received ~350K GPT-3 API requests in just a couple of weeks since launch.

The website provides the recommended Excel formula for a given problem someone is trying to solve.

There are some common themes in what davinci-002 consistently gets incorrect, such as failing to distinguish "contains" from "equals".

I successfully uploaded a sample JSONL file, but I'm running into an issue where the response repeats the prompt: "Create the Microsoft Excel formula for the following problem: [dynamic entry]".

Additionally, I’m curious… roughly 20% of the formula requests are deemed “incorrect” by the user (and myself, after auditing). Is it best practice to only upload records that were previously incorrect via the davinci-002 model then manually corrected? Should I include the ones that davinci-002 got correct, as well?

Thank you in advance! If this is asking for too much, I understand. I’d be more than happy to lean on someone for consulting.

Thanks, David

Since you’re getting user feedback, all you need to do is accumulate correct responses and you’re golden.

For the repetition, you need to add a stop sequence so the model knows where a completion ends.
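For context, here's a minimal sketch of what that looks like in the legacy completions-style fine-tuning format. The `"\n\n###\n\n"` separator, the `" END"` stop token, and the helper name are illustrative assumptions, not anything prescribed in this thread:

```python
import json

# Hypothetical helper: format one training record for a legacy
# completions-style fine-tune. The prompt ends with a fixed separator
# and the completion ends with an explicit stop token, so the model
# learns where to stop instead of echoing the prompt back.
SEPARATOR = "\n\n###\n\n"
STOP = " END"

def to_training_record(problem: str, formula: str) -> dict:
    return {
        "prompt": f"Create the Microsoft Excel formula for the following problem: {problem}{SEPARATOR}",
        # Leading space on the completion helps tokenization in this format.
        "completion": f" {formula}{STOP}",
    }

record = to_training_record(
    "Check whether cell A1 contains the word 'apple'",
    '=ISNUMBER(SEARCH("apple", A1))',
)
print(json.dumps(record))
```

At inference time you would then end your prompt with the same separator and pass your stop token (e.g. `stop=[" END"]`) to the completions call, so generation halts instead of repeating the prompt.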

You only want high-quality, correct examples. Their source (synthetic or not) doesn't matter.
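One way to act on that with the feedback you already collect: keep every record judged correct, whether it came straight from davinci-002 or was manually fixed afterward, and drop the rest. The record fields (`problem`, `formula`, `correct`) are made-up names for illustration:

```python
# Hypothetical feedback records: the user's problem, the formula that
# was served (or manually corrected), and whether it was judged correct.
records = [
    {"problem": "Sum column B where A equals 'East'",
     "formula": '=SUMIF(A:A,"East",B:B)', "correct": True},
    {"problem": "Does A1 contain 'tax'?",
     "formula": '=A1="tax"', "correct": False},   # wrong: equality, not containment
    {"problem": "Does A1 contain 'tax'?",
     "formula": '=ISNUMBER(SEARCH("tax",A1))', "correct": True},  # manual fix
]

# Keep only verified-correct examples, regardless of source.
training = [r for r in records if r["correct"]]
print(len(training))  # 2 of the 3 survive
```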
