InstructGPT and fine-tuning

Hi,

I am looking for information on fine-tuning and InstructGPT. I'd like to know how it differs from the previous version of GPT, and in which cases it's better to fine-tune rather than rely on prompt engineering.

Is there any info out there on this topic?

In my personal experience, it is possible to achieve results with n-shot prompts using Davinci that previously required a custom-trained (fine-tuned) model.
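To make that concrete, here is a minimal sketch of what an n-shot prompt looks like with the legacy Completions API, assuming openai-python < 1.0 and a Davinci-class model; the task, examples, and model name are just illustrative placeholders.

```python
# Minimal sketch of an n-shot (few-shot) prompt with the legacy OpenAI
# Completions API (openai-python < 1.0). Model name and task are assumptions;
# adjust to the SDK version and models available to you.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# A 3-shot sentiment-classification prompt: the in-context examples do the
# work a fine-tuned model would otherwise be trained to do.
prompt = """Classify the sentiment of each review as Positive or Negative.

Review: The battery lasts all day and the screen is gorgeous.
Sentiment: Positive

Review: It broke after two days and support never replied.
Sentiment: Negative

Review: Setup took five minutes and it just works.
Sentiment: Positive

Review: The instructions were confusing and parts were missing.
Sentiment:"""

response = openai.Completion.create(
    model="text-davinci-003",  # assumption: any Davinci-class completion model
    prompt=prompt,
    max_tokens=3,
    temperature=0,             # deterministic output for classification
    stop=["\n"],               # stop after the single-label answer
)
print(response.choices[0].text.strip())  # expected: "Negative"
```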

I would consider fine-tuning Davinci to improve consistency of results, increase speed, and, to a lesser extent, reduce cost.

But if cost and speed are very important considerations, it is worth testing a fine-tuned Curie model to see whether you can achieve the desired results; see the sketch below.
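For reference, this is roughly what the legacy fine-tunes flow looked like with openai-python < 1.0 (the /v1/fine-tunes endpoint used for base Curie/Davinci models). The file name, training examples, and field values here are assumptions for illustration; check the docs for the SDK version you actually have installed.

```python
# Sketch of the legacy fine-tunes flow (openai-python < 1.0), under the
# assumption that your data fits the {"prompt", "completion"} JSONL format.
import json
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Training data: one {"prompt", "completion"} pair per line of a JSONL file.
examples = [
    {"prompt": "Review: It broke after two days.\nSentiment:", "completion": " Negative"},
    {"prompt": "Review: Setup took five minutes and it just works.\nSentiment:", "completion": " Positive"},
]
with open("sentiment.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the file, then start a fine-tune on Curie (cheaper and faster than Davinci).
training_file = openai.File.create(file=open("sentiment.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=training_file["id"], model="curie")
print(job["id"])  # poll this job until it finishes, then use the resulting model name
```

If the fine-tuned Curie model holds up on your evaluation prompts, you get lower per-token cost and latency than Davinci; if not, fall back to fine-tuning Davinci.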

Do read how InstructGPT is better than GPT-3 by Labellerr.

Is there any documentation on how many words (tokens) to use in a prompt to write better functionality inside my GPT?