How effective is fine-tuning GPT-3.5 Turbo, and does it show any significant improvement over few-shot learning? I have fine-tuned the model but couldn't tell the difference between the fine-tuned responses and the base model's responses. I used 50 datapoints.
How many datapoints are needed for the model to produce noticeably better responses?
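
For context, here is a minimal sketch of the kind of fine-tuning workflow being asked about, using the standard OpenAI Python SDK (the original script isn't included in the question; the file name, prompts, and model IDs below are placeholders):

```python
# Minimal sketch: fine-tune GPT-3.5 Turbo with the OpenAI Python SDK (v1.x).
# Assumes OPENAI_API_KEY is set and train.jsonl holds ~50 chat-format examples, e.g.:
# {"messages": [{"role": "system", "content": "..."},
#               {"role": "user", "content": "..."},
#               {"role": "assistant", "content": "..."}]}
from openai import OpenAI

client = OpenAI()

# Upload the training file
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print("Job ID:", job.id)

# Once the job has finished, retrieve the fine-tuned model name and
# compare its output against the base model on the same prompt
ft_model = client.fine_tuning.jobs.retrieve(job.id).fine_tuned_model
response = client.chat.completions.create(
    model=ft_model,  # e.g. "ft:gpt-3.5-turbo:my-org::abc123" (placeholder)
    messages=[{"role": "user", "content": "Same prompt used with the base model"}],
)
print(response.choices[0].message.content)
```

Comparing the fine-tuned model and the base model on identical prompts, as in the last step above, is the kind of side-by-side check where the question says no difference was visible.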