Hey all!
I’m looking to fine-tune GPT on a task where I want little to no random deviation in its responses. I have 100 prompt/response pairs ready for fine-tuning, but with so few examples, will I still get the pesky random responses? Is it worth using GPT to generate more responses, hand-picking the good ones, and adding them to the training set?
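In case it matters, here's a minimal sketch of what my training data looks like — I'm assuming the standard OpenAI chat fine-tuning JSONL shape, and the file name and example content below are just placeholders, not my real data:

```python
import json

# One training example per line of the JSONL file, in the chat
# fine-tuning shape (placeholder content -- my real 100 pairs
# follow the same structure).
examples = [
    {
        "messages": [
            {"role": "system", "content": "Answer in exactly one sentence."},
            {"role": "user", "content": "What is the capital of France?"},
            {"role": "assistant", "content": "The capital of France is Paris."},
        ]
    },
]

# Quick sanity check before uploading: every example must round-trip
# as JSON and end with an assistant message, since that's the part
# the model is trained to produce.
for ex in examples:
    parsed = json.loads(json.dumps(ex))
    assert parsed["messages"][-1]["role"] == "assistant"
```

(I'm also already setting `temperature=0` at inference time; my question is specifically about whether the small training set itself causes the deviations.)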
Thanks in advance!