I am looking to fine-tune GPT-3.5 for a specific application involving code generation. Because of the niche nature of the application, the generated code is likely to contain errors. To address this, I plan to fine-tune the model on specific documentation and programming books written in natural language. I am having difficulty converting this data into the required format of "prompt" and "completion" pairs. Can you provide guidance on how to properly fine-tune the model using this type of data?
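For what it's worth, at the time of writing GPT-3.5 fine-tuning expects a JSONL file where each line is a `messages` conversation (system/user/assistant), rather than the legacy flat prompt/completion pairs. A minimal sketch of turning a documentation section into one training record might look like this; the section titles, system prompt, and `to_record` helper are all made-up placeholders, not anything from an official pipeline:

```python
import json

# Hypothetical documentation sections: (title, body) pairs.
# In practice you would parse these out of your docs or books.
doc_sections = [
    ("Opening a file",
     "Use open(path, mode) and always close the handle, or use a "
     "with-statement so it closes automatically."),
]

def to_record(title, body):
    # One JSONL line per training example, in the chat-messages
    # format that GPT-3.5 fine-tuning expects.
    return {
        "messages": [
            {"role": "system",
             "content": "You answer questions about our library."},
            {"role": "user",
             "content": f"How do I do the following: {title}?"},
            {"role": "assistant", "content": body},
        ]
    }

# Serialize every section as one line of the training file.
jsonl_lines = [json.dumps(to_record(t, b)) for t, b in doc_sections]

with open("train.jsonl", "w") as f:
    f.write("\n".join(jsonl_lines))
```

The awkward part, as you note, is inventing the "user" side: one common workaround is to treat each section heading as a stand-in question and the section body as the answer, as sketched above.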
Yeah, I too would like to know how to just feed it data instead of the question/optimal-answer format … there is no "optimal answer" if I just want it to read a Wikipedia article.