A week ago, I was able to fine-tune gpt-3.5-turbo-0613 with training data in this prompt-completion template:

{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
Now, a week later, the same training format no longer works. It says: “Error creating job: File ‘file-OYCuxihesVrzixeHkm39l’ is in prompt-completion format. The model gpt-3.5-turbo-0613 requires data in the chat-completion format.”
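Not an official fix, but a minimal sketch of converting an existing prompt-completion JSONL file into the chat-completion format the error is asking for (the file paths and the choice to map prompt → user and completion → assistant are my own assumptions, not from the error message):

```python
import json

def prompt_completion_to_chat(record: dict) -> dict:
    """Wrap a {"prompt", "completion"} record in the chat-completion schema."""
    return {
        "messages": [
            {"role": "user", "content": record["prompt"]},
            {"role": "assistant", "content": record["completion"]},
        ]
    }

def convert_file(src_path: str, dst_path: str) -> None:
    # Each line of a JSONL training file is one standalone JSON object.
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            if line.strip():
                converted = prompt_completion_to_chat(json.loads(line))
                dst.write(json.dumps(converted) + "\n")
```

You would then upload the converted file and point the fine-tuning job at that instead of the old one.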
Hi there - my last fine-tuning run on gpt-3.5-turbo-1106 was last Thursday. I used the following schema for my training data:

{"messages": [{"role": "system", "content": "input"}, {"role": "user", "content": "input"}, {"role": "assistant", "content": "output"}]}
{"messages": [{"role": "system", "content": "input"}, {"role": "user", "content": "input"}, {"role": "assistant", "content": "output"}]}
This worked fine for me.
Update: I also just tested this again by starting a fine-tuning job, and it still works with this approach. Good luck!
Yes, you are missing something: What would be the point in training an AI model to respond to inputs that you can’t actually provide to the model in use via the API?
The prompt/completion format is for completion models, which don’t wrap messages in the special-token “role” containers of ChatML.
Completions are deprecated and set to be removed in January. Maybe they are just getting ahead of themselves. Best to go back to the API docs and update as needed.
gpt-3.5-turbo-instruct cannot be fine-tuned, although that might be high on a wishlist. So that would not be the source of the perceived change.
Base models babbage-002 and davinci-002 take the prompt/completion training file format. (and no, their endpoint is not being shut down).
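To make the distinction concrete, here is a small sketch (the function name is my own, not an official utility) that tells which fine-tuning file format a single JSONL training line uses:

```python
import json

def detect_format(jsonl_line: str) -> str:
    """Return which fine-tuning file format a single JSONL line uses."""
    record = json.loads(jsonl_line)
    if "messages" in record:
        return "chat-completion"    # required by gpt-3.5-turbo models
    if "prompt" in record and "completion" in record:
        return "prompt-completion"  # accepted by babbage-002 / davinci-002
    return "unknown"
```

Running this over each line of a training file before uploading would catch the mismatch that produced the error above.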
OpenAI may well have left a training file option open that would result in mis-training gpt-3.5-turbo for those who went against the documentation, but you can thank them for fixing it.