How will fine-tuning GPT-3.5 Turbo / GPT-4 differ from the current base model fine-tuning?

I’m trying to imagine/theorize how the upcoming GPT-3.5 Turbo / GPT-4 fine-tuning will work…

I’m presuming they’re referring to finetuning the instruct model versions, correct?

How will our datasets change? Will we be formatting our fine-tuning datasets in a similar fashion to our prompts?

I’m trying to get a leg up in collecting/formatting my data for the upcoming feature.
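Purely speculative, but if the training format does end up mirroring the chat prompts, I imagine one record per JSONL line looking something like the sketch below. The `messages` / `role` / `content` keys and the `train.jsonl` filename are just my guesses based on the chat completions API, not anything announced:

```python
import json

# Speculative sketch: one fine-tuning record per JSONL line, shaped like a
# chat completions request body. Field names are my assumption, not a spec.
example = {
    "messages": [
        {"role": "system", "content": "You are a terse support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Go to Settings > Security > Reset password."},
    ]
}

# Append the record to a hypothetical training file.
with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example, ensure_ascii=False) + "\n")
```

Even if the real format turns out different, data collected this way (system/user/assistant turns) should be easy to reshape.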


I’m pondering that too.

The skill of ChatGPT models comes from the potentially millions of human-feedback training examples behind them. How does one overcome that? If your own training is given much more weight, how easily will it simply break the skills we already rely on?

We can guess that the ChatML format will continue, although completion-model replacements (or models that will follow instructions to complete) have also been announced and could be the basis of tunes. If you’re working on training data now, I would make sure its turn contents can be easily processed into any container or role format, present or future — a rough sketch of what I mean is below.
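Illustrative only: I keep turns as plain (role, content) pairs so the same data can be rendered into a ChatML-style container or a legacy prompt/completion pair later. The token layout follows the published ChatML sketch and the role names and example text are just placeholders; whatever container the tuned models actually expect could be swapped in here:

```python
# Store turns as simple (role, content) pairs, independent of any container.
turns = [
    ("system", "You are a terse support assistant."),
    ("user", "How do I reset my password?"),
    ("assistant", "Go to Settings > Security > Reset password."),
]

def to_chatml(turns):
    # Token layout per the published ChatML sketch; the real tuning
    # container may differ.
    return "".join(f"<|im_start|>{role}\n{content}\n<|im_end|>\n" for role, content in turns)

def to_completion_prompt(turns):
    # Legacy prompt/completion style: everything before the final assistant
    # turn becomes the prompt, the final turn becomes the completion.
    *context, (_role, completion) = turns
    prompt = "\n".join(f"{r.capitalize()}: {c}" for r, c in context) + "\nAssistant:"
    return prompt, " " + completion

print(to_chatml(turns))
print(*to_completion_prompt(turns), sep="\n")
```

Keeping the turns portable like this means a format change costs one new render function rather than a re-collection of the data.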