The skill of ChatGPT models comes from what is potentially millions of human-feedback training examples. How can one overcome that? If your fine-tuning is given much more weight, how easily will it simply break all the skills we know?
We can guess that the ChatML format will continue, although completion-model replacements (or models that will follow instructions to complete) have also been announced and could be the basis of tunes. If I were working on training data, I would make sure its turn contents can easily be processed into any container or role format, present or future.
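Here is a minimal sketch of what I mean: keep each example as plain (role, content) turns, and only render them into a concrete container at export time. The ChatML token strings follow the published format; the messages-style JSONL and the "Human:/AI:" completion labels are just illustrative assumptions, not anyone's official spec.

```python
import json

# Format-agnostic storage: each example is just a list of (role, content) turns.
example = [
    ("system", "You are a helpful assistant."),
    ("user", "Summarize the plot of Hamlet in one sentence."),
    ("assistant", "A Danish prince feigns madness while avenging his father's murder."),
]

def to_chatml(turns):
    """Render turns in the ChatML container used by current chat models."""
    return "".join(
        f"<|im_start|>{role}\n{content}<|im_end|>\n" for role, content in turns
    )

def to_messages_jsonl(turns):
    """Render turns as one JSONL line of role/content messages (assumed schema)."""
    return json.dumps({"messages": [{"role": r, "content": c} for r, c in turns]})

def to_completion(turns, role_labels=None):
    """Render turns as a prompt/completion pair for a completion-style model."""
    # Role labels here are hypothetical; swap in whatever a future container expects.
    role_labels = role_labels or {"system": "", "user": "Human: ", "assistant": "AI: "}
    *context, (last_role, last_content) = turns
    prompt = "\n".join(f"{role_labels[r]}{c}" for r, c in context) + f"\n{role_labels[last_role]}"
    return {"prompt": prompt, "completion": last_content}

print(to_chatml(example))
print(to_messages_jsonl(example))
print(to_completion(example))
```

The point is only the separation: the turns themselves never encode a container, so when the role names or wrapper tokens change, you rewrite one small renderer instead of your whole dataset.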