As GPT-4 and GPT-3.5 fine-tuning are rolled out more completely, will there be more options to fine-tune in the original prompt-completion format again? Having this format phased out along with Davinci, Babbage, etc. is frustrating, because there are many use cases where it is much more efficient in terms of training load and cost. Now, to achieve the same effect, I have to expand my training data by almost a factor of 10 for what seems like no net benefit.
babbage-002 takes normal completion prompts as plain-text strings, so there should be no reason to train it via examples and an endpoint that enforce a "chat" message format (which actual usage won't provide).
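For anyone comparing the two, here is a minimal sketch of the difference between the legacy prompt-completion JSONL records and the chat-format records the newer fine-tuning endpoint expects (the example strings and system message are just illustrative, not from my actual data):

```python
import json

# Legacy prompt-completion fine-tuning record, as used with the
# davinci/babbage-era base models: one flat prompt/completion pair
# per JSONL line, plain text in and plain text out.
legacy_record = {
    "prompt": "Translate to French: cheese ->",
    "completion": " fromage\n",
}

# Chat-format fine-tuning record: the same training pair wrapped in
# a "messages" list with roles, which is the structure the newer
# chat fine-tuning endpoint enforces.
chat_record = {
    "messages": [
        {"role": "system", "content": "You translate English to French."},
        {"role": "user", "content": "cheese"},
        {"role": "assistant", "content": "fromage"},
    ]
}

# Each record becomes one line of the JSONL training file.
print(json.dumps(legacy_record))
print(json.dumps(chat_record))
```

The wrapping is where the token overhead comes from: every example now carries role labels and (optionally) a repeated system message on top of the actual training text.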
Fingers crossed that the upcoming instruct model doesn’t use the chat structure.
Understood. It was the chat message format that I did not want. At this point, training any model on the prompt-completion format would essentially be a waste of time, since the only models that currently support it are being discontinued. The prompt-completion format suits what I am trying to do much better and would be far more efficient for what I am trying to train.