Future of Fine-Tuning Models

Hey all,

Given that none of the GPT models (GPT-3.5 Turbo, GPT-4, etc.) can be fine-tuned yet, do we think the “new” fine-tuning process will be analogous to how the eval process works? The eval process feels like an “easier” way to fine-tune, with the system prompt, ideal answer, etc. Thoughts?
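For reference, here’s a minimal sketch of what an eval-style sample looks like as I understand it: each line of the .jsonl is one JSON object with chat-format messages as `input` (system prompt included) and an `ideal` answer to grade against. The prompts and file name are made up for illustration.

```python
import json

# Hypothetical eval-style sample: chat-format "input" messages plus an
# "ideal" answer to grade against. Field names follow the OpenAI evals
# samples format as I understand it; content is illustrative.
sample = {
    "input": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    "ideal": "Paris",
}

# One JSON object per line, standard .jsonl
with open("my_eval_samples.jsonl", "w") as f:
    f.write(json.dumps(sample) + "\n")
```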

I like the old/current process; I’m not a big fan of the new eval process since it’s tied to GitHub.

-My 2 cents.


I’m assuming less about the formality of GitHub and more about the structure of the eval process and its design.

The .jsonl files appear identical between evals and the current fine-tuning method. The behind-the-scenes inner workings are a black box to me. It seems like the evals are used more for RLHF, which, if that’s what you’re getting at, then maybe? What process would you like to see?
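For comparison, here’s a minimal sketch of the current/legacy fine-tuning record shape: the same one-JSON-object-per-line .jsonl file, but with `prompt`/`completion` pairs instead of chat messages. The separator and leading-space conventions are the ones the old base-model docs recommended; treat them (and the file name) as illustrative.

```python
import json

# Legacy fine-tuning record: one prompt/completion pair per line.
# The "\n\n###\n\n" separator and the leading space on the completion
# follow the conventions the old docs suggested for base models.
record = {
    "prompt": "What is the capital of France?\n\n###\n\n",
    "completion": " Paris",
}

with open("train.jsonl", "w") as f:
    f.write(json.dumps(record) + "\n")
```

If memory serves, a file like this then gets submitted with the legacy CLI, e.g. `openai api fine_tunes.create -t train.jsonl -m davinci`.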

Personally, I’m waiting for Einstein, or whatever they’ll call the next-gen “E” model that follows the “D” model, DaVinci. But I’m not even sure that’s where they’re headed either.
