Do you think the ability to fine-tune GPT-3.5 and GPT-4 is coming soon? It's very frustrating that we can only fine-tune the older, more expensive, less capable models.
I at least hope so!
One way to do a bit in this direction would be to submit some context with the API request, in front of the user's request. It works well when you have around 3–4 pages of text. Have you considered this option already?
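Roughly, the idea looks like this (a minimal sketch assuming the pre-1.0 `openai` Python client with `OPENAI_API_KEY` set; the file name, instructions, and question are placeholders):

```python
# Sketch: prepend a few pages of reference context to the user's request
# via the Chat Completions API (pre-1.0 openai-python client assumed).
import openai

# Placeholder: a few pages of reference text the model should rely on.
context_document = open("reference_notes.txt").read()

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The context goes in front of the actual user request.
        {
            "role": "system",
            "content": "Answer using only the reference material below.\n\n"
            + context_document,
        },
        {
            "role": "user",
            "content": "Summarise the key points relevant to my question.",
        },
    ],
    temperature=0,
)

print(response["choices"][0]["message"]["content"])
```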
Yes, I do try prompt engineering before fine-tuning, but it's easy to hit its limits, especially with GPT-4 not yet available via the API (for me, at least).
With the extra emphasis on “safety”, it seems they're killing off fine-tuning. OpenAI is getting more Closed by the day; GPT-4 doesn't even have a public API. It seems like you ought to just use Llama or Open Assistant if you need fine-tuning these days.
I have a novel solution to this! Well… it WAS novel about two weeks ago, but ChatGPT plugins and that recent GPT-4 paper covered most of what I figured out. Either way, check it out at www.glassacres.com and take a look at my rudimentary architecture for Recombinant AI and Modal-IDs.
I think I've got a pretty good solution for creating a mini fine-tuning environment.