What is the difference between Assistants and Fine-tuning? And can I train the Assistants model first and then use Fine-tuning?
Assistants is a high-level agent-framework API from OpenAI. It basically adds some functionality and behavior on top of the core OpenAI API.
Fine-tuning is the process of giving a model additional training related to how it should behave (but not what it needs to know).
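To make "behavior on top of the API" concrete, here is a minimal sketch of the Assistants (beta) workflow using the `openai` Python SDK. The runnable part just assembles the request parameters; the commented lines show how they would be sent. The model name, instructions, and tool choice are placeholders, not a recommendation:

```python
# The Assistants API (beta) layers server-side state (threads) and built-in
# tools on top of plain chat completions. Parameters below are illustrative.
assistant_params = {
    "model": "gpt-4-turbo",
    "instructions": "Answer questions using the attached product docs.",
    "tools": [{"type": "retrieval"}],  # built-in retrieval over uploaded files
}

# Sending the request needs the `openai` package and an API key:
#
#   from openai import OpenAI
#   client = OpenAI()
#   assistant = client.beta.assistants.create(**assistant_params)
#   thread = client.beta.threads.create()  # conversation state lives server-side
```

Note that none of this changes the model's weights — the Assistant is configuration around a model, which is exactly why "training an Assistant" isn't really a thing.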
Well, okay, I've read the documentation. But another question: I set up an Assistant, tested it, and realized that at some points the bot doesn't respond the way I need. Can I use fine-tuning to train that bot?
- You need to clarify what exactly you mean by “trained Assistants”, because there is no such generally used terminology (at least not yet).
- Assistants in general are in beta and it is hard to have them answer the way you want.
- You can fine-tune a model, not the bot.
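To illustrate that last point: fine-tuning goes through the files and fine-tuning endpoints against a *base model*, never against an Assistant object. A minimal sketch — the file name and example messages are placeholders, not real training data:

```python
import json

# Fine-tuning data must be chat-formatted JSONL: one training example per line.
# These messages are placeholders for your own examples of desired behavior.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support bot for ACME."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Go to Settings > Security and click Reset."},
        ]
    }
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# The file is then uploaded and a job started against a base model
# (requires the `openai` package and an API key):
#
#   from openai import OpenAI
#   client = OpenAI()
#   upload = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=upload.id, model="gpt-3.5-turbo")
```

The job produces a new fine-tuned model ID, which you can then select when creating an Assistant — so the order is the reverse of what you described: fine-tune the model first, then point the Assistant at it.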
P.S. Man, you freaked me out! When I got a notification about your answer with a quote in Russian, I thought I'd started writing in Russian on the forum.
Oh. I used the page translator and forgot to turn it off. That was funny.
How should I submit my data so that the model bases its responses on the information I provided?
I understand everything now. Thanks a lot for the reply!
Happy it helped. Good luck on your journey!
Hey, I was wondering if you could share what your solution was. I'm looking to train a GPT model on industry-specific data and get specific behaviour in its replies.
I was thinking about it the same way as you: first train the Assistant on the data, then use fine-tuning to get the replies I want.
Does the Assistants feature use both fine-tuning and RAG under the hood, intelligently as appropriate? Basically, does it make it less necessary to implement fine-tuning and RAG from scratch?