Hello! I’ve been exploring the newly launched Assistants features and I’m curious whether the context provided to each Assistant works like fine-tuning a model on entity-specific data.
Can an Assistant also incorporate fine-tuning files or training, or are they mutually exclusive? Could someone clarify this for me, please?
Following this thread! It would be interesting to get more info on how Assistants (with context and files) compare to a fine-tuned model. It seems like Assistants might be the better way to go, since they allow the use of multiple tools!
+1
I just set up a support-agent Assistant to read my knowledge base. It’s really close to being accurate, and I feel like if there were a tuning option it could get there quicker.
Right now, I’d have to rewrite many articles to make them more AI-friendly at the expense of making them less clear to humans.
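For anyone curious, the setup is roughly this; a minimal sketch with the launch-era Python SDK, using placeholder file names and instructions:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a knowledge-base article for the Assistant to search over.
kb_file = client.files.create(
    file=open("kb_article.md", "rb"),  # placeholder file name
    purpose="assistants",
)

# Create the support-agent Assistant with the retrieval tool enabled.
assistant = client.beta.assistants.create(
    name="Support Agent",
    instructions="Answer support questions using only the attached knowledge base.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[kb_file.id],
)
print(assistant.id)
```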
+1
What I’d like is to set up an Assistant with file retrieval AND a fine-tuned model generating the answers.
OpenAI’s own documentation explains that fine-tuning and data retrieval are complementary approaches.
However, I still can’t see how I can leverage both of them in the same Assistant.
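In principle it should just be a matter of passing the fine-tuned model’s ID as the Assistant’s `model`. Here is a minimal sketch of what that would look like, assuming the Assistants API accepts fine-tuned model IDs (the model and file IDs below are made-up placeholders):

```python
from openai import OpenAI

client = OpenAI()

# Assumption: the Assistants API accepts a fine-tuned model ID in `model`.
# Both IDs below are placeholders, not real resources.
assistant = client.beta.assistants.create(
    name="Tuned Support Agent",
    instructions="Answer using the attached files; match our support tone.",
    model="ft:gpt-3.5-turbo-1106:my-org::placeholder",  # hypothetical fine-tune ID
    tools=[{"type": "retrieval"}],
    file_ids=["file-placeholder"],  # a previously uploaded knowledge-base file
)
```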
I’ll try to write the code now; I don’t think it could hurt the wallet too much if the systems are complementary.
The solution I would implement is to have three OpenAI Assistants, each making function calls to:
- generate model training data
- validate the training data
- create the model
To create assistants - https://platform.openai.com/docs/assistants/overview
I’ll go read up on fine-tuning now
https://platform.openai.com/docs/api-reference/fine-tuning/create
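From a first read, kicking off a job looks roughly like this; a minimal sketch assuming a prepared train.jsonl (the file name is a placeholder):

```python
from openai import OpenAI

client = OpenAI()

# Upload the training examples (chat-format JSONL).
training_file = client.files.create(
    file=open("train.jsonl", "rb"),  # placeholder file name
    purpose="fine-tune",
)

# Start the fine-tuning job on a tunable base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo-1106",
)
print(job.id, job.status)
```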
P.S.
I’m tapping out early. It seems to be geared toward commercial use. The docs say we need a minimum of 50-100 high-quality examples to train the model in batches.
It suggests focusing on getting good prompt-completion pairs, which I don’t have yet; those will translate into the example data for training a model.
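For reference, each line of the training file is one chat-format example along these lines (the contents are made-up placeholders):

```jsonl
{"messages": [{"role": "system", "content": "You are a support agent for Acme."}, {"role": "user", "content": "How do I reset my password?"}, {"role": "assistant", "content": "Go to Settings > Security and click 'Reset password'."}]}
```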
I figure when I get 200-300 specific prompts I’ll give it a go.