OK, here's a very basic question, as I've been getting very mixed results from my prompts and trials.
Let's say I use the Assistants API with file search. Some of the files describe a particular training model that I have, while other files contain data on what I offer as a commercial service.
If I prompt it to use the training model described in the attachments and to draw on the information about my services when replying to the user, that's a kind of training/fine-tuning without the fine-tuned data, right?
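For context, my setup is roughly like this (the file names are placeholders, and the exact method paths may differ depending on your openai-python version):

```python
from openai import OpenAI

client = OpenAI()

# Put the documents into a vector store so file_search can retrieve from them
vector_store = client.beta.vector_stores.create(name="my business docs")
client.beta.vector_stores.file_batches.upload_and_poll(
    vector_store_id=vector_store.id,
    files=[open("training_model.pdf", "rb"), open("services.pdf", "rb")],
)

# Create the assistant and tell it (via instructions) to rely on those files
assistant = client.beta.assistants.create(
    model="gpt-4o",
    instructions=(
        "Use the training model described in the attached files, and answer "
        "questions about my services using the information in those files."
    ),
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)
```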
Not really - fine-tuning specifically refers to further training that modifies the weights of the underlying neural network.
Prompts (or RAG) don’t change the weights.
If your documents contain a bunch of examples that are contextually retrieved, it might be more fitting to call it 'in-context learning with RAG' (retrieval-augmented generation) or something, if you need a marketing buzzword.
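For example, here's a minimal sketch of that pattern, assuming some separate retrieval step has already handed you the relevant chunks (the function, model name, and prompt wording are just illustrative):

```python
from openai import OpenAI

client = OpenAI()

def answer_with_rag(question: str, retrieved_chunks: list[str]) -> str:
    """In-context learning with RAG: retrieved text goes into the prompt;
    the model's weights are never modified."""
    context = "\n\n".join(retrieved_chunks)
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat model works here
        messages=[
            {
                "role": "system",
                "content": "Answer using only the context below.\n\nContext:\n" + context,
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

The retrieved text only lives in the prompt for that one request; the model itself is exactly the same before and after.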
I should maybe add that, in my opinion, you're not missing out on much by skipping fine-tuning. Proper prompting and retrieval beat fine-tuning almost every single time with very large models.