Has anyone had success fine-tuning 3.5t with RAG-based examples in the system prompt?
Use case:
I have a problem that's working reasonably well in GPT-4 with RAG-based examples supplied as recommendations in the prompt. I'm wondering if I can fine-tune away the need for those examples by fine-tuning 3.5t, or whether I'd be better off fine-tuning away my static prompt and continuing to pass the examples in the user prompt of the fine-tuned model (a rough sketch of the two layouts I mean is below). What factors are worth considering in this decision?
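To make the two options concrete, here's a minimal sketch of the training-data layouts I'm weighing, using OpenAI's chat fine-tuning JSONL format. The prompt strings and file name are placeholders for my actual data, not anything from a real dataset:

```python
import json

# Placeholders standing in for my real static prompt and a RAG-retrieved example.
STATIC_PROMPT = "You are an assistant that ... (long static instructions)"
retrieved_example = "Q: example input -> A: example output"

# Option A: fine-tune away the examples.
# Training rows contain no retrieved examples; at inference I'd drop RAG entirely.
option_a_row = {
    "messages": [
        {"role": "system", "content": STATIC_PROMPT},
        {"role": "user", "content": "actual user request"},
        {"role": "assistant", "content": "ideal answer"},
    ]
}

# Option B: fine-tune away the static prompt instead, keep RAG examples at inference.
# Training rows use a short system message and include retrieved examples in the user
# turn, matching what I'd send to the fine-tuned model later.
option_b_row = {
    "messages": [
        {"role": "system", "content": "short role reminder"},
        {"role": "user", "content": f"Examples:\n{retrieved_example}\n\nRequest: actual user request"},
        {"role": "assistant", "content": "ideal answer"},
    ]
}

# Each row becomes one line of the training JSONL file.
with open("train.jsonl", "w") as f:
    for row in (option_a_row, option_b_row):
        f.write(json.dumps(row) + "\n")
```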