Fine-tuning a model to use a base context

I have a FAQ (frequently asked questions) list and want to create a chatbot to answer questions about it. I have tested with embeddings and it works well… but it doesn't make sense to always send the retrieved context with every request, adding token cost for a simple question.
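For context, the embeddings approach I tested works roughly like this sketch: embed each FAQ entry once, then at question time find the closest entry and paste only that entry into the prompt. The vectors and entries below are toy placeholders (real embedding vectors from an embeddings API have many more dimensions):

```python
import math

# Toy FAQ with made-up 3-dimensional "embedding" vectors.
# In practice each vector would come from an embeddings API call.
faq = [
    {"a": "The price of the service is $15.", "vec": [0.9, 0.1, 0.0]},
    {"a": "You can cancel at any time from your account page.", "vec": [0.1, 0.9, 0.2]},
]

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_entry(question_vec):
    # Pick the FAQ entry whose embedding is closest to the question's embedding.
    return max(faq, key=lambda e: cosine(e["vec"], question_vec))

# A question about price would embed near the first entry's vector,
# so only that one short answer is sent as context, not the whole FAQ.
entry = best_entry([0.85, 0.15, 0.05])
prompt = f"Answer using this FAQ entry:\n{entry['a']}\n\nQuestion: How much does it cost?"
```

This is why the token cost stays small even with retrieval: only the single best-matching entry travels with each question, not the full FAQ.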

So I have tested fine-tuning a model, uploading a file with each question in the prompt and its answer in the completion… since it is a basic FAQ it has around 50 entries… but it doesn't work: when I ask the new model something, it doesn't use my FAQ to answer…
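For reference, the training file I mean is a JSONL file, one {"prompt": …, "completion": …} object per line; the separator and stop tokens below follow a common convention for this format, and the example pairs are hypothetical:

```python
import json

# Hypothetical FAQ pairs standing in for my ~50 real entries.
faq_pairs = [
    ("What is the price of the service?", "The price of the service is $15."),
    ("Can I cancel at any time?", "Yes, you can cancel at any time."),
]

# One JSON object per line: the question as the prompt, the answer as the
# completion. The "\n\n###\n\n" separator and trailing stop marker are a
# common convention for prompt/completion fine-tuning files.
lines = [
    json.dumps({"prompt": q + "\n\n###\n\n", "completion": " " + a + " END"})
    for q, a in faq_pairs
]
jsonl = "\n".join(lines)

# Round-trip check: every line parses back into a prompt/completion pair.
parsed = [json.loads(line) for line in jsonl.splitlines()]
```

With only ~50 such pairs, the fine-tune nudges the model's style far more than it stores the facts, which matches the behavior I'm seeing.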

I have searched about this and it is explained in different ways: context, memory… and the conclusion seems to be that it is currently not possible unless you send the context inside each new prompt.
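The "context inside the prompt" pattern people describe can be sketched like this (the function and template are illustrative, not any particular API):

```python
def build_prompt(context: str, question: str) -> str:
    # The model only "knows" what is inside the prompt, so the FAQ text
    # (or the relevant part of it) must be prepended to every question.
    return (
        "Use the following FAQ to answer the question.\n\n"
        f"FAQ:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt("The price of the service is $15.", "How much does it cost?")
```

Every request repeats the context, which is exactly the token cost the embeddings approach tries to minimize by sending only the closest entry.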

But I don't understand why, so if anyone can help I would be grateful.

Why can't I create a new model with more knowledge? If the davinci model understands when I send "The price of the service is $15" in the context, or can summarize a text… why can't I create a new model equal to it but with that knowledge inside, so that when I start it and ask any question about the previously summarized text, it answers using it?



Hi Oscarr!

Have you made any progress on the question above? I'm facing the same question myself.