Fine-tuning ada model but sometimes giving incorrect info

I fine-tuned the ada model with 1,000 question-answer pairs, and sometimes when I ask for the interest rate of Axis Bank it returns another bank's interest rate from the dataset,
but other times it gives the correct answer.

I used 11 epochs.

If I write the question as “interest rate in axis bank”, it gives the correct answer,

but if I write “axis bank interest rate?”, it gives me some other bank's rate.

Why is this happening?

Why don't you use a retrieval-augmented technique? Fine-tuning may be messing up the weights and biases.
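For example, here is a minimal retrieval sketch. Everything in it is a placeholder, not from your dataset: the bank names and rates are made up, and the embedding and chat model names are just ones I'd reach for. The idea is to embed each Q&A pair once, find the stored question closest to the incoming one, and pass only that pair to the model as context:

```python
# Minimal retrieval-augmented sketch. Data and model names are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

qa_pairs = [  # hypothetical subset of your 1,000-pair dataset
    ("What is the interest rate at Axis Bank?", "Axis Bank: 6.5%"),
    ("What is the interest rate at HDFC Bank?", "HDFC Bank: 7.0%"),
]

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

# Pre-compute one embedding per stored question.
doc_vecs = np.stack([embed(q) for q, _ in qa_pairs])

def answer(question: str) -> str:
    q_vec = embed(question)
    # Cosine similarity between the incoming question and every stored one.
    sims = doc_vecs @ q_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    _, best_answer = qa_pairs[int(sims.argmax())]
    prompt = f"Answer using only this context:\n{best_answer}\n\nQuestion: {question}"
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(answer("axis bank interest rate?"))
```

Since the model only ever sees the one retrieved pair, a rephrased question can't accidentally pull in another bank's rate the way a fine-tuned model can.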

Is this technique available from OpenAI?

No, you can use it with any model. In fact, now that I'm thinking about it again, I think you could use agents and tools for this. Look at the functions-as-tools feature: you can define a function that takes the bank name and returns the interest rate, and the LLM can then call that function. A rough sketch is below.
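Something like this, using the OpenAI tools API (the `get_interest_rate` function, the rate table, and the model name are all hypothetical):

```python
# Function-calling sketch: the model extracts the bank name, your code does
# the exact lookup, and the model phrases the final answer.
import json
from openai import OpenAI

client = OpenAI()

RATES = {"axis": "6.5%", "hdfc": "7.0%"}  # hypothetical lookup table

def get_interest_rate(bank_name: str) -> str:
    return RATES.get(bank_name.lower().split()[0], "unknown bank")

tools = [{
    "type": "function",
    "function": {
        "name": "get_interest_rate",
        "description": "Return the current interest rate for a bank",
        "parameters": {
            "type": "object",
            "properties": {"bank_name": {"type": "string"}},
            "required": ["bank_name"],
        },
    },
}]

messages = [{"role": "user", "content": "axis bank interest rate?"}]
resp = client.chat.completions.create(
    model="gpt-3.5-turbo", messages=messages, tools=tools
)
# Assumes the model chose to call the tool for this question.
call = resp.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)

# Run the function locally and hand the result back to the model.
messages.append(resp.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": call.id,
    "content": get_interest_rate(**args),
})
final = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(final.choices[0].message.content)
```

Because the rate comes from your own lookup table rather than the model's weights, the phrasing of the question no longer matters.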