Why doesn't the OpenAI API provide a fine-tuned response?

We have successfully created a fine-tuned model, yet it returns an incorrect response when prompted with a question taken directly from the training data.
Any advice would be much appreciated.

Here are the steps we have taken:

```json
{"prompt":"Where is TEST bank located?","completion":"123 Finance Street, Banking City, BK 12345, United States"}
{"prompt":"What types of accounts does TEST Bank offer?","completion":"checking, savings, business, etc."}
{"prompt":"What is the interest rate on your savings and checking accounts?","completion":"0.60% APY"}
```
```shell
openai tools fine_tunes.prepare_data -f TEST_Bank_Data.json
# >> modified file to `TEST_Bank_Data_prepared.jsonl`
openai api fine_tunes.create -t "TEST_Bank_Data_prepared.jsonl" -m davinci
# >> succeeded
openai api completions.create -m davinci:ft-chang-2023-05-23-02-54-08 -p "Where is TEST bank located?"
# >> Where is TEST bank located? Northwestern Mutual and Vanguard do not have branches in all states. Some states,
```
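One thing worth checking: `prepare_data` typically rewrites the file to end each prompt with a fixed separator and each completion with a stop sequence, and the fine-tuned model only behaves as trained when inference prompts use the exact same formatting. The sketch below illustrates the idea; the separator `" ->"` and stop `"\n"` are assumptions, so check `TEST_Bank_Data_prepared.jsonl` for the actual strings it inserted.

```python
# Sketch: format inference prompts the same way prepare_data formatted
# the training prompts. SEPARATOR and STOP are assumed values -- read
# them from TEST_Bank_Data_prepared.jsonl before relying on this.

SEPARATOR = " ->"   # suffix prepare_data appends to every prompt (assumed)
STOP = "\n"         # sequence marking the end of every completion (assumed)

def build_prompt(question: str) -> str:
    """Return a prompt matching the training-time format."""
    return question + SEPARATOR

# With the legacy (pre-1.0) openai Python library, the query would then
# look roughly like this (not run here):
#
#   import openai
#   resp = openai.Completion.create(
#       model="davinci:ft-chang-2023-05-23-02-54-08",
#       prompt=build_prompt("Where is TEST bank located?"),
#       stop=STOP,
#       temperature=0,
#   )
#   print(resp["choices"][0]["text"])
```

Passing `stop=STOP` also prevents the model from rambling past the answer, which is what the "Northwestern Mutual and Vanguard" continuation above looks like.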

I could be wrong, but I don't believe fine-tuning is designed to give factually correct answers. It works for biasing the model toward giving certain kinds of answers, e.g. responding politely when someone cusses at it, or answering in a certain language. It would likely remain prone to hallucination.

There are tools like Voiceflow that can gauge intent: someone can ask variations of "how much interest" and you can have it respond with a canned "savings and checking accounts earn 0.60% APY", or have GPT rephrase that answer.

There are also the various "Chat with PDF" approaches, which seem better suited to customer-service-style responses. Load the facts into a PDF and have it answer from there?

If you want answers grounded in specific data, you should try embeddings.
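The embeddings approach works by embedding each fact once, embedding the incoming question, and answering from the most similar fact. Here is a minimal sketch; the bag-of-words `embed` function is a stand-in for a real embedding model (e.g. OpenAI's embeddings endpoint), and the fact list just restates the training data above.

```python
# Retrieval sketch: pick the fact most similar to the question by
# cosine similarity. The word-count "embedding" is a toy stand-in for
# a real embedding model.

import math
from collections import Counter

FACTS = [
    "TEST Bank is located at 123 Finance Street, Banking City, BK 12345, United States.",
    "TEST Bank offers checking, savings, and business accounts.",
    "Savings and checking accounts earn 0.60% APY.",
]

def embed(text: str) -> Counter:
    """Toy embedding: a word-count vector (swap in a real model here)."""
    return Counter(text.lower().replace("?", "").replace(".", "").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

FACT_VECTORS = [embed(f) for f in FACTS]  # embed each fact once, up front

def best_fact(question: str) -> str:
    q = embed(question)
    sims = [cosine(q, v) for v in FACT_VECTORS]
    return FACTS[sims.index(max(sims))]
```

The retrieved fact can then be pasted into a completion prompt ("Answer using only this context: ...") so the model responds from your data rather than from memory.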

Fine-tuning is for structure, not content.


I’ll give it a try.
Thanks for your guidance.