I’m trying to build my own chatbot for food-ordering customer service.
I have my own restaurant menu, which I converted into .jsonl training and validation files, and I have already fine-tuned a GPT model. But when I use my trained model and ask it about the restaurant name, it responds that it doesn’t know. Why? And how can I make the fine-tuned model an expert at answering the questions I trained it on?
Welcome to the Dev Community!
This is a common mistake; here’s what the API documentation has to say about it:
When should I use fine-tuning vs embeddings / retrieval augmented generation?
Embeddings with retrieval is best suited for cases when you need to have a large database of documents with relevant context and information.
By default OpenAI’s models are trained to be helpful generalist assistants. Fine-tuning can be used to make a model which is narrowly focused, and exhibits specific ingrained behavior patterns. Retrieval strategies can be used to make new information available to a model by providing it with relevant context before generating its response. Retrieval strategies are not an alternative to fine-tuning and can in fact be complementary to it.
You can explore the differences between these options further in our Developer Day talk.
An easier way to think of it: fine-tuning changes how the model talks, while RAG changes what it knows.
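For context on the fine-tuning side you already tried: each line of the .jsonl training file is one complete chat example in the format below. A sketch of building one such line (the menu content is hypothetical):

```python
import json

# One fine-tuning example = one JSON object per line of the .jsonl file,
# in chat format. This teaches tone and behavior, not a fact database.
example = {
    "messages": [
        {"role": "system", "content": "You are the ordering bot for Mario's Pizzeria."},
        {"role": "user", "content": "Do you have vegan options?"},
        {"role": "assistant", "content": "Yes, the Veggie Supreme can be made vegan."},
    ]
}

# Append one such line per example to your training file.
line = json.dumps(example)
```

Even with hundreds of lines like this, the model learns the *style* of answering, which is why a fact like the restaurant name can still fail to stick.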
For your purposes you’d probably be better off with the Assistants API: you can upload a .txt file with your menu and fiddle with the instructions until you get closer to what you’re looking for.
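A minimal sketch of that setup with the OpenAI Python SDK (`pip install openai`): upload the menu, put it in a vector store, and create an assistant with the file_search tool. The file name `menu.txt`, the model choice, and the instructions are all assumptions; running it requires `OPENAI_API_KEY` in your environment, and the vector-store method name may differ in older SDK versions.

```python
INSTRUCTIONS = (
    "You are the ordering assistant for our restaurant. Answer only "
    "from the attached menu file; if something is not on the menu, "
    "say so instead of guessing."
)

def assistant_params(vector_store_id: str) -> dict:
    """Keyword arguments for client.beta.assistants.create()."""
    return {
        "name": "Food Ordering Bot",
        "instructions": INSTRUCTIONS,
        "model": "gpt-4o",
        "tools": [{"type": "file_search"}],
        "tool_resources": {
            "file_search": {"vector_store_ids": [vector_store_id]}
        },
    }

def create_menu_assistant(menu_path: str = "menu.txt"):
    from openai import OpenAI  # deferred so the sketch loads without the SDK
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    # Upload the menu, then index it so file_search can retrieve from it.
    menu_file = client.files.create(file=open(menu_path, "rb"), purpose="assistants")
    store = client.vector_stores.create(name="menu", file_ids=[menu_file.id])
    return client.beta.assistants.create(**assistant_params(store.id))

if __name__ == "__main__":
    print("created assistant:", create_menu_assistant().id)
```

The instructions are where you do the “fiddling”; tightening them (e.g. “answer only from the menu”) is what keeps the bot from improvising.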
Just keep in mind that these systems are far from fool-proof and there’s always the chance it might offer something for free if someone gaslights it hard enough.
Is there any code or method I can use to fine-tune an OpenAI assistant with my data?
And code to run it on my local machine?