I’m looking for guidance on how to create and fine-tune my own GPT model using the OpenAI API. My goal is to train the model on a specific dataset to adapt it to a specialized domain - in my case healthcare, as I am a general practitioner (GP). I’d like to ensure that the model can consistently access and use this data without needing frequent updates.
Could anyone advise on the best approach to fine-tuning, recommended tools or practices, and how to set up a seamless integration for continuous use of the model? Any tips on optimizing model performance for domain-specific tasks would be highly appreciated - I am not an expert in informatics.
OpenAI has a number of great resources to get started with fine-tuning, both guides and practical examples. Here is a list of the key resources:
- Core guide to fine-tuning, which details when to use fine-tuning, the models eligible for fine-tuning, the required dataset and data format, the different parameters to consider, as well as how to use and optimize a fine-tuned model
- Guide on accuracy optimization, which compares and contrasts different approaches for optimizing LLM accuracy, including fine-tuning, retrieval-augmented generation (RAG) and prompt engineering - it’s great for getting a better understanding of when fine-tuning is actually suitable
- API specifications for fine-tuning, which break down the API requests for creating a fine-tuning job and other related operations, and include further examples of the training data formats (see the sketch after this list)
- OpenAI cookbook example showing how to carry out fine-tuning in practice, exemplified for an entity extraction task
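To make the data format and the job creation step more concrete, here is a minimal sketch using the openai Python package (v1.x). The file name, example contents and model snapshot are placeholders of my own, and the snippet assumes you have set the `OPENAI_API_KEY` environment variable:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Training data must be a JSONL file with one chat example per line, e.g.:
# {"messages": [{"role": "system", "content": "You are a helpful assistant."},
#               {"role": "user", "content": "What does GP stand for?"},
#               {"role": "assistant", "content": "General practitioner."}]}

# 1. Upload the training file (file name is a placeholder)
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Create the fine-tuning job
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)

# 3. Check the job status (it runs asynchronously)
print(client.fine_tuning.jobs.retrieve(job.id).status)
```

Once the job succeeds, the retrieved job object includes a `fine_tuned_model` identifier that you can pass as the `model` parameter in regular chat completion calls.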
OpenAI also has a very intuitive user interface for fine-tuning, so in practice you don’t even need to write a line of code to fine-tune a model if you don’t want to.
In any case, once you’ve had a look at these resources, just let us know if you’ve got any further specific questions.
Finally, can I just clarify: what is the main objective you are looking to achieve with fine-tuning?
P.S.: There are two days left to benefit from free training tokens for fine-tuning gpt-4o and gpt-4o-mini.
Hi Jen, and thank you for the answer.
Answering your question “what is the main objective you are looking to achieve with fine-tuning?” - I would like to create my own GPT that I can teach some material (healthcare/GP), so that I can always ask it about that content without having to upload it every single day. I am not a tech pro, and as I understood it, I would need fine-tuning to achieve this goal.
Thank you for your support
regards
Erwin
Well, in that case I’m sorry to disappoint you, but fine-tuning is actually not the right approach. Fine-tuning is not designed to inject new knowledge into the model. Instead, it is intended to get the model to respond in a certain style or format, or to execute tasks in a specific way.
You should be looking at Retrieval-Augmented Generation (RAG) instead.
To put all of this in context, I again suggest reading the guide on accuracy optimization linked above, which discusses which approaches are suitable depending on the objective you are pursuing.
You have a couple of different options in an OpenAI environment for achieving your goal. If you are looking for a straightforward, no-code solution, then custom GPTs are the way to go. They allow you to upload knowledge files and then ask the model specific questions about them.
A more advanced option, which will eventually require coding, is OpenAI’s Assistants API. Here you also have the option to upload files for storage in a so-called vector store, which then forms the basis for generating a response to a specific question.
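If you do go down the Assistants route one day, a minimal sketch of the file-search setup with the openai Python package (v1.x, beta namespace) might look like the following; the store name, file path, instructions and question are all placeholders of my own:

```python
from openai import OpenAI

client = OpenAI()

# Create a vector store and upload a knowledge file into it
# (store name and file path are placeholders)
vector_store = client.beta.vector_stores.create(name="gp-knowledge")
with open("guidelines.pdf", "rb") as f:
    client.beta.vector_stores.files.upload_and_poll(
        vector_store_id=vector_store.id, file=f
    )

# Create an assistant that can search that vector store
assistant = client.beta.assistants.create(
    name="GP Assistant",
    instructions="Answer questions using the uploaded healthcare documents.",
    model="gpt-4o-mini",
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)

# Ask a question in a thread and run the assistant on it
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What are the red flags for acute low back pain?",
)
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id, assistant_id=assistant.id
)

# Messages are returned newest first, so the reply comes back at index 0
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```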
Note that in neither of these two cases does the model actually “learn” new information. Instead, information from the knowledge files is retrieved using semantic search and then incorporated into the context when the model generates a response to a given query.
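To illustrate that retrieval mechanism itself, here is a bare-bones sketch of semantic search over a few text snippets using OpenAI embeddings. The snippets, model choices and prompt wording are my own assumptions for demonstration, not part of either product above:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

# Tiny stand-in knowledge base (placeholder healthcare snippets)
docs = [
    "Adults should have their blood pressure checked at least every few years.",
    "Type 2 diabetes is initially managed with lifestyle changes and metformin.",
    "A cough lasting more than three weeks warrants further assessment.",
]

def embed(texts):
    """Embed a list of texts with OpenAI's embeddings endpoint."""
    response = client.embeddings.create(
        model="text-embedding-3-small", input=texts
    )
    return np.array([item.embedding for item in response.data])

doc_vectors = embed(docs)

question = "How is type 2 diabetes treated at first?"
q_vector = embed([question])[0]

# Cosine similarity between the question and each document
scores = doc_vectors @ q_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vector)
)
best_doc = docs[int(np.argmax(scores))]

# The retrieved text is injected into the context; the model never "learns" it
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Answer using this context: {best_doc}"},
        {"role": "user", "content": question},
    ],
)
print(completion.choices[0].message.content)
```

This is exactly the pattern custom GPTs and the Assistants API implement for you behind the scenes, just at a much larger scale and with proper chunking of the documents.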