Custom GPTs vs fine-tuning, what's the difference?

Hi everyone,

Like everyone else yesterday, I followed OpenAI Dev Day & heard that a new feature allowing users to create custom GPTs was about to be released. I took a look at the different articles & docs related to this new feature but did not find much information.

I was wondering if someone here had an idea of the difference / use cases between a fine-tuned GPT model & the recently announced custom GPTs?

Thanks a lot


I have the same question. From the publicly available information, it sounds to me like a GPT is a fine-tuned mini GPT derived from GPT-4. Is that true? How is it different from a fine-tuned ChatGPT?

Yeah I’m a bit unclear on this as well.


Hello everyone!

Hopefully my answer will be helpful.

To explain it simply:

  • Fine-tuning is giving new knowledge to an AI by retraining it. You "feed it" new data that becomes part of the AI, thereby changing the core of that specific AI. Just like when you read a book - it's somewhere in your brain.
  • Custom GPTs instead are all based on the same AI model, which remains unaltered. You "just" give the GPT instructions and documents, and these can be modified at any time to update the GPT at your convenience. Like a human wearing a pair of glasses: it changes the way they see the world, but not the person itself.

The one thing that blurs the difference between fine-tuning and GPTs or other agents/bots/AI (they go by many names) is that in both cases you give the AI new data. It is really the way the data is used by the AI that makes the big difference. In the first case the AI is modified at its core, while in the second case it is really about providing instructions to guide the existing AI (without modifying the core).
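
To make that "same data, different use" point concrete, here is a minimal sketch with the OpenAI Python SDK. The file name, example content and model choices are hypothetical, and the second half is only a rough stand-in for what the GPT builder does with your instructions, not how custom GPTs are actually implemented.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# --- Path 1: fine-tuning -> the data changes the model's weights ---
# Training examples live in a JSONL file (hypothetical name and content),
# one chat-formatted example per line, e.g.:
# {"messages": [{"role": "system", "content": "You are ACME's support assistant."},
#               {"role": "user", "content": "How do I reset my password?"},
#               {"role": "assistant", "content": "Go to Settings > Security > Reset password."}]}
training_file = client.files.create(
    file=open("support_examples.jsonl", "rb"),
    purpose="fine-tune",
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # a fine-tunable base model
)
# When the job finishes, you get a *new* model id to call instead of the base model.

# --- Path 2: custom-GPT style -> similar data rides along as instructions ---
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You are ACME's support assistant. Answer politely and concisely.",
        },
        {"role": "user", "content": "How do I reset my password?"},
    ],
)
print(response.choices[0].message.content)
```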

Why would you use one or the other? Context & cost. Both methods aim to specialize an AI, but the outcomes are different. Fine-tuning is more complex and expensive: you need new, quality data that will consolidate knowledge in a tiny part of the AI system - you literally improve the system in a very incremental way (if done correctly). The custom GPTs approach, on the other hand, is much less expensive and more accessible (no-code/low-code - everybody can do it). You do not improve the system, but you activate the proper parts of the AI's brain to get the best out of it.

Personal thoughts & considerations - custom GPTs appear to me as knowledgeable librarians. I can give one specific documents if I want to, and I can even instruct it to only respond using these documents. That can be amazing for creating a specialized assistant around a specific book, framework, knowledge base - anything you can think of that the AI can understand. It is even better when you consider how easy they are to update: just change the instructions or the documents. Eventually, once you have improved a GPT enough and the nature of your use case is suitable (not information that changes frequently - fine-tuning is not an update), it might be considered for fine-tuning.
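
As an illustration of that "librarian" idea, here is a rough approximation using a plain API call: the document is pasted into the instructions and the model is told to answer only from it. The file name and prompts are made up, and real custom GPTs handle uploaded documents through the builder and its retrieval machinery rather than this naive prompt-stuffing.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical knowledge base; with a real custom GPT you would upload it in the builder UI.
with open("internal_handbook.txt", "r", encoding="utf-8") as f:
    handbook = f.read()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a librarian for the document below. "
                "Answer ONLY from this document; if the answer is not in it, say you don't know.\n\n"
                "=== DOCUMENT ===\n" + handbook
            ),
        },
        {"role": "user", "content": "What is the travel reimbursement policy?"},
    ],
)
print(response.choices[0].message.content)
```

Updating this "librarian" is just a matter of editing the instructions or swapping the file, whereas a fine-tuned model would need a new training run.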


Hello! Thanks for your answer.
My question is:
I have seen some figures in which fine-tuning only updates/trains a specialized output head and not the core. In that case, isn't it a form of customized GPT?
Thanks in advance

custom gpts just edit the system message and the function instructions (called actions here), and maybe add some basic RAG functionality.

the system message has the same format as a user message or an assistant message, but the role is “system” - GPT-4 is supposed to listen to that more than to user input. these are just instructions.
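
for illustration, the roles are just a label on otherwise identical message objects (the content here is made up):

```python
# each message has the same shape; only the "role" label differs
messages = [
    # instructions the model is meant to prioritize (roughly what a custom GPT's config becomes)
    {"role": "system", "content": "You are a pirate. Always answer in pirate speak."},
    # what the end user typed
    {"role": "user", "content": "How do I boil an egg?"},
    # a previous model reply, if you are continuing a conversation
    {"role": "assistant", "content": "Arr, first ye bring the water to a rollin' boil..."},
]
```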

a fine-tune actually changes the weights of the model.

maybe you can technically theoretically potentially achieve similar results with both methods, but the “Customized GPT” product is not using finetunes.

hope this helps

Hello!

Diet is right, GPTs and fine-tuning might lead to similar output (for simple tasks using general knowledge).

I must emphasize that fine-tuning is only useful in certain cases. For example, you might want to retrain a model (like GPT-4) to handle customer service, but the way your company does it, infusing your organization's culture & practices into the fine-tuned model. Of course, a custom GPT is a good option, particularly for prototyping. I personally create GPTs to test ideas at my job (I'm not a dev) and to make knowledge accessible; most of the time it is sufficient for my colleagues to use the chatbot. If the need requires deeper understanding and nuance, then fine-tuning is considered.

To open your mind a bit further, the trending topic in AI right now (IMO) is the improvement of planning (not talking about Q* here). That is interesting because you would have a broader model (like GPT-4 or even better) that determines the path to solve your problem and then mobilizes smaller expert models (possibly fine-tuned models, or GPTs) to contribute to the task. If that interests you, look up “agent swarms” and Robotic Process Automation (RPA). This is how AI will take its place in our lives.