Fine-Tuning vs Prompt Engineering vs Plug-ins | suitable usage for each

After reading numerous search results, it appears that "fine-tuning" involves updating a language model's parameters, while "prompt engineering" relies on temporary, in-context learning at inference time. Some business applications may not require fine-tuning or prompt engineering at all, since ChatGPT's functionality can be achieved by simply "plugging in" data from the company's database.

With an appropriate "prompt", we can obtain excellent answers, similar to those obtained by fine-tuning the model. Additionally, based on my understanding, "plug-in" services may produce results that are almost identical to those obtained by fine-tuning on a specific dataset.

However, I am still curious about the circumstances in which fine-tuning, prompt engineering, or plug-ins are most beneficial (e.g. which business cases, which IT resource conditions, what data size, etc.).

In conclusion, I am seeking to understand the optimal conditions for each of the following methods: fine-tuning, prompt engineering, and plug-ins.

Thank you for your assistance.
Best,
JW


Great questions!! I am also researching this!
If anybody could provide tips or recommend good reference articles, it would be really appreciated!


Fine-tuning: Fine-tuning is best when you will be using GPT for a specific task repeatedly and the model does not need to know much beyond what the fine-tuning provides. However, since fine-tuning is only available on the lower (base) models, the output will reflect the fine-tuning data more than the model's innate knowledge, and the quality of the generated language may not be as good as that of the newer models.
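As a rough sketch of what a fine-tuning job looked like at the time of writing (the completions-style API on base models; the file name, training data, and API key are placeholders):

```python
import openai  # pre-1.0 openai-python style assumed here

openai.api_key = "sk-..."  # placeholder API key

# Training data is a JSONL file of prompt/completion pairs, for example:
# {"prompt": "Classify the ticket: 'My invoice is wrong' ->", "completion": " billing"}
# {"prompt": "Classify the ticket: 'App crashes on login' ->", "completion": " technical"}

# Upload the training file, then start a fine-tune job on a base model.
training_file = openai.File.create(
    file=open("tickets.jsonl", "rb"),
    purpose="fine-tune",
)

job = openai.FineTune.create(
    training_file=training_file.id,
    model="davinci",  # fine-tuning was limited to base models (ada, babbage, curie, davinci)
)
print(job.id)  # poll the job until it finishes, then use the resulting model name
```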

Prompt engineering: Prompt engineering offers more flexibility in getting the output you want from GPT. While the output may not be fully deterministic even at lower temperatures, the prompt can be changed and tinkered with at whim. With an 8,000-token context window, some context can be maintained, and you can use the newer models.
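For illustration, a minimal prompt-engineering sketch (pre-1.0 openai-python chat call; the model name, instructions, and pasted context are made-up placeholders):

```python
import openai

openai.api_key = "sk-..."  # placeholder API key

# The "engineering" lives entirely in the messages: instructions, examples,
# and any context you paste in. The model itself is unchanged.
context = "Acme Corp support hours are 9am-5pm CET, Monday to Friday."  # placeholder fact

response = openai.ChatCompletion.create(
    model="gpt-4",   # newer models stay available because nothing is fine-tuned
    temperature=0,   # lower temperature reduces, but does not eliminate, variation
    messages=[
        {"role": "system",
         "content": "Answer only from the provided context. If the answer is not there, say you don't know."},
        {"role": "user",
         "content": f"Context:\n{context}\n\nQuestion: Can I call support on Saturday?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```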

Plug-ins: I don't have much experience with them, so I won't comment.

I do have some experience with “plugins”, so expanding on what UDM17 said:

"Plugins" can provide models with access to external information, such as allowing a model to search the web, use a calculator to solve problems, or access a company's database. The use cases for plugins are pretty much endless, but a very common one is when you need the model to have access to a company's database.

I deliberately put "plugins" between quotation marks because, although OpenAI will soon be releasing the ChatGPT plugins (some people might already have access), the functionality of these "plugins" can already be achieved. It is basically just hooking the model up to some kind of external tool. You could, for example, let the model trigger a piece of code that searches a database when the user asks something about the company.
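A very simplified sketch of that idea (the database, table, lookup function, and question are hypothetical placeholders; real plugins add a routing step where the model itself decides when to call the tool):

```python
import sqlite3
import openai

openai.api_key = "sk-..."  # placeholder API key

def search_company_db(query: str) -> str:
    """Hypothetical external tool: look up matching rows in a local company database."""
    conn = sqlite3.connect("company.db")  # placeholder database
    rows = conn.execute(
        "SELECT name, price FROM products WHERE name LIKE ?", (f"%{query}%",)
    ).fetchall()
    conn.close()
    return "\n".join(f"{name}: {price} EUR" for name, price in rows) or "No results."

user_question = "How much does the Widget Pro cost?"

# The "plugin" step: fetch external knowledge first, then hand it to the model.
db_results = search_company_db("Widget Pro")

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer using only the database results provided."},
        {"role": "user", "content": f"Database results:\n{db_results}\n\nQuestion: {user_question}"},
    ],
)
print(response["choices"][0]["message"]["content"])
```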

Plugging in external tools is, by the way, not at all similar to fine-tuning. With fine-tuning, the model won't "learn" any new knowledge; it just learns to respond in a certain way (for example, in your style of writing). By plugging in external resources/tools it also won't learn anything, but it will have external knowledge available. It is similar to giving a student their textbook during a test. Without it, they might not do very well and may make things up when they don't know the answer. With the textbook, they can look up the information before answering.

I hope this helps!


I like the student analogy you started. I often like to think of an LLM as a small child or a young student.

With that, let's say the student is taking an exam on a certain subject.

Plugins: Giving the student the textbook for the exam. They need to understand the exam question, figure out what information to look up in the textbook, find it, process it, and then answer the question.

Prompt engineering: The exam is full of paragraph-based questions. You are giving the student a specific piece of context on the subject and asking them to use their language and reasoning abilities to answer the question. It's on you to give them the right context - the answer must be in the paragraph.

Fine-tuning: You are coaching the student the night before the exam, making them an expert in a certain topic, potentially at the expense of their skills in other areas. For example, if you prep them for a spelling bee, they might get really good at that but slower at math.

Just like for a student or a child, one or more of the above can also be combined.


Fine-tune → Categorization, Filters
Prompt Engineering → Live info, your own facts, aka embeddings
Plug-ins → Extending the AI to another SaaS offering, or the generic built-in ones.
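As a rough sketch of the embeddings approach mentioned above (the documents, question, and helper function are placeholders; the embedding and chat calls use the pre-1.0 openai-python style):

```python
import numpy as np
import openai

openai.api_key = "sk-..."  # placeholder API key

# Your own facts, kept outside the model.
documents = [
    "Acme returns are accepted within 30 days of purchase.",
    "Acme support is available 9am-5pm CET on weekdays.",
]

def embed(texts):
    """Embed a list of strings with OpenAI's text-embedding-ada-002 model."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

doc_vectors = embed(documents)

question = "What is the return policy?"
q_vector = embed([question])[0]

# Cosine similarity: pick the most relevant fact and paste it into the prompt.
scores = doc_vectors @ q_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vector)
)
best_doc = documents[int(np.argmax(scores))]

answer = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer using only the provided fact."},
        {"role": "user", "content": f"Fact: {best_doc}\n\nQuestion: {question}"},
    ],
)
print(answer["choices"][0]["message"]["content"])
```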