Function calling vs. Fine tuning vs. external embedding vector files... PROS and CONS?

So I am exploring the different tools available in OpenAI to create a bot that can answer questions about additional information that is given to it.

The question is… what is the best way to introduce information to the bot so that it can answer questions about it properly?

I haven’t even looked at feeding a PDF document to the bot yet…

I would think that at the end of the day most applications require/want additional data to be fed to the bot, and it seems like there are so many options. Setting aside the fact that this is a newer technology and things keep changing, what would you say are the pros and cons of each option, and/or when would you want to use each one?

A few weeks ago, when I started investigating AI, I asked how a bot could answer detailed questions about a restaurant, for example. I was then introduced to vector files and embeddings (which make sense given how the technology works), but since then I have seen other options as well.

The basic rule: any time you think fine-tuning is the way to go, you will almost always be incorrect unless you understand what it actually does; in most cases you should use embeddings instead.

A simple analogy that made it click in my head when I first encountered them:

If you fine-tune a model on an author's works, the resulting model will be able to write new books in the author's style, but it will know almost nothing about the author's past works. Embedding the author's past works, on the other hand, lets you pull back passages that are semantically similar to any given input query; those passages can then be supplied as context to improve and narrow down the model's responses.
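To make the embeddings side of the analogy concrete, here is a minimal sketch of semantic retrieval. It assumes each passage has already been converted to an embedding vector by some embedding model; the three-dimensional vectors, the passage texts, and the `retrieve` helper are all hypothetical toy stand-ins (real embeddings have hundreds or thousands of dimensions).

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "vector store": passage text -> pre-computed embedding (made up here).
passages = {
    "Opening hours are 9am to 11pm daily.": [0.9, 0.1, 0.0],
    "The house special is wood-fired pizza.": [0.1, 0.9, 0.2],
}

def retrieve(query_vector, top_k=1):
    # Rank stored passages by similarity to the query embedding
    # and return the most similar ones.
    ranked = sorted(passages.items(),
                    key=lambda kv: cosine_similarity(query_vector, kv[1]),
                    reverse=True)
    return [text for text, _ in ranked[:top_k]]

# A query embedding close to the "hours" passage pulls that passage back;
# it can then be pasted into the prompt as context for the model.
print(retrieve([0.8, 0.2, 0.1]))
```

The retrieved passage is what gets prepended to the prompt — the model itself is never retrained, which is the key difference from fine-tuning.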
