Can someone explain to a 5-year-old what embeddings can do that fine-tuning cannot? What tasks can they both do? And what tasks can one do that the other cannot?
OK, for a 12-year-old maybe?
Fine-tuning takes the reply of a chatbot and changes it.
Embeddings represent text as numbers so we can search, select, classify, or group similar text.
The Embeddings API returns a vector - a list of numbers like this:
[ -0.006929283495992422, -0.005336422007530928, ... -4.547132266452536e-05, -0.024047505110502243 ],
It measures the relatedness of text strings. You can use it to implement services like recommendation engines, search, and classification.
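To make "relatedness" concrete, here is a minimal sketch of the cosine-similarity calculation typically applied to embedding vectors. The vectors here are made-up 3-dimensional toys (real Embeddings API vectors have on the order of 1,500 dimensions), so only the comparison pattern matters, not the numbers:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|), ranges from -1 to 1.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings of three texts.
cat = [0.9, 0.1, 0.2]
kitten = [0.85, 0.15, 0.25]
car = [0.1, 0.9, 0.3]

print(cosine_similarity(cat, kitten))  # close to 1: related texts
print(cosine_similarity(cat, car))     # noticeably smaller: less related
```

The higher the cosine similarity between two embedding vectors, the more related the two texts are, which is what powers search, recommendation, and clustering.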
Fine-tuning returns a bot's text response, improved by the examples you provide during training.
I recommend this post to learn more about embeddings: Introducing Text and Code Embeddings
Is it correct that fine-tuning a model does not change the code used to interact with the API, other than switching which model is called? Whereas to take advantage of embeddings, you can no longer have very simple code that merely sends a prompt and gets a response; instead you have to do the cosine calculations and a lot more overhead to get a response from the model?
Yes, you can see sample requests here: OpenAI API
I recommend this notebook: openai-cookbook/Question_answering_using_embeddings.ipynb at main · openai/openai-cookbook · GitHub. It perfectly describes why Embeddings are useful and what an example implementation looks like.
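For a rough idea of the extra steps that notebook walks through, here is a sketch of the retrieval half of embeddings-based Q&A. The `embed()` function is a placeholder (a letter-frequency count so the example runs offline); in a real setup it would be a call to the Embeddings API, and the document vectors would be computed once and cached:

```python
import math

def embed(text):
    # Placeholder for a real Embeddings API call: a 26-dimensional
    # letter-frequency vector, just so this example runs offline.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "Fine-tuning trains the model on example prompt/response pairs.",
    "Embeddings turn text into vectors for search and classification.",
]

def best_match(question):
    # Rank stored documents by similarity to the question vector.
    q = embed(question)
    return max(documents, key=lambda d: cosine(q, embed(d)))

# The top match is then prepended to the prompt as context before
# asking the completion endpoint to answer the question.
context = best_match("How do I search my documents with vectors?")
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
```

So the "extra overhead" is mostly: embed the documents once, embed each incoming question, rank by cosine similarity, and pass the best chunks into an otherwise ordinary prompt.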
Thanks all. Isn’t it true that you can formulate every search problem as Q&A?
Search: Bill Gates’s car make.
Question: What is Bill Gates’s car make?
If so, when do you use embeddings and when do you use Q&A for the same problem?
This thread answers my question: Finetuning for Domain Knowledge and Questions - #8 by ic202
You can check out this video: OpenAI Q&A: Finetuning GPT-3 vs Semantic Search - which to use, when, and why? - YouTube