While fine-tuning, we give JSONL-formatted text along with hyperparameters like the number of epochs.
But I was wondering which embedding is used internally.
Please help me understand this.
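For context, the JSONL file mentioned above is just one prompt/completion pair per line (the legacy OpenAI fine-tuning format); no embeddings appear anywhere in it. A minimal sketch, with made-up example pairs and the `\n\n###\n\n` / stop-token conventions the fine-tuning docs suggest:

```python
import json

# Hypothetical training pairs in the (legacy) prompt/completion format.
pairs = [
    {"prompt": "Capital of France?\n\n###\n\n", "completion": " Paris END"},
    {"prompt": "Capital of Japan?\n\n###\n\n", "completion": " Tokyo END"},
]

# JSONL = one JSON object per line; this is the file you upload.
jsonl_text = "\n".join(json.dumps(p) for p in pairs)

# Round-trip check: each line parses back to the original pair.
parsed = [json.loads(line) for line in jsonl_text.splitlines()]
```

Note that everything in the file is plain text; any conversion to vectors happens on the server side.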
Do you mean the embedding model or the fine tunes?
Fine-tunes are done on base models, i.e. the ones with no “instruction following training” — so not text-davinci-003, but davinci.
Embeddings are typically performed with a text embedding model such as text-embedding-ada-002
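To illustrate what those embedding vectors are for: text-embedding-ada-002 returns a 1536-dimensional vector per input, and a typical use is comparing two texts by cosine similarity. A sketch with short toy vectors standing in for real API output (the values are made up):

```python
import math

# Toy 4-dimensional vectors standing in for real text-embedding-ada-002
# output (which would be 1536-dimensional).
emb_a = [0.1, 0.3, 0.5, 0.2]
emb_b = [0.1, 0.25, 0.55, 0.15]

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return dot / (norm_u * norm_v)

score = cosine_similarity(emb_a, emb_b)
```

This is the "embeddings" product: you get the vectors out and do the similarity math yourself, which is separate from fine-tuning.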
I know that fine-tuning is done on davinci, but we don't convert the text to embeddings; we just give the text directly.
Perhaps I misunderstood your question — can you try asking it in a different way?
When we train or fine-tune any language model, we give embeddings to the model, but with OpenAI's davinci we don't provide embeddings while fine-tuning. Why is that?
I think the system is meant to be as simple as possible for new users, so just giving it new prompt/reply pairs is simpler.
Yes, it is simpler, but how does the model interpret those prompt/reply pairs without embeddings?
Exactly what goes on after that point has not been made public, so nobody knows.
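That said, conceptually every transformer LM does the same first step internally: the text is tokenized into integer ids, and each id selects a row of a learned embedding matrix. A toy sketch of that lookup (the vocab and matrix values here are made up; real models learn them during training):

```python
# Made-up toy vocabulary and 2-dimensional embedding matrix.
vocab = {"the": 0, "cat": 1, "sat": 2}
embedding_matrix = [
    [0.1, 0.2],  # row for "the"
    [0.3, 0.4],  # row for "cat"
    [0.5, 0.6],  # row for "sat"
]

def embed(text):
    """Tokenize by whitespace, then look up each token's embedding row."""
    token_ids = [vocab[tok] for tok in text.split()]
    return [embedding_matrix[i] for i in token_ids]

vectors = embed("the cat sat")
```

So the user never supplies embeddings because the embedding layer is part of the model itself; fine-tuning just updates those weights (or some of them) behind the API.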
With OpenAI, embeddings and fine-tuning are separate features: at any given time we can either work with embeddings from a base model, or fine-tune — but we can't touch the embeddings during fine-tuning.
Whereas with BERT we can even see the layers and tweak the embeddings while fine-tuning.