Embeddings with domain specific texts

I have a question about word embeddings with highly domain-specific texts:
As I understand it, embeddings rely on the data the model was trained on.
If I want to build a Q&A system based on domain knowledge that wasn't presented to the model during training, will I still get decent enough embeddings for the question and answer pairs?

Yes, you definitely will. Embedding models are trained on broad corpora, so they still produce useful semantic representations for text from domains they haven't explicitly seen.
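To make that concrete, here is a minimal sketch of how embedding-based Q&A retrieval works: embed the question, embed the candidate answers, and pick the answer whose vector is closest by cosine similarity. The vectors below are hypothetical stand-ins — in practice they would come from an embedding model (e.g. an embeddings API), which is the assumption here.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embedding vectors; a real system would call an
# embedding model to produce these from the raw text.
question_vec = [0.9, 0.1, 0.0]
answer_vecs = {
    "relevant answer": [0.8, 0.2, 0.1],
    "unrelated answer": [0.0, 0.1, 0.9],
}

# Retrieve the answer whose embedding is most similar to the question.
best = max(answer_vecs, key=lambda k: cosine_similarity(question_vec, answer_vecs[k]))
```

Because the comparison is purely geometric, this works with any embedding model regardless of whether your domain texts appeared in its training data — the model only needs to place semantically similar texts near each other.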


Thanks for the answer, that's encouraging.
And as I understand it, that approach would be preferred over fine-tuning. Right?