Where do GPT-3 text embeddings come from? Only from generative pre-training?

I am studying the principles behind text embeddings.

I have found that generative pre-training alone can capture some degree of text similarity.

But I am not sure. So my question is: do GPT-3's text embeddings come directly from the pre-trained model, without any fine-tuning?

Thank you very much!


Since similar words tend to be used in similar contexts, the embeddings from a pre-trained model often already reflect text similarity. But to perform better on a specific task such as search or classification, fine-tuning is usually the more likely route. I am not in a position to comment on GPT-3 specifically, but in most cases fine-tuning is the first step toward getting good results.
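
To illustrate the first point, here is a minimal sketch of how embeddings can be derived from a pre-trained generative model with no fine-tuning at all. Since GPT-3's weights are not public, this uses GPT-2 via Hugging Face `transformers` as a stand-in; the mean-pooling step is one common choice among several (e.g., last-token pooling), not necessarily what OpenAI does internally.

```python
# Sketch: sentence embeddings from a frozen pre-trained generative model
# (GPT-2 as a public stand-in), compared with cosine similarity.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

def embed(text: str) -> torch.Tensor:
    # Tokenize and run the frozen pre-trained model.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    # Mean-pool the token states into one fixed-size vector.
    return hidden.mean(dim=1).squeeze(0)

a = embed("The cat sat on the mat.")
b = embed("A kitten is resting on the rug.")
c = embed("Quarterly revenue grew by 12 percent.")

cos = torch.nn.functional.cosine_similarity
print(cos(a, b, dim=0).item())  # related sentences: higher score
print(cos(a, c, dim=0).item())  # unrelated sentences: lower score
```

Even with no task-specific training, the related pair should typically score higher than the unrelated pair, which is the "some text similarity" effect you observed. Fine-tuning (for example, with a contrastive objective on labeled pairs) is what sharpens those scores for a particular task.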