Since it is trained to learn the semantics of sentences, I assumed it is an encoder-only model, but I thought I'd verify this to get an accurate answer.
That is correct. It encodes your text as a length-1536 vector, and there is no way to recover the original text from the embedding.
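You can see this for yourself. A minimal sketch, assuming the openai Python client (v1+), an `OPENAI_API_KEY` in the environment, and the text-embedding-ada-002 model (which is where the 1536 comes from):

```python
# Minimal sketch: call the embeddings endpoint and inspect the result.
# Assumes openai>=1.0 and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

resp = client.embeddings.create(
    model="text-embedding-ada-002",
    input="The quick brown fox jumps over the lazy dog.",
)
vector = resp.data[0].embedding
print(len(vector))  # 1536 -- the same fixed length regardless of input text
```

Whatever the input length, the output is the same 1536 floats, which is why the mapping is lossy and not invertible.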
There’s another possible question here that is much deeper than the operation of an embeddings endpoint: how are the unlabeled pretraining data and the generated tokens processed by the transformer? GPT-3 is a decoder-only language model. Unlike some earlier language models, it doesn’t have a separate encoder stack performing a semantic transformation.
But yes, it is an “encoder” in the sense that it turns language into tensors.
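To make “decoder-only” concrete: the model still reads (encodes) the whole prompt, but attention is restricted by a causal mask, so each position sees only itself and earlier tokens. A generic PyTorch illustration of that mask (nothing here is specific to GPT-3):

```python
# Generic illustration of the causal attention mask used by decoder-only
# transformers: position i may attend only to positions 0..i.
import torch

seq_len = 4
mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
print(mask.int())
# tensor([[1, 0, 0, 0],
#         [1, 1, 0, 0],
#         [1, 1, 1, 0],
#         [1, 1, 1, 1]])
```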
Yes, that is still a mystery to me. How can GPT-3, which is a decoder-only model, also do translation? I have not been able to find an answer that helps me understand this. I am still researching it.
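One way to see how this can work: to a decoder-only model, translation is just conditional next-token prediction. The source sentence sits in the prompt, and the translation is generated as a continuation of it. A minimal sketch using Hugging Face transformers, with gpt2 as a stand-in causal LM (gpt2 itself translates poorly, but the mechanics are the same ones GPT-3 uses at scale):

```python
# Sketch: translation as pure next-token prediction in a decoder-only model.
# gpt2 is only a stand-in here; the mechanism, not the quality, is the point.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "English: The weather is nice today.\nFrench:"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=20,            # greedy decoding by default
    pad_token_id=tok.eos_token_id,
)
# Print only the newly generated tokens, i.e. the model's "translation".
print(tok.decode(out[0][inputs["input_ids"].shape[1]:]))
```

No encoder is needed because causal self-attention over the prompt already gives every generated token access to the source sentence; the usual explanation for why this works at GPT-3's scale is that the pretraining corpus contained enough multilingual and parallel text for the model to learn the pattern.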