Say I want to get the contextualized embedding of the word "love"
in the sentence "I love her."
How can I do this via the GPT-3 Embeddings API?
Assuming a sentence has T tokens and the model's output dimension is D, a BERT or GPT-2 model gives us a T x D matrix of hidden states, so the rows corresponding to the target word's token(s) are its contextualized embedding. But the GPT-3 Embeddings API only returns a single 1 x D embedding for each input. So it seems there is no way to get the contextualized embedding of a specific word within a sentence? For reference, the sketch below shows roughly what I mean, using GPT-2 via Hugging Face transformers.
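(A minimal sketch of the GPT-2 approach I'm describing, assuming the standard `transformers` library; the model name "gpt2" and the position lookup for "love" are just for illustration.)

```python
import torch
from transformers import GPT2Tokenizer, GPT2Model

# Load a small GPT-2 model and its tokenizer (illustrative choice).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

sentence = "I love her"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Hidden states for every token: shape (1, T, D).
hidden_states = outputs.last_hidden_state

# Find the position(s) of the token(s) for "love" and take those rows
# as its contextualized embedding.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
love_positions = [i for i, t in enumerate(tokens) if t.lstrip("Ġ") == "love"]
love_embedding = hidden_states[0, love_positions]  # shape: (num_positions, D)

print(tokens, love_embedding.shape)
```

With the GPT-3 Embeddings API I only get back one vector for the whole input string, so I don't see an equivalent of the per-token `hidden_states` above.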