Contextualized Embedding: Get GPT-3 Embeddings of Each Token in a Sentence

Say I want to get the contextualized embedding of the word "love" in the sentence "I love her". How can I do this via the GPT-3 embeddings API?

Assuming a sentence of T tokens and a model output dimension of D, with a BERT or GPT-2 model we can get a T x D matrix of embeddings, so the rows corresponding to the target word's token(s) are its contextualized embedding. But the GPT-3 embeddings API only gives a single 1 x D embedding for each input. Therefore, it seems there is no way to get the contextualized embedding of a specific word in a sentence?
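To make the shape difference concrete, here is a toy sketch in plain Python. The numbers are made up, and the mean-pooling step is only an assumed stand-in for however a sentence-level API collapses token vectors into one; it is not a claim about what the GPT-3 API actually does internally.

```python
# Toy illustration: per-token (T x D) embeddings vs. a single
# pooled (1 x D) sentence embedding.

T, D = 3, 4  # 3 tokens ("I", "love", "her"), embedding dimension 4

# Pretend contextualized embeddings, one D-dim vector per token,
# as BERT or GPT-2 expose via their hidden states.
token_embeddings = [
    [0.1, 0.2, 0.3, 0.4],  # "I"
    [0.5, 0.6, 0.7, 0.8],  # "love"
    [0.9, 1.0, 1.1, 1.2],  # "her"
]

# With the full T x D output, you can simply index the target token:
love_embedding = token_embeddings[1]

# A sentence-level API instead returns one pooled vector; mean
# pooling is used here purely as an example of such a collapse:
sentence_embedding = [
    sum(tok[d] for tok in token_embeddings) / T for d in range(D)
]

print(love_embedding)      # the contextualized vector for "love"
print(sentence_embedding)  # one 1 x D vector; per-token detail is gone
```

Once the T rows are averaged into a single vector, the per-token information cannot be recovered, which is exactly why a 1 x D API response cannot yield the embedding of "love" alone.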


Exactly my question. Also, BERT has vocab.txt, which is a list of all tokens; what's the counterpart in the OpenAI embeddings API?
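For reference, BERT's vocab.txt is just one token per line, and a token's id is its line number. A minimal reader, using a tiny made-up inline vocabulary in place of the real ~30k-line file:

```python
from io import StringIO

# Stand-in for BERT's vocab.txt: one token per line, id = line number.
# These six entries are a made-up sample, not the real vocabulary.
fake_vocab_txt = StringIO("[PAD]\n[CLS]\n[SEP]\ni\nlove\nher\n")

token_to_id = {line.strip(): idx for idx, line in enumerate(fake_vocab_txt)}

print(token_to_id["love"])  # this token's id is its line number: 4
```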