Where can I see the embeddings used for the tokens of a given GPT model? (Not the embedding for an arbitrary prompt returned by the embeddings endpoint using the text-embedding-ada-... models.)
Or, if I were to supply a single token rather than an arbitrary string, are the embeddings I get back from the API the ones learned by GPT?
I’d guess the answer is that one cannot retrieve the embeddings used by GPT (since they are a valuable part of what the model learns), but that raises the question: what are the embeddings returned by the embeddings endpoint, and in what sense are they relevant to GPT?
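For context on what "the embeddings used for the tokens" means: a model's learned token embeddings are just the rows of a matrix indexed by token id. A toy sketch (the vocabulary size and dimension here are made up for illustration, not GPT's real values):

```python
import numpy as np

# Toy stand-in for a model's learned token-embedding matrix.
# Real GPT models have vocabularies of ~50k+ tokens and embedding
# dimensions in the hundreds or thousands; these numbers are illustrative.
vocab_size, embed_dim = 8, 4
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(vocab_size, embed_dim))

def token_embedding(token_id: int) -> np.ndarray:
    """Look up the learned vector for one token id (a simple row index)."""
    return embedding_matrix[token_id]

vec = token_embedding(3)
print(vec.shape)  # (4,)
```

For open-weight models such as GPT-2, this matrix is directly inspectable (e.g. the `wte` weight in the Hugging Face transformers implementation); OpenAI's hosted models do not expose theirs.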