Embedding vector and weights/bias relationship?

I’ve been using a text embedding service for a while now in my own RAG systems. How is the vector it returns related to the weights/biases of the transformer neural network (NN) that produced it? I’ve read some posts saying the embedding vector is an easier-to-deal-with representation of the NN’s hidden layers, but then I’ve read others saying the embedding vector is totally independent of the weights/biases.
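To make the question concrete, here's my rough mental model of the first claim, as a minimal NumPy sketch (matrix sizes and names are just made up for illustration): the input-token embedding "lookup" seems to be nothing more than reading a row out of a learned weight matrix.

```python
import numpy as np

# Toy embedding table: 10 tokens, each with a 4-dim vector.
# In a real transformer these entries are learned parameters,
# trained by backprop like any other layer's weights.
rng = np.random.default_rng(0)
W_emb = rng.normal(size=(10, 4))

def embed(token_id):
    # An embedding "lookup" is just indexing a row of the weight matrix.
    return W_emb[token_id]

v = embed(3)
print(np.array_equal(v, W_emb[3]))  # True
```

If that's right, then at least the token-level embeddings *are* weights, and the final sentence/document vector a service returns would come from pushing those through the rest of the network's weighted layers. Is that the correct picture?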

Any insight from NN experts greatly appreciated.