How retrieval-augmented generation works

When I use RAG, I provide retrieved context to the LLM, and the LLM then (mostly) picks up the vocabulary from that context. How does the LLM know to use the vocabulary from the context? Does this overlay the parametric vocabulary and re-weight it?
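
For reference, this is roughly what my setup looks like: the retrieved passages are simply concatenated into the prompt text, so the model only ever sees them as additional input tokens. This is a minimal sketch, where `retrieve` and `generate` are stand-ins for my actual retriever and LLM call:

```python
# Minimal RAG-style prompting sketch. `retrieve` and `generate` are
# placeholders for the real retriever and LLM call in my setup.

def retrieve(query: str, k: int = 3) -> list[str]:
    # Stand-in retriever: in practice this would query a vector store.
    return ["<passage 1>", "<passage 2>", "<passage 3>"][:k]

def generate(prompt: str) -> str:
    # Stand-in for the LLM call (e.g. a chat-completions request).
    raise NotImplementedError

def rag_answer(query: str) -> str:
    passages = retrieve(query)
    # The retrieved context is just prepended as ordinary prompt text;
    # nothing in the model's weights or vocabulary is modified.
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )
    return generate(prompt)
```

So the context is not loaded into the model in any special way; it is plain prompt text, which is why I am asking how the model ends up preferring its vocabulary over what it learned during training.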