When I use RAG, I provide retrieved context to the LLM, and the LLM then (mostly) picks up the vocabulary from that context. How does the LLM know to use the vocabulary from the context? Does the context overlay the parametric vocabulary and re-weight it?
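To make the question concrete, here is a toy sketch of the mechanism being asked about, under heavy simplifying assumptions (one attention step, a 4-word vocabulary, tied one-hot embeddings; all names are illustrative, not any real model's internals). The point it illustrates: the vocabulary projection weights stay fixed, and only the hidden state, which is conditioned on the context through attention, changes which tokens receive probability.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Toy setup: a 4-word vocabulary with orthogonal (one-hot) embeddings
# and a tied unembedding matrix. These weights are the "parametric"
# part: supplying retrieved context never modifies them.
vocab = ["cat", "dog", "ledger", "invoice"]
embed = np.eye(len(vocab))   # token embeddings (one row per word)
unembed = embed              # tied output projection, fixed

def next_token_distribution(context_words, query_word):
    """One attention step: the query attends over the context tokens,
    and the resulting hidden state is projected onto the fixed vocabulary."""
    ctx = np.stack([embed[vocab.index(w)] for w in context_words])
    q = embed[vocab.index(query_word)]
    attn = softmax(ctx @ q)   # attention weights over the context tokens
    hidden = attn @ ctx       # context-conditioned hidden state
    logits = unembed @ hidden # same parametric projection every time
    return softmax(logits)

# Without a retrieved passage mentioning "invoice", little mass lands on it;
# with one, probability shifts toward "invoice" even though no entry of
# `embed` or `unembed` changed.
p_plain = next_token_distribution(["cat", "dog"], "cat")
p_rag = next_token_distribution(["invoice", "invoice", "cat"], "invoice")
print(vocab[int(np.argmax(p_rag))])  # "invoice"
```

So in this sketch the answer to "does it re-weight the parametric vocabulary?" is no: the unembedding weights are untouched, and the context influences the output distribution only through the hidden state that gets projected onto them.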