What is RAG? https://youtu.be/T-D1OfcDW1M?si=Zh08QOongbBXYAhm
Once you have a RAG system up that allows you to send queries to an LLM and receive responses, you can connect it to email. Essentially, you create an API that receives emails as prompts and returns LLM responses.
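To make that concrete, here is a minimal sketch of what such an API endpoint might look like, assuming Python with Flask. The endpoint path, the JSON field names, and the answer_question() helper are illustrative placeholders, not part of any particular product; your webhook payload will depend on how you configure it.

```python
# Minimal sketch: an HTTP endpoint that accepts an emailed question and
# returns an LLM answer. Field names and the helper are assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)


def answer_question(question: str) -> str:
    # Placeholder; the RAG version of this function is sketched after the
    # next paragraph.
    return f"(LLM answer to: {question})"


@app.route("/email-query", methods=["POST"])
def email_query():
    # The webhook (e.g., a Zap) posts the sender address and email body
    # of the incoming message as JSON.
    payload = request.get_json(force=True)
    question = payload.get("body_plain", "")
    sender = payload.get("from", "")

    answer = answer_question(question)

    # Return the answer; the reply email is assembled and sent separately.
    return jsonify({"to": sender, "answer": answer})


if __name__ == "__main__":
    app.run(port=5000)
```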
For receiving emails, I created a Zapier mailbox (this is what actually receives the incoming emails) and a Zap to process the emails and forward them to my API. My API is plugged into my chat completion (RAG) system, so it does the cosine similarity search against the vector store, brings back the context documents, sends those documents along with the question to the LLM, and receives the LLM response. It then composes an email reply and sends it back to the original sender.
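Here is a rough sketch of that RAG step and the reply, assuming the OpenAI Python client (openai>=1.0), a small in-memory vector store of pre-embedded documents, and SMTP for sending the response. Names like VECTOR_STORE, send_reply(), and the server/credential values are illustrative placeholders, not my exact implementation.

```python
# Sketch of the RAG pipeline behind the API: cosine similarity search over a
# vector store, LLM call with the retrieved context, and an email reply.
import smtplib
from email.message import EmailMessage

import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each entry: (document text, its precomputed embedding vector)
VECTOR_STORE: list[tuple[str, np.ndarray]] = []


def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)


def answer_question(question: str, top_k: int = 3) -> str:
    # Cosine similarity search: rank stored documents against the question.
    q = embed(question)
    scored = sorted(
        VECTOR_STORE,
        key=lambda item: float(
            np.dot(q, item[1]) / (np.linalg.norm(q) * np.linalg.norm(item[1]))
        ),
        reverse=True,
    )
    context = "\n\n".join(doc for doc, _ in scored[:top_k])

    # Send the retrieved documents along with the question to the LLM.
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content


def send_reply(to_address: str, answer: str) -> None:
    # Build the reply email and send it back to the original sender.
    msg = EmailMessage()
    msg["From"] = "queries@example.com"        # placeholder address
    msg["To"] = to_address
    msg["Subject"] = "Re: your query"
    msg.set_content(answer)
    with smtplib.SMTP("smtp.example.com", 587) as smtp:  # placeholder server
        smtp.starttls()
        smtp.login("user", "password")         # placeholder credentials
        smtp.send_message(msg)
```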
Here is the 2nd half of a use case scenario I created to explain how to use our query responder: https://youtu.be/nBXZLxQEW7A?si=302r3alQfFoiU51q&t=88
Here is the initial discussion I posted on the subject: Query OpenAI Large Language Models via Email