Right now I have a document whose chunk embeddings I've stored in a vector index. When I query it with keywords, retrieval performs well. But when I query it with indirect questions, it fails; I'm assuming that's because the question's wording doesn't closely match how things are phrased in the document. Is there any way to fix this?
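For context, here's roughly what my setup looks like (simplified; the model name, sample chunks, and brute-force numpy search are just stand-ins for my actual index):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-in embedding model; my real setup may differ.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Toy document chunks in place of my real document.
chunks = [
    "The warranty covers manufacturing defects for 24 months.",
    "Returns are accepted within 30 days of purchase.",
]
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

def search(query: str, k: int = 1):
    # Embed the query and rank chunks by cosine similarity
    # (dot product works because the vectors are normalized).
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q
    top = np.argsort(-scores)[:k]
    return [(chunks[i], float(scores[i])) for i in top]

print(search("warranty period"))                        # keyword query: works well
print(search("how long am I protected after buying?"))  # indirect question: weaker match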
Also, how can I combine a chatbot with RAG? For example, if I ask a question about the document, the pipeline retrieves the relevant chunks and passes them to the LLM to generate a response. But if I ask a completely unrelated question that the LLM could still answer on its own, it fails, since the answer isn't in the document at all. Is there a fallback of some kind that passes the raw question directly to the chatbot when nothing relevant is found in the document?
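The kind of fallback I have in mind would look something like this (just a sketch reusing the search() helper above; the llm() stub and the 0.4 threshold are placeholders I made up, not a real API):

```python
SIM_THRESHOLD = 0.4  # made-up cutoff; would need tuning on real data

def llm(prompt: str) -> str:
    # Placeholder: swap in an actual chat-completion call here.
    return f"(model response to: {prompt!r})"

def answer(question: str) -> str:
    chunk, score = search(question, k=1)[0]
    if score >= SIM_THRESHOLD:
        # Retrieval looks confident: ground the answer in the document.
        prompt = f"Answer using this context:\n{chunk}\n\nQuestion: {question}"
    else:
        # Nothing relevant found: fall back to the raw question.
        prompt = question
    return llm(prompt)
```

Is a similarity threshold like this the usual way to do it, or is there a better pattern?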