I wanted to check if others are also facing the same issue.
When we have a small number of documents, the embedding search fetches a set of candidates above a similarity threshold, and we take the top n of them, which include the chunk with the potential answer to the question. As the number of documents increases, the chunk with the answer gets pushed down to, let's say, position n+k, so when we take the top n chunks, that chunk is left out. I can increase n, but that is not a sustainable solution; it will fail somewhere else.
Do you have a broad corpus spanning a variety of topics? Consider splitting it up and applying a stratified embedding search.
Does your corpus include documents in multiple languages? Consider harmonizing language and style before storing your embeddings (for recall, it may not be necessary to harmonize your query, but YMMV).
I think the best thing you can do is look at your highly ranked documents and ask yourself how and why they don’t answer the query - and then adjust your approach around that. This is an iterative thing that will likely need to happen at regular intervals as your corpus grows.
You may want to consider adding a re-ranking model—such as the one from Cohere—into your RAG pipeline.
According to synthetic benchmarks, it drastically improves retrieval hit rate.
Another approach might be to use a second set of embeddings produced with a different embedding model in tandem with text-embedding-ada-002, then merge the results with reciprocal rank fusion.
Next, if you’re not already, you might consider using hybrid-search where you first filter the possible documents by keyword, date, or some other parameter to ensure the semantic search is only considering documents with a high likelihood of being relevant.
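A minimal sketch of that pre-filtering step. The documents, `tags`, and `date` fields below are invented for illustration, not any particular vector store's schema; the idea is simply that only the survivors of the metadata filter are handed to the semantic search.

```python
# Hybrid-search sketch: filter candidates by metadata (keyword tag,
# date) first, then let semantic search rank only the survivors.
from datetime import date

docs = [
    {"text": "Q3 revenue grew 12%", "tags": {"finance"}, "date": date(2023, 10, 1)},
    {"text": "New office opened in Lyon", "tags": {"news"}, "date": date(2023, 9, 5)},
    {"text": "Q2 revenue flat", "tags": {"finance"}, "date": date(2023, 7, 1)},
]

def prefilter(docs, tag=None, after=None):
    out = docs
    if tag:
        out = [d for d in out if tag in d["tags"]]
    if after:
        out = [d for d in out if d["date"] >= after]
    return out  # hand only these candidates to the vector search

print([d["text"] for d in prefilter(docs, tag="finance", after=date(2023, 8, 1))])
```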
Finally, when using semantic search it has been demonstrated that synthetic responses—even when completely incorrect—are often more semantically similar to target blobs than the messages which generate them. So, you might consider using a technique like HyDE to increase your hit-rate.
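As a rough sketch of the HyDE flow: the bag-of-words `embed` and the canned `generate_hypothetical_answer` below are toy stand-ins for a real embedding model and LLM call, but they show the mechanics of searching with the hypothetical answer's vector instead of the raw query's.

```python
# HyDE sketch: embed a hypothetical answer and search with that vector.
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" for illustration only.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def generate_hypothetical_answer(query):
    # Placeholder: a real implementation would ask an LLM to draft a
    # plausible (possibly wrong) answer to the query.
    return "the refund policy allows returns within 30 days of purchase"

def hyde_search(query, chunks, top_n=2):
    hypo_vec = embed(generate_hypothetical_answer(query))
    return sorted(chunks, key=lambda c: cosine(hypo_vec, embed(c)), reverse=True)[:top_n]

chunks = [
    "shipping times vary by region and carrier",
    "returns are accepted within 30 days of purchase for a full refund",
    "our offices are closed on public holidays",
]
# The refunds chunk ranks first even though the raw query shares no
# words with it -- the hypothetical answer bridges the vocabulary gap.
print(hyde_search("can I get my money back?", chunks))
```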
All of these approaches involve some work and/or cost to implement, but all of them have been shown to dramatically improve the hit-rate of document searches.
If it’s really critical to you to get the absolute best retrieval results, I’d just go all out and do them all:
1. Re-embed all of your chunks with two or three additional embedding models.
2. When you need to perform a retrieval, generate one or more hypothetical responses.
3. Compute embedding vectors for the message and all of the hypothetical responses using each of the embedding models. These will be your search vectors.
4. Perform keyword extraction on the message and responses.
5. Filter your documents by keyword.
6. For each embedding model, compute the cosine similarity between each search vector and all of your embedded chunks, and record the rank of each chunk.
7. Merge these rankings using reciprocal rank fusion.
8. Keep the top 50–100 retrievals.
9. Send the remaining candidates to a re-ranking model.
10. Keep the top n results from this final list based on whatever metric you decide on.
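The merge step above, usually called reciprocal rank fusion (RRF), can be sketched like this. The `k=60` constant is the value commonly used in the literature; the example rankings are invented.

```python
# Reciprocal rank fusion: merge several ranked lists (e.g. one per
# embedding model / search vector) into a single ranking. Each
# document scores 1/(k + rank) per list it appears in.
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Three rankings of the same candidate chunks from three models:
merged = reciprocal_rank_fusion([
    ["a", "b", "c", "d"],
    ["b", "a", "d", "c"],
    ["b", "c", "a", "d"],
])
print(merged)  # "b" wins: it is ranked highly by all three lists
```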
NOTE: This is almost certainly overkill and will run into diminishing returns fairly quickly. So, maybe try them individually and combine them as necessary to get the results you require.
Lastly, there is one other way to improve your retrievals and that is to use a fine-tuned embedding model. I have not yet personally fine-tuned an embedding model, but there are many resources online that can guide you through doing so.
There’s less evidence available for the efficacy of this, but from what I’ve seen it produces a dramatic improvement if your documents frequently use terms or acronyms that are specific to your domain. The idea being that through fine-tuning the embedding model learns the particular significance of these terms better than a general embedding model can, which helps it to better “understand” the semantic meaning in your documents.
I face the same challenge dealing with very large knowledge base datasets. Similar to you, what I sometimes end up with is a good answer, but not the most comprehensive answer. As mentioned earlier, the first thing you may want to look at is your embedding strategy.
I have an object property called “Questions”, which lists the questions that a particular chunk answers. This has proven to raise the signal-to-noise ratio (SNR) significantly.
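A sketch of that idea: each chunk carries the questions it answers, and retrieval scores the query against both the chunk body and those questions. The bag-of-words embeddings, chunks, and questions below are toy stand-ins for a real embedding model and corpus.

```python
# Score each chunk by its best match across the chunk text AND its
# attached "Questions" metadata, so a chunk can be found via the
# questions it answers even when its body uses different wording.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

chunks = [
    {"text": "Employees accrue 1.5 vacation days per month of service.",
     "questions": ["how many vacation days do employees get",
                   "what is the vacation accrual rate"]},
    {"text": "The cafeteria is open from 8am to 3pm on weekdays.",
     "questions": ["when is the cafeteria open"]},
]

def score(chunk, query_vec):
    # Best similarity across the chunk body and its attached questions.
    texts = [chunk["text"]] + chunk["questions"]
    return max(cosine(query_vec, embed(t)) for t in texts)

query = "what is the vacation accrual rate"
best = max(chunks, key=lambda c: score(c, embed(query)))
print(best["text"])
```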
As for the embeddings themselves, I use text-embedding-ada-002, which produces vectors with 1536 dimensions.
I can’t speak to using multiple embedding models as I’ve not done this, but I use Weaviate as my vector store, and it has an embedding transformer text2vec-openai which has been working very well for me for several months now.
Now, all that said, sometimes with some 50 results, I still do not get the most comprehensive answers with some document sets (particularly the labor agreements and religious sermons). I have found that increasing the document retrieval limit much past 50 does not always yield the anticipated results. The model, gpt-4-turbo in this case, may have a much larger input context window, but the more text you put in it, the less efficient it is at understanding that text. See: Anthropic's best Claude 2.1 feature suffers the same fate as GPT-4 Turbo
But mine is even simpler: I run the cosine similarity search and return the top 50–100 retrievals, as discussed earlier. But instead of returning those to the primary LLM to evaluate, I use a cheaper model (with a smaller context window) to evaluate the results in batches of 10–15 and give me the most relevant matches. Once this LLM has evaluated all the results, I then send its results, along with the most relevant documents, to the main LLM. From here, I do one of two things: either return the concatenated results of the secondary LLM, or have the primary LLM evaluate the top relevant documents. Not sure which one works best yet.
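That two-stage flow might look roughly like this. `cheap_model_pick` is a placeholder where a real pipeline would prompt the smaller LLM; here it is stubbed with a trivial keyword check so the batching logic is runnable.

```python
# Two-stage evaluation sketch: score candidates in small batches with
# a cheap model, then hand only the shortlist to the main model.
def batched(items, size):
    for i in range(0, len(items), size):
        yield items[i:i + size]

def cheap_model_pick(query, batch):
    # Placeholder: a real implementation would prompt a small/cheap LLM
    # to return the most relevant documents from `batch`.
    return [doc for doc in batch if query.lower() in doc.lower()]

def two_stage_retrieve(query, candidates, batch_size=10):
    shortlist = []
    for batch in batched(candidates, batch_size):
        shortlist.extend(cheap_model_pick(query, batch))
    return shortlist  # send this (not all 50-100 docs) to the main LLM

docs = [f"doc {i} about widgets" for i in range(25)] + ["doc about gadgets"]
print(two_stage_retrieve("gadgets", docs, batch_size=10))
```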
This way of handling large knowledge base datasets has several rather commendable aspects, but there are areas where improvements can be made for greater efficiency and accuracy. While not guaranteed, I believe that by following these steps you could achieve better results.
Semantic Chunking: This is a great strategy for context relevance. Ensuring each chunk is meaningful within its context can significantly enhance the quality of information retrieved.
Metadata Utilization: Using metadata to trace the location and relevance of each chunk within a larger document is a smart approach. It adds a layer of organization that can make retrieval more precise.
Embedding Choice: Your choice of text-embedding-ada-002 for generating 1536-dimensional vectors captures textual features effectively.
Top-n Retrieval Limitation: Your current method of selecting the top-n documents can miss relevant information if it falls beyond the top-n range. This is a significant limitation, especially for larger datasets where relevant information might be ranked lower. Solution: Implement a dynamic thresholding system that adjusts the number of documents retrieved based on the dataset’s size and the complexity of the query. Machine learning techniques can be employed to predict the optimal number of documents to retrieve for each query.
Single Embedding Model: Relying solely on text-embedding-ada-002 might limit the diversity of your embeddings. Solution: Experiment with a hybrid embedding approach that combines different models as stated above. This can help capture a wider range of textual nuances and might improve the relevance of retrieved documents as long as there is a reference Agent watching the DB or warehouse results.
Limiting Document Retrieval Beyond 50: While increasing the document retrieval limit hasn’t always been effective, sticking rigidly to a number like 50 might be arbitrary and not optimal for all cases. Solution: Use adaptive retrieval limits based on the query’s complexity and the dataset’s characteristics. Machine learning models can help in deciding the optimal number of documents to retrieve, and using LLM reasoning like RAG fusion and advanced rag can help to solve this.
Efficiency of Primary LLM with Large Context: When the primary LLM processes a large volume of text, its efficiency decreases. Solution: Before passing large text volumes to the primary LLM, pre-process the data to condense or summarize information without losing context. Techniques like extractive summarization can be helpful here; alternatively, state-space models such as Mamba handle very long sequences more efficiently than attention-based architectures.
Uncertainty in Final Evaluation Step: The dilemma between using concatenated results of the secondary LLM or having the primary LLM re-evaluate the documents indicates a lack of clarity in the final step of your process. Solution: Systematically test both methods across different types of queries and datasets. Analyze the results to determine which method consistently yields more accurate and comprehensive answers. While CoT and ToT are effective in this manner, the newer Algorithm of Thoughts (AoT) can save energy and shots.
Generalizability Across Diverse Document Types: You mentioned challenges with specific types of documents like labor agreements and religious sermons. Solution: Develop specialized processing modules for document types that are consistently problematic. These modules could include tailored embeddings or specific pre-processing steps that address the unique characteristics of these documents. Due to filtering and policy-based restrictions on widely available hosted models such as GPT-3.5 16k and GPT-4, building your own system on top of GPT- or Mamba-style setups can give you better responses than the filtered alternatives.
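One concrete (assumed) interpretation of the adaptive-retrieval-limit idea suggested above: instead of a fixed top-50, keep results until the similarity scores fall below a floor or drop off sharply. The thresholds below are arbitrary illustrations, not recommended values.

```python
# Adaptive cutoff sketch: decide how many retrievals to keep by
# looking at the shape of the (descending) similarity scores, rather
# than using a fixed top-n.
def adaptive_cutoff(scores, floor=0.7, max_gap=0.1, hard_limit=100):
    # scores: similarity scores sorted in descending order
    keep = 0
    for i, s in enumerate(scores[:hard_limit]):
        if s < floor:
            break
        if i > 0 and scores[i - 1] - s > max_gap:
            break  # large gap: later results are much weaker
        keep = i + 1
    return keep

scores = [0.92, 0.91, 0.89, 0.74, 0.73, 0.65]
print(adaptive_cutoff(scores))  # keeps 3; the 0.89 -> 0.74 drop is a big gap
```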
look at your highly ranked documents and ask yourself how and why they don’t answer the query
This is how to do it! You have to actually look at a lot of requests and responses and figure out what’s going on, and then adopt an appropriate fix for each kind of failure.
FWIW, the way I do retrieval is having one cosine similarity threshold (don’t include any document under 0.7, say) and then a ranked filling of the context. (I also have a fixed upper limit on the count, but that’s higher than what I can typically fit in the context.) I decide on a number of tokens I want to “spend,” and pick top-ranked retrieved documents from highest to lowest score until the context is full. A smaller document with a lower score MAY be included even if a higher-scored document isn’t, if that higher-scored document is too large to fit. I use a simple greedy linear accumulation algorithm.
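A minimal sketch of that greedy accumulation. Word counts stand in for a real tokenizer (e.g. tiktoken), and the scores, documents, and budget are made up; the point is that a smaller, lower-scored document can get in when a bigger, higher-scored one doesn't fit.

```python
# Greedy context filling: walk documents in descending score order,
# include each one only if it still fits in the remaining token budget,
# and stop once scores fall under the similarity threshold.
def fill_context(scored_docs, token_budget, min_score=0.7):
    # scored_docs: list of (score, text) pairs, any order
    chosen, spent = [], 0
    for score, text in sorted(scored_docs, reverse=True):
        if score < min_score:
            break  # everything after this is below threshold
        cost = len(text.split())  # crude token estimate
        if spent + cost <= token_budget:
            chosen.append(text)
            spent += cost
    return chosen

docs = [
    (0.95, "a long document " * 40),          # 120 "tokens": over budget
    (0.90, "short but relevant answer"),       # fits
    (0.80, "another short snippet here"),      # fits
    (0.60, "below the similarity threshold"),  # filtered out by score
]
print(fill_context(docs, token_budget=100))
```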
But, again: If you find that the “right” document is outside your retrieval window, and a lot of “wrong” documents are inside the window, then why is that? What if you test sub-segments of the documents and sub-segments of the query against each other? Can you compress the query (or the document) to filter out fluff that might dilute the embedding score?
Sure! I don’t know if it’s part of any particular packages, but for one product we noticed that it’s better to build a sort of tree.
you have your embedding indices, but they’re categorized by topics. each topic has a topic description that is itself embedded - so first you find the relevant topic(s), and then you look inside those topics to find the actual documents you need.
we tried using center of mass but only had mixed results from the start, so we generate topic summaries instead for this particular case.
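A toy sketch of that two-level topic tree: embed a short description per topic, match the query to topics first, then search only the chunks filed under the winning topic(s). The bag-of-words embeddings, topic names, and documents below are invented stand-ins for a real embedding model and corpus.

```python
# Two-level "topic tree" search: topic descriptions are embedded and
# matched first; only documents under the best topic(s) are scored.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

tree = {
    "billing and refunds policy": ["refunds take 5 business days",
                                   "invoices are emailed monthly"],
    "shipping and delivery times": ["standard shipping takes a week",
                                    "express delivery arrives next day"],
}

def tree_search(query, tree, n_topics=1, n_docs=1):
    qv = embed(query)
    topics = sorted(tree, key=lambda t: cosine(qv, embed(t)), reverse=True)
    docs = [d for t in topics[:n_topics] for d in tree[t]]
    return sorted(docs, key=lambda d: cosine(qv, embed(d)), reverse=True)[:n_docs]

print(tree_search("how long do refunds take", tree))
```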
so first you find the relevant topic(s), and then you look inside those topics to find the actual documents you need
Or, the opposite. You look at the document (or document chunks) and determine the relevant topics (or keywords or categories) from there. I did something similar with categorization of prompt questions and responses: Using the LLM to Categorize Responses
And, it’s just now dawning on me that I could do the same thing with my embeddings: create and assign global keywords/categories that could be used to filter down searches to the most appropriate chunks.
Yeah, the basic point is to use a semantic search to filter down to a reasonable set of results (re-ranking if sensible, e.g. prioritising “official staff content”), then only present that content to the LLM.
I even use cosine distance and a threshold to cut out results that are too dissimilar.
This will supercharge its capability while massively reducing cost! It may also make it much, much faster.
The engine is lacking because it has no real concept of “concepts” or topic/subtopic relevance, and this will never improve until the architecture changes so that you can force or teach importance at some level other than the number of times something appears.
Here is the answer from our custom GPT on the topic →
The ranking and selection of the topics related to regenerative development were not based on a specific hierarchical order or quantitative analysis. Instead, they were chosen and organized based on the principles and themes commonly emphasized in the field of regenerative development. These principles and themes are frequently discussed in literature, practice, and educational resources related to regenerative development and sustainability.
Could you please explain what you meant by “Re-embed all of your chunks” in the first point. Do I need to re-embed the embeddings, or embed the chunks using multiple embedding models separately and store them in different collections?