I’m encountering an issue reported by a customer and was able to reproduce it myself.
The scenario is as follows:
A CSV file (~1MB) with ~500 rows and 2 simple columns (SKU, Inventory) is uploaded.
When querying the AI agent to check stock for specific SKUs, it successfully retrieves results from the beginning of the file.
However, from approximately the middle of the dataset onwards, the AI consistently fails to return results.
To validate this further, I tested with a different dataset (a list of shops in Mall of the Emirates). I queried the AI sequentially:
The first ~1–1.5 pages of entries are retrieved correctly.
After that point, the AI stops returning results for the remaining entries.
This behavior appears very consistent and predictable, which makes me suspect a limitation or misconfiguration rather than random failure.
Has anyone encountered something similar or can provide insight into what might be causing this? At the moment, this significantly limits the usefulness of the feature.
Your post does not say which API, product, or software you are using. You could even be describing a non-OpenAI solution, since all you give us is “querying the AI agent” and “vector storage RAG”.
The chief concern is that CSV is not a supported file type for OpenAI vector stores (see the supported file types in the documentation: File search | OpenAI API).
The reason CSV fails as a data format: even if it were supplied as a plain text file so it could be ingested, a CSV typically has a single header row carrying the column labels. Document extraction then splits the text into chunks, and every chunk after the first has no header describing what the data fields mean.
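To make this concrete, here is a minimal illustration of naive fixed-size chunking applied to a CSV treated as plain text. The chunk size and the synthetic SKU data are assumptions for illustration; real chunkers vary, but the effect is the same: only the first chunk keeps the header.

```python
# Hypothetical illustration: a ~500-row SKU,Inventory CSV rendered as text,
# then split into fixed-size character chunks the way an extractor might.
csv_text = "SKU,Inventory\n" + "\n".join(
    f"{58820000 + i},{i % 20}" for i in range(500)
)

chunk_size = 800  # characters per chunk (an assumption; real values differ)
chunks = [csv_text[i:i + chunk_size] for i in range(0, len(csv_text), chunk_size)]

# Only chunk 0 contains the header that explains the columns.
print(chunks[0].splitlines()[0])  # "SKU,Inventory"
# Every later chunk starts on a bare (possibly truncated) data row,
# with no mention of "SKU" or "Inventory" anywhere in it.
print(chunks[1].splitlines()[0])
```

Any retrieval that relies on the chunk itself to say what the numbers mean will therefore only work near the start of the file, which matches the "first page or two works, the rest fails" behavior you observed.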
Embeddings-based semantic search works best where the document, when split, has natural language information about a topic and subject in every chunk, that can stand alone as knowledge, to then return a score for how similar the query is to each chunk being individually evaluated.
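As a rough sketch of that per-chunk scoring, here is a toy version using word-count cosine similarity as a stand-in for embedding vectors (the shop descriptions are invented; real embedding models are far more capable, but the scoring shape is the same: each chunk is scored independently against the query):

```python
import math
import re
from collections import Counter

def tokens(text: str) -> Counter:
    # Simple word-count "vector" -- a stand-in for a real embedding.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Chunks with natural-language content can stand alone as knowledge.
chunks = [
    "The aquarium on the ground floor exhibits rays and sharks.",
    "Ski Dubai offers an indoor slope where visitors can meet penguins.",
]
query = "where can I see penguins"

scores = [cosine(tokens(query), tokens(c)) for c in chunks]
# The chunk sharing topical vocabulary with the query scores higher;
# a chunk of bare SKU digits would share nothing and score near zero.
```

A chunk that is nothing but rows of numbers gives the query almost nothing to correlate with, which is why the vector store ranks it poorly or not at all.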
A SKU + quantity pair carries no information with useful semantic correlation:

58826682,3
85388662,10
There is no learned embedding that can match a token like 588 to a query such as “how many 5838666 do we have on hand?”, because such numbers carry little distinguishable meaning. On top of that, the embedding model is handed many rows of data per chunk, making any search essentially meaningless beyond the first chunk, which at least contains a legend with the word “SKU”.
If you need database-like search, you will likely need a different technique, or to prepare the data for query by field. This is not a solved AI problem for arbitrary user-provided data, short of providing everything as input to a large-context AI model.
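For this particular SKU/Inventory case, a field-based lookup is trivial and exact. A minimal sketch (the inline CSV data, delimiter, and column labels here are assumptions matching the example above, not details from your file):

```python
import csv
import io

# Sketch of "prepare the database for query by field": parse the CSV once
# into a dict keyed by SKU, then answer stock questions by exact lookup.
csv_data = "SKU,Inventory\n58826682,3\n85388662,10\n"

stock = {
    row["SKU"]: int(row["Inventory"])
    for row in csv.DictReader(io.StringIO(csv_data))
}

print(stock["85388662"])  # 10 -- exact answer, regardless of file position
```

An AI agent can then call a lookup like this as a tool (or generate a SQL query against the same data), instead of hoping semantic retrieval surfaces the right chunk of digits.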