I’m making an Assistant and giving it JSON files.
It will repeat the data in a file if I ask about it directly, but it doesn’t seem to start off with that data as “knowledge” or use it on its own.
I even tried adding
*** ALWAYS CONSULT ALL VECTOR STORES BEFORE RESPONDING ***
as the first instruction. It still doesn’t consult them.
TIA
This has been working for me:
“Your detailed factual answers must be answered only using information in the files I have provided and no other source.”
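To show where an instruction like that goes, here is a minimal sketch assuming the openai Python SDK’s (beta) Assistants API with File Search. The model name and VECTOR_STORE_ID are placeholders, not values from this thread, and exact SDK paths can differ between versions:

```python
from openai import OpenAI

client = OpenAI()

VECTOR_STORE_ID = "vs_..."  # hypothetical: the ID of an existing vector store holding the JSON files

assistant = client.beta.assistants.create(
    model="gpt-4o",  # placeholder model name
    # The system-level instruction that steers answers back to the uploaded files.
    instructions=(
        "Your detailed factual answers must be answered only using "
        "information in the files I have provided and no other source."
    ),
    # Enable File Search and attach the vector store so retrieval can run at all.
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [VECTOR_STORE_ID]}},
)

print(assistant.id)
```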
Thanks, I have apparently been misunderstanding “knowledge.”
I was thinking of human knowledge. I assumed that once the files were vectorized, those vectors would be preloaded as context. It’s not like that.
I thought I could put instructions and behavioral info in there, and you can’t. It’s a book on a shelf, just pre-translated into the LLM’s native language.
edit:
More accurately: it’s a bunch of books on a shelf, extensively cross-referenced and pre-translated into the LLM’s native language. It’s a valuable extension of the LLM’s abilities. But that is not “knowledge.”
When dealing with large document sources, the process generally involves running a search over high-dimensional vector embeddings to identify the parts of the text relevant to the question. Those extracted portions are then used as the basis for the answer. Until the token window expands to tens of millions of tokens, it won’t be possible to hold all the knowledge from a document within a single context window, so the system cannot fully “understand” everything in the file(s). Right now, there are times when it won’t give a relevant answer because it didn’t include a particular section of the document in the context window, having mistakenly deemed it irrelevant. Take all of that with an “as I understand it,” though.
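To make that concrete, here is a rough sketch of the kind of retrieval step described above. This is not OpenAI’s actual file_search internals; the embedding model name, chunk size, and top-k value are all assumptions:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()
EMBED_MODEL = "text-embedding-3-small"  # assumed embedding model


def embed(texts: list[str]) -> np.ndarray:
    """Return one embedding vector per input string."""
    resp = client.embeddings.create(model=EMBED_MODEL, input=texts)
    return np.array([d.embedding for d in resp.data])


def top_k_chunks(document: str, question: str, k: int = 4, size: int = 800) -> list[str]:
    # Naive fixed-size chunking; real systems overlap chunks and split on structure.
    chunks = [document[i:i + size] for i in range(0, len(document), size)]
    chunk_vecs = embed(chunks)
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every chunk.
    sims = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    best = np.argsort(sims)[::-1][:k]
    return [chunks[i] for i in best]


# Only the retrieved chunks are placed in the prompt. Anything the search
# misses never reaches the model, which is why an answer can skip a relevant
# section that the search judged irrelevant.
```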