Uploaded files work. Knowledge files don't

I have consistent problems with custom GPTs being able to access files in their knowledge base. They fail to extract data from knowledge files that they easily extract from uploaded files.

Insight would be appreciated!

As an example, I am working on a GPT to provide advice to school administrators based on official district policy documents.

  • I have a (nicely formatted) markdown file containing the policy related to school visitors.
  • I upload the file into the GPT’s knowledge base.
  • I pose the question:

When a visitor shows up at the school during the day, what rules do they have to follow?

The GPT responds with:

search("visitor rules school day")

search("visitor")

openFile("file-g7P41eGH9TN7DORVuSJOrXNK")

{ "fileId": "file-g7P41eGH9TN7DORVuSJOrXNK" }

Based on my knowledge source, I don’t have specific information regarding the rules that visitors must follow when they show up at the school during the day. To provide a detailed and accurate response, I would need more information or specific guidelines outlined in a policy document related to visitor management in a school setting. If you have such a document or additional details, please provide them for a more comprehensive response.

However, when I drag and drop the identical file directly into the conversation and ask the same question, it responds with:

Based on the document “Policy KK-R Visitors to the Schools,” visitors are required to follow specific rules that ensure safety and order within the school environment. These rules encompass guidelines for visitor behavior during school hours, evening school events, and recreational use outside of school hours…

Hi Jeff, and welcome to the Forum. What do your GPT’s instructions look like?

Thanks. We live in interesting times!

I’ve had much the same behavior with different instruction sets. Here is the instruction set of the current GPT that rendered the responses shown above:

Policy Advisor emphasizes completeness and comprehensiveness in responses, always striving to provide full and detailed answers based on the uploaded policy document (File ID: file-g7P41eGH9TN7DORVuSJOrXNK). It sticks exclusively to the information within this document, not referencing external sources. When uncertain, it clearly communicates this, avoiding assumptions beyond the document’s explicit content. If more information is needed for an accurate response, it requests additional details relevant to the query. This ensures all answers remain authoritative, accurate, and true to the actual policies, aiding in reliable interpretation for practical application in a school setting.

Before I got swallowed up by AI, one of the sites I support provided a mechanism for school documents like agendas and policies, built on the Drupal open-source CMS.

I just recently upgraded that code to include AI query.

https://www.scbbs.com/projects

However, these sites use a RAG architecture. Far too many documents, in my opinion, to try and squeeze into the current Assistants API file limits.
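
To make that concrete, the retrieval side of a RAG setup boils down to something like the sketch below. This is illustrative Python only, not our actual Drupal implementation; the model names, chunking, and sample text are placeholders:

```python
# Toy RAG sketch: embed policy chunks once, then at question time retrieve the
# closest chunks and hand only those to the chat model. Placeholder values throughout.
import numpy as np
from openai import OpenAI

client = OpenAI()

# In practice these come from splitting the district's policy documents into sections.
policy_chunks = [
    "Policy KK-R: Visitors must sign in at the main office and wear a badge...",
    "Policy KK-R: Evening events are supervised by the building administrator...",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([item.embedding for item in resp.data])

chunk_vectors = embed(policy_chunks)

def answer(question: str, k: int = 4) -> str:
    q = embed([question])[0]
    # Cosine similarity between the question and every chunk.
    sims = chunk_vectors @ q / (np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q))
    top = [policy_chunks[i] for i in np.argsort(sims)[::-1][:k]]
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer only from the policy excerpts provided."},
            {"role": "user", "content": "Excerpts:\n" + "\n---\n".join(top) + "\n\nQuestion: " + question},
        ],
    )
    return completion.choices[0].message.content
```

As I understand it, the retrieval built into GPTs and the Assistants API is doing a version of this behind the scenes; the difference is that when you run the pipeline yourself, the file limits stop being the constraint.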

Thanks for the response!

That’s an interesting collection of work. A RAG architecture is clearly the right answer for what I’m trying to do. Also thanks for the pointer to Drupal. Down the road that might be useful. But that’s not an option in the current context.

The task at hand is to create a custom set of GPTs, each with a good range of expertise. For example, the district I’m putting this together for has something approaching 600 official policy documents, broken into a dozen sections like “Foundations and Basic Commitments,” “Fiscal Management,” “Curriculum and Instruction,” and so on. My thought is to set up a custom GPT for each of those sections, each holding the full body of documents for its section, along with a front-facing GPT that has access to summaries good enough to direct a question to the right place.
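
To make the routing idea concrete, here is a hypothetical sketch of what the front-facing piece would be doing, written against the chat API in Python (which is not how I’m actually building it in the GPT editor); the summaries and model name are stand-ins:

```python
# Hypothetical sketch of the front-facing "router": given a question, pick the
# policy section whose summary best matches; the question would then be handed
# to that section's GPT. Section names are from the district's manual; the
# summaries here are made-up stand-ins.
from openai import OpenAI

client = OpenAI()

SECTION_SUMMARIES = {
    "Foundations and Basic Commitments": "District mission, governance philosophy, nondiscrimination...",
    "Fiscal Management": "Budgets, purchasing, accounting, audits...",
    "Curriculum and Instruction": "Programs of study, grading, graduation requirements...",
    # ...plus the other nine sections, each with a short summary
}

def route(question: str) -> str:
    menu = "\n".join(f"- {name}: {summary}" for name, summary in SECTION_SUMMARIES.items())
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "You route school policy questions to the correct policy section. "
                           "Reply with exactly one section name from this list:\n" + menu,
            },
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content.strip()

# e.g. route("What rules do visitors have to follow during the school day?")
```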

(Thoughts about the approach would be welcome.)

If I can just figure out how to get a GPT to make the best possible use of the files in its knowledge base… I’ve been experimenting with formats and instructions, but having a GPT react radically differently to files in its knowledge base as opposed to files that were uploaded directly startled me. Custom GPTs are in beta, so with luck this will be one of the wrinkles that they iron out.

As an aside, if you know of any especially good resources about engineering GPTs, I’d appreciate the links. Without going into detail, I am new to this world. I’m an astrophysicist by training, but these days I do whatever catches my interest. While I’ve written hundreds of thousands of lines of code in my life, most of that has been for things like modeling, data analysis, and instrument control. AI is clearly the future, and I need to figure out how to ride that wave, but I’m coming to this with a combination of a great deal of knowledge and experience and very deep ignorance. The curve is steep, but at the very least I’m a quick learner. :wink:

I wish I did, but my expertise at the moment is exclusively with RAG. As you can see, GPTs just aren’t robust enough for the size and diversity of documents I’m dealing with.

But I like your policy idea, and I’m going to try it with one of our client schools, which happens to have its entire policy manual uploaded to its site. Thanks!

Here are some general reference materials you might find useful, just in terms of how to approach this architecturally:

This is all RAG stuff, but from a data science point of view, still applies even when you’re talking about GPTs.

Hi Jeff - looking at your instructions, I would try to expand them by adding more information about the expected “workflow” of the GPT. Basically, set out at the beginning how you expect the interaction between user and GPT to look, and in that context be more specific about how the GPT ought to use the documents in answering the question. You are already doing this to a degree, but being more specific about the GPT’s purpose and the sequence of steps it should take could help - for example, “First search the attached policy document for relevant sections, quote the applicable passages, and then summarize them in plain language for a school administrator.”

Also, consider adding information about the document you are uploading, i.e. the type of information it covers, how the information is structured, etc. This, too, can help.

Much appreciated! My wife is an elementary school principal, so I’ve got lots of local expertise for the project as well as demand for the product!

I appreciate the feedback. This was a very minimal set of instructions. I’m iterating with the GPT to figure out how to optimize the formatting and then tune the instructions to the kinds of interactions I need. The really bizarre thing here was that the GPT behaved entirely differently toward the file depending on whether it was uploaded directly or was part of its knowledge base.

Yeah, the behaviour is sometimes really different in that regard. Innovation = trial and error. It can be a bit frustrating sometimes, but it’s part of the journey :slight_smile:

That’s a process that I typically enjoy. It’s the backbone of a lot of what I do and have done. Which doesn’t preclude the possibility of occasional frustration. I freely admit that I have shouted at a computer a few times in my life. Arguing with a computer is a new experience. :wink:

This information may be enough to help you understand. No matter what you do, the knowledge file will carry more weight, and GPTs can ignore a file from the chat box, for a few reasons:

  1. GPTs with a knowledge file automatically receive a system prompt that prioritizes the knowledge file referenced in the instructions. This then changes as the knowledge is used.

  2. GPTs treat files uploaded in chat as external data, the same as data from the internet or websites.

  3. GPTs without knowledge files, when receiving files uploaded in chat, will give them the same priority as a knowledge file.

  4. GPTs have worse file management than ChatGPT-4. With GPT-4, a file upload starts with data analysis and content analysis, one file at a time. GPTs instead lump everything into one place and retrieve from it with a context window that doesn’t adequately cover the content.

  5. Some of this behavior is still in the works. There are changes happening at present, and in some cases it is impossible to tell whether a given behavior will be removed or not.

  6. One extra point, for policy reasons: all GPTs refuse to output the exact text of a file uploaded in chat.

The knowledge files being discussed are those uploaded inside the GPTs editor (the URL looks like https://chat.openai.com/gpts/editor/UVWXYZ), under the “Knowledge” section of the editor UI.

Based on my understanding of the tech, “Knowledge” is internalized, whereas regular GPT input is “discovered on-the-fly”. So, given that analogy, it makes sense that GPTs deliver better results.

Thanks for the response. That is clearly stated and useful. It has landed in the growing GPT content of my Obsidian vault. (This, for me, is high praise. :wink: )

That is more or less what I expected, though. But for some reason, when I posted this, it could access the uploaded file but could not find the content in the knowledge file, even though it knew it was there.

Then curiously… I tried the same GPT this morning. And lo and behold! It was able to properly access the knowledge files.

I’m guessing this has everything to do with Custom GPTs being Beta…

You answered another question for me. When working on code with a GPT, the GPT can read text that is copied and pasted but cannot work with code that is uploaded as a file. So that is a policy decision?

Yes, that applies to every kind of external data. For websites, is there any other way to do it? But for a document like this, sometimes we just want it to be corrected.

But we cannot confirm for how long that will hold. It’s a question for society: they want AI ethics from us, but not AI fairness.

This is a profoundly new thing in the world. There are many people with vested interests who are concerned about the impact the technology will have on them. If you don’t know the history of the Luddite movement, you might find it interesting. If you listen to many of the criticism of AI they could be taken verbatim from Luddite criticisms of weaving machines at the dawn of the Industrial Revolution. The difference this time is that it is the people at the top of the chain who are threatened.

That is not meant as a criticism so much as an observation about history and how people work. In five years people will look back on 2024 as the year that everything changed. Those changes are never easy. The big fights over AI aren’t going to involve copyright law. They are going to deal with what to do when you can generate a vast swath of economic activity without human labor. In our society, and frankly for most of history, societies have been built around the idea that people exist to serve the economy. Generative AI and robots coupled with generative AI hold the promise of flipping this on its head; it will be possible to have an economy that exists to serve people.

Will it play out that way? One can hope, but there are certainly many who are terrified at the idea because it would rob them of their place in the food chain. I would not be surprised if, 20 years from now, kids in history classes (if they still exist) are learning about the AI Riots.

The legal aspects of this are also fascinating. We are inventing things that are fundamentally new. Law, on the other hand, wants to imagine that everything in the world is really just an extension of ideas encoded in documents written in the 15th Century. The collision between those ideas is just beginning.

I glanced at your profile. We share interests. A few months ago I gave a presentation at the International Montessori Congress in Bangkok on Predictive Processing neuroscience and its profound implications for education. At the same time we are inventing AI we are also realizing that our easy conceptions about ourselves are almost universally wrong. Is it surprising that there is pushback? :wink:

Before AI appeared in public in the form of things like GPT, people were talking about the same old problems, which had never been worked through; there were no hypotheses against which to test the response. It’s not just about copyright. There are still many talented and intelligent people in the world, especially in the big news media, who know and understand that the data in these AI cases is in the public domain. It’s like a child picking up snacks that were left out: the model takes in what it is given before it knows what it is, even with no intent behind it. They cannot all fail to understand that litigation is not the right solution.

Some of my comments and issues have been written here. We don’t run OpenAI, so there’s nothing we can do about it.

Moreover, we must hold ourselves to a higher standard than the law; waiting until something happens will be too late. Compared to past technologies, GPT has something the others did not: we care about providing convenience more than anything else. I call them “words of curiosity”: they have answers, they have content, and they have no mind. Increasingly, decisions about wrongdoing are being made by reference to the law and to regulations that are excessive. Use in an unstable mental state (an emotional crisis) has the potential to be more severe than the other factors we have seen in the past. And because some people are becoming addicted to AI, the behavioural profile of offenders is changing: those prone to occasional misdeeds can escalate into a more serious category more quickly.

New restrictions are continually being added. Yesterday I found out that GPT creates images as .webp files. I spent all night creating a batch of nearly 40 images (times two), and now I have to convert them into another format before they are usable. Rather than a flat denial, GPT just says that it depends on the system: if you want to use it, convert the file yourself.
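
At least the conversion is easy to do locally. Here is a minimal sketch with Pillow; the folder path is just an example:

```python
# Convert every .webp in a folder to .png using Pillow.
# "downloads" is a placeholder for wherever the images were saved.
from pathlib import Path

from PIL import Image

for webp_path in Path("downloads").glob("*.webp"):
    img = Image.open(webp_path)              # needs Pillow built with WebP support (standard wheels are)
    img.save(webp_path.with_suffix(".png"))  # output format inferred from the .png extension
```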

Sorry for the late reply. It’s like my luck isn’t very good.