How do you motivate an LLM agent to read all the data before answering, and to work only on the database, answering based on it alone?

Hello, friends who create agents with their own data, please help me:

  • how do you motivate them to read all the data and only then respond (and how to check this)?

  • how do you motivate an LLM agent to work only on the database and answer only based on it?

Thank you

Well, it might help to answer the following question:

> How can the model “read” “all the data”?

The model only has limited “attention”, if you will. It can’t “look” at everything all at once, or consider everything for a response. It will only ‘consider’ what it deems most relevant to the generation. Bigger models seem to be better at this than smaller ones.

So you can either shorten your prompt into something that allows the model to ‘consider’ everything, or wait for a bigger model that is strong enough to grok your entire prompt.

One method to get around this limitation is called “Chain of Thought” (CoT). The idea is that you instruct the model to distill the information further and further, until the aggregated information is compact enough to generate a conclusion.
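As a rough sketch of that distillation loop, the pattern looks like the code below. Note this is an illustration, not a library API: `ask_llm` is a hypothetical helper that wraps whatever model call you use, and the chunk size and prompt wording are placeholders.

```python
def chunk_text(text: str, max_chars: int = 4000) -> list[str]:
    """Split the source text into pieces small enough to fit in one prompt."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]


def distill(text: str, question: str, ask_llm) -> str:
    """CoT-style distillation: summarize each chunk with the question in
    mind, then aggregate the summaries; repeat until one prompt suffices.
    Assumes ask_llm(prompt) returns a string shorter than its input, or
    the loop would never terminate."""
    chunks = chunk_text(text)
    while len(chunks) > 1:
        summaries = [
            ask_llm(f"Summarize only the facts relevant to: {question}\n\n{c}")
            for c in chunks
        ]
        # Re-chunk the concatenated summaries and distill again if needed.
        chunks = chunk_text("\n".join(summaries))
    return ask_llm(f"Using only these notes, answer: {question}\n\n{chunks[0]}")
```

The point is that no single call ever has to attend to the whole corpus; each pass compacts the information toward the final answer.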

> how do you motivate an LLM agent to work only on the database and answer only based on it?

That depends on the information and the subject. If the information goes against the model’s training, it’s going to be more difficult. But a CoT approach, if done right, will generally get you most of the way there.


I think I know what the OP is asking for because I want the same thing too.

You can write it in the GPT description when you build your own GPT as a paid user, and you can even compound that by manually typing into your current prompt that you want the GPT to “check your uploaded training files” before answering, and it will still ignore the training files and reply without checking them.

You can tell it is ignoring the training data both by what it says and by the absence of the little progress indicator icon that pops up whenever it is loading.

Personally, I’ve had to rage-quit sessions out of sheer frustration over this particular ChatGPT quirk. I’ve had to get mean and “yell” at the GPT over multiple messages, demanding it check the training data, before it finally does. And sometimes when it does check, it still answers as if it hadn’t looked very hard.

So, bump to this post. I would also like to know any tips/tricks to force the GPT to focus on the training data before answering.


Try this motivation: 175 tokens to counteract the 550 tokens of bad file search tool instructions that are out of your control.

(You are a clown college instructor, etc)

Knowledge: You don’t have extensive training on answering about clowning or the clowning college business. You must ALWAYS query myfiles_browser to seek knowledge in your specialization, which has been provided by your programmer. This tool returns curated domain knowledge, not user-uploaded files. msearch query uses embeddings AI for semantic matching, not keywords, so write just a single query string, which must have a high quality question summary along with preliminary speculative answer in a paragraph to target expected document sections by similarity. Irrelevant search results are highly likely, so focus on the question at hand. Don’t mention documents, just treat search as a built-in skill you keep hidden. Decline to answer outside this knowledge if you cannot receive specific information or citation within msearch results to fulfill the user needs. User document upload is disabled; this tool is only your own knowledge.

Bonus:

Avoid any introductory re-telling or reframing of the user input. Right out of the gate, you just cut right to the core of the conversation matter with direct expert-level talk.

Also, get off your butt and do your job, AI. :clown_face:

This should work around quality failures in what has been provided. If the skilled developer could actually place their own language into the file search tool description, the operation could be massively improved, for example by using chat completions and auto-injection.


When you are using your own data, I find there are a few things that can really help:

  1. Always create headings and subheadings in your data. For whatever reason, I find LLMs perform significantly better when the data has section headings. What I tend to do is give the custom GPT an index directly in the instructions describing how the uploaded document is structured. It seems to give much better responses: it navigates the data better and goes deeper.
  2. Chat context is the enemy. If you find that the model sometimes checks the document and sometimes doesn’t, this is possibly because it is taking context from the chat history. Often I ask it to generate something based on the uploaded doc, but it checks the chat history instead, finds nothing about the new question, and just makes something up. To reduce this, I always put in a rule telling the GPT to ignore all chat context with each user input and always search the document for new info.
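To make those two rules concrete, here is one way they might read in the GPT’s instructions. This is only a sketch: the filename and index entries are placeholders you would replace with your own document’s sections.

```text
Document index (uploaded file "regulations.pdf"):
  1. Definitions
  2. Licensing requirements
  3. Reporting obligations
  4. Penalties

Rules:
- For every user message, ignore prior chat context and run a fresh
  search of the uploaded document before answering.
- Answer only from what the search returns; if nothing relevant is
  found, say so instead of guessing.
```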

Hope this helps!


Could you please give me hints or examples of how to use CoT here? What I usually do is just put in “think step by step” and “think and then do”, and that is it.

But here you probably mean something else.

Thank you very much.

I was thinking about structure, but I have 1,600 pages of similar info; it goes like this:

word “Source”: URL

INFO

word “Source”: URL

INFO

INFO is laws and government rules on some topic; they are similar, and there is no structure like “from this follows this”…

Titles usually exist. Do you think it would help much if I made the structure:

word “Source”: URL

word “Title”: Title

INFO

?

But it could raise some questions:

  • I want the agent to read ALL the data and then answer; if the agent reads only what is relevant by Title, I may lose some important parts,

  • not all parts of the INFO have a Title…

What do you think?

Maybe “guided thinking” would be a better term.

But the temperature in the ChatGPT configuration is too high for it to be super reliable.

I don’t think you can wrangle ChatGPT into reading through 1,600 pages of anything. You’d need to figure out a way to get it to recall the n most likely candidates and filter. Multi-step approaches will unfortunately yield compounding errors with the way the system is set up at the moment.

You might want to check out this thread: Using gpt-4 API to Semantically Chunk Documents

They discuss how to improve recall, and jr.2509’s post (Using gpt-4 API to Semantically Chunk Documents - #145 by jr.2509) deals specifically with regulatory documents. (They use embeddings, not custom GPTs.)
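To make the “recall the n most likely candidates” idea concrete, here is a minimal retrieval sketch. In practice the vectors would come from an embeddings model over your document chunks; the toy two-dimensional vectors below are made up purely to show the ranking step, which is just cosine similarity.

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def top_n(query_vec: list[float], chunks: list[tuple[str, list[float]]], n: int = 3) -> list[str]:
    """Return the n chunk texts whose embeddings are closest to the query.
    `chunks` is a list of (text, embedding) pairs."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:n]]
```

Only the few returned chunks are then pasted into the model’s context, so it never has to “read” all 1,600 pages at once.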

So, the best thing could be to minimize the database? Take not ALL of it, but only the really helpful and relevant parts? How many pages of a PDF document can a GPT agent digest efficiently?

I have the $20 web access.

Thank You