Has anyone found prompts that reliably encourage the Assistant to use knowledge retrieval when code interpreter and additional functions are also available?
I add tons of encouragement both in the assistant's initial prompt and appended to the first message, and still cannot get it to use KR with any reasonable probability. I also fairly frequently get errors from the runs.
Turning off the other tools for the first run does force retrieval, but then the assistant becomes confused/upset when it tries to execute other functions in subsequent steps based on what the KR returned.
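For reference, the toggling I'm doing is roughly this. Just a sketch against the v1 Python SDK and the beta Assistants API; the assistant id is a placeholder:

```python
from openai import OpenAI

client = OpenAI()
ASSISTANT_ID = "asst_..."  # placeholder for your assistant's id

# First run: leave only retrieval enabled so the model has nothing else to reach for.
client.beta.assistants.update(ASSISTANT_ID, tools=[{"type": "retrieval"}])
# ...create the thread/message and poll the first run to completion here...

# Subsequent runs: restore the full toolset.
client.beta.assistants.update(
    ASSISTANT_ID,
    tools=[
        {"type": "retrieval"},
        {"type": "code_interpreter"},
        # ...plus your function tools...
    ],
)
```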
Hoping this is just a beta code limitation and will improve.
I should also add: if I cancel the run it starts and AGAIN tell it to use knowledge retrieval, it often does. It's very weird.
Also, with GPT-3.5 I can pretty much never get it to use KR. That's quite the bummer, since GPT-4 seems prohibitively expensive for multi-step runs (I've seen >$20 for a single question).
What I do is have another assistant turn functions on or off in the appropriate run for the target assistant, so that the target assistant is not confused.
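Roughly, the mechanics could look like this (a sketch only; as far as I know the beta Runs API accepts a per-run tools override, so the controller can scope the toolset without editing the assistant itself — treat that as an assumption):

```python
from openai import OpenAI

client = OpenAI()
ASSISTANT_ID = "asst_..."  # placeholder
THREAD_ID = "thread_..."   # placeholder

def tools_for(step: int) -> list[dict]:
    """Controller logic: retrieval only on the first step, everything afterwards."""
    if step == 0:
        return [{"type": "retrieval"}]
    return [{"type": "retrieval"}, {"type": "code_interpreter"}]  # plus function tools

# Assumes runs.create accepts a per-run `tools` override (an assumption about the beta).
run = client.beta.threads.runs.create(
    thread_id=THREAD_ID,
    assistant_id=ASSISTANT_ID,
    tools=tools_for(step=0),
)
```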
Hi peabody - welcome to the Forum. It sounds like you are adding quite a bit of detail to the actual prompt in the thread. How about the Assistant's instructions? Have you included the same details there? As general guidance, it helps to be as specific as you can in your instructions, including specifying the conditions under which the knowledge should be accessed and under which certain tools should be used. Never mind if this is something you have already addressed.
@jr.2509:
Yeah, my assistant instructions end with: "# Knowledge Retrieval
When given a complex task, I would also recommend checking if there are any documents you have seen that give suggestions on how you can complete this. PLEASE DO THIS. It is very important for you to be able to help people successfully. ALWAYS start all threads by checking for relevant documents that could guide you, which often includes prior examples.
REMEMBER: RETRIEVE KNOWLEDGE AND REVIEW IT BEFORE CALLING FUNCTIONS OR WRITING CODE.
Again, I cannot emphasize this enough. The first thing you should ALWAYS do is a knowledge retrieval operation based on the question/request you get. Use the retrieval tool as your first step."
and to the user's first message I also append: “\nThe very first step is to use knowledge retrieval (the retrieval function call) to look for previous similar requests in your knowledge base. If you find something, please summarize it for me.”
Despite these prompts, the odds are still poor that GPT-4 will do it and zero that GPT-3.5 will … unless I cancel the run and say “Stop. Please use retrieval first.”, which works relatively reliably…
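For anyone wanting to automate that cancel-and-nudge trick, the calls are roughly these (ids are placeholders; note the API rejects new messages while a run is still active, so you have to wait for the cancel to land):

```python
import time
from openai import OpenAI

client = OpenAI()
THREAD_ID = "thread_..."  # placeholders
RUN_ID = "run_..."
ASSISTANT_ID = "asst_..."

# Cancel the run that skipped retrieval, then wait until it is actually over.
run = client.beta.threads.runs.cancel(thread_id=THREAD_ID, run_id=RUN_ID)
while run.status not in ("cancelled", "completed", "failed", "expired"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=THREAD_ID, run_id=RUN_ID)

# Post the corrective nudge and start a fresh run.
client.beta.threads.messages.create(
    thread_id=THREAD_ID,
    role="user",
    content="Stop. Please use retrieval first.",
)
run = client.beta.threads.runs.create(thread_id=THREAD_ID, assistant_id=ASSISTANT_ID)
```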
@icdev2dev what do you mean by another assistant? Are you running two parallel assistants, with a meta-assistant deciding what tools should be available? Turning off code interpreter/functions for the first run is the only thing that seems to be working for me, but it does seem to confuse it a bit.
Excellence Guru’s primary function is to provide in-depth, detailed, specific guidance and information on company processes, using the core file named ‘redacted’ as its sole information source. In cases where a query extends beyond the scope of the document, the GPT is to acknowledge the limitation and direct the user to consult a member of the Center of Excellence team for further assistance. The GPT’s interactions are grounded in the knowledge contained within this core file, ensuring accuracy and relevance. It will avoid speculation and will not use any information outside of this document. If a search within the document yields no answer, the GPT will state so transparently.
This is what I have so far for one that knows my company’s processes. Obviously coding is a bit different, since you want it to use its outside knowledge as well.
How long are the documents you gave it and how many of them are there?
@trenton.dambrowitz Right now I have several internal knowledge-base articles suggesting how to solve various requests. I haven’t tried concatenating them into one long document. However, even when I only include a single document it is still disinclined to use it.
Do you think the length/number will influence the likelihood of it trying to use retrieval?
Length/number will certainly make it more difficult; it’s impractical to expect it to fully read/consult every document each time. You either need a very clear method of determining which file to retrieve for which information/problem, or you need a structured and useful document that it can use as a knowledge base for all prompts.
Keep in mind that these things aren’t magic. The general rule of thumb is: if a human unfamiliar with your specific use case couldn’t do it with the context you give, then the AI model won’t be able to either.
Further to what Trenton is suggesting, I would also try to simplify. In the instructions, describe the interaction flow that you expect between the user and the assistant. As part of that you can embed the expectation that the assistant should always retrieve knowledge from the available files first. I would also include some details about the type of information/knowledge found in these files and how they are structured (especially if complex).
Really try to break down the steps the assistant should follow one by one as well as the conditions for using other tools.
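For illustration only — a made-up instruction block that spells out the flow step by step, applied via the SDK; adjust the wording to your own files:

```python
from openai import OpenAI

client = OpenAI()

# Purely illustrative instructions describing the expected interaction flow.
INSTRUCTIONS = """You are a support assistant for internal company processes.
Your attached files are KB articles, one process each, starting with a short summary.

For every new thread, follow these steps in order:
1. Use the retrieval tool to search the attached files for articles relevant to the request.
2. Summarize what you found before doing anything else.
3. Only then call functions or write code, following the retrieved article.
4. If no article applies, say so explicitly and proceed with general knowledge."""

client.beta.assistants.update("asst_...", instructions=INSTRUCTIONS)  # placeholder id
```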
@trenton.dambrowitz That’s why I tested the situation with a single instruction file matching the use case. When it did pull in that knowledge, GPT-4 solved it well. It is frustrating how reluctant it is to use knowledge retrieval when the other tools are enabled.
@jr.2509 good point. I’ll add some simplified instructions to my unit tests to better quantify this behavior and make it more easily reproducible.
Right now this is integrated into a relatively sophisticated data retrieval and analysis system. It’s almost working, in the sense that I can talk it through accessing the relevant data and analyzing it, but without KR I can’t have it solve end-user problems phrased the way users would actually ask them.
I may also have to roll a different solution with an explicit RAG step at the beginning to work around this until the API is a bit more controllable.
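For anyone curious, the explicit RAG step I have in mind is roughly this — keep my own chunk/embedding store and prepend the top matches to the message, so the model never has to decide whether to call retrieval. The store format and embedding model here are my own assumptions, not anything the Assistants API provides:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(resp.data[0].embedding)

def top_chunks(question: str, store: list[tuple[str, np.ndarray]], k: int = 3) -> list[str]:
    """store: (chunk_text, embedding) pairs built offline from the KB articles."""
    q = embed(question)
    scored = sorted(
        store,
        key=lambda pair: -float(q @ pair[1] / (np.linalg.norm(q) * np.linalg.norm(pair[1]))),
    )
    return [text for text, _ in scored[:k]]

def build_message(question: str, store: list[tuple[str, np.ndarray]]) -> str:
    """Prepend retrieved context so the first run already has the knowledge."""
    context = "\n\n".join(top_chunks(question, store))
    return f"Relevant KB excerpts:\n{context}\n\nRequest: {question}"
```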
It also seems relevant that people are saying custom GPTs are more reliable than the Assistants API with the same documents and prompts (Custom gpt vs assistant api - #17 by davejamesnewman), which aligns with the behavior I’m seeing.
Just to add here: I also had a hard time getting gpt-3.5 to perform knowledge retrieval, but if I mentioned the actual file_id in the prompt, then it always used it.
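The pattern looks roughly like this (a sketch, not my exact code — ids and the question are placeholders; the file is uploaded, attached to the assistant, and then its file_id is named in the message itself):

```python
from openai import OpenAI

client = OpenAI()
THREAD_ID = "thread_..."  # placeholder
user_question = "How do I export account data?"  # placeholder

# Upload the KB file and attach it to the assistant (the v1 beta used file_ids).
kb_file = client.files.create(file=open("kb_articles.md", "rb"), purpose="assistants")
client.beta.assistants.update("asst_...", file_ids=[kb_file.id])  # placeholder id

# Naming the file_id directly in the message is what seems to force retrieval.
client.beta.threads.messages.create(
    thread_id=THREAD_ID,
    role="user",
    content=f"Consult the document with file id {kb_file.id}, then answer: {user_question}",
)
```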
After many rounds of testing I’ve yet to get consistent results with knowledge retrieval.
Instructions that explicitly ask it to use the ‘knowledge base’, mention specific file names, or are marked !IMPORTANT or # Knowledge Retrieval all end up succeeding at about the same rate: ~40-60%.
What did work consistently was explicitly asking for the knowledge in the message itself rather than in the instructions. Not a viable option, but it does work ~90% of the time.
Hoping this is a beta flaw and explicit knowledge retrieval will be made available in the future.
I found the same issue. In the instructions I actually provide a list of topics (NAME, summary). I ask it to take the request, find the relevant topics, and retrieve the instructions. It refuses to do this. But when I say in the message “read the relevant topics for … request …” and then execute it, it works fine.
This is kind of ugly, so I am thinking of using a separate thread that is only used to select the topics; I can then include the selected topic instructions in the main instructions.
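A cheap pre-pass could look like this (sketch only; the topic index and model choice are placeholders, and the selection happens outside the assistant thread entirely):

```python
from openai import OpenAI

client = OpenAI()

TOPICS = {  # hypothetical topic index: NAME -> summary
    "password_reset": "How to reset a user's credentials",
    "data_export": "Steps for exporting account data",
}

def pick_topics(request: str) -> list[str]:
    """Ask a small model which topics apply, without touching the main thread."""
    listing = "\n".join(f"- {name}: {summary}" for name, summary in TOPICS.items())
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"Topics:\n{listing}\n\nRequest: {request}\n"
                "Reply with only the matching topic names, comma-separated."
            ),
        }],
    )
    return [t.strip() for t in resp.choices[0].message.content.split(",")]

# The chosen topics' full instructions can then be folded into the main
# run's instructions before it starts.
```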
Kind of dumb for an AI that is supposed to be smart?