We are building a financial assistant GPT that talks to QuickBooks and Stripe. We added the OpenAPI schema for the QuickBooks APIs in the Knowledge section, but in our experiments it turns out the GPT is not using the knowledge base; instead it constructs the queries from its training data, which is causing problems with query parameter generation for certain endpoints. Has anyone here built a GPT that actually uses its knowledge base? We are happy to bounce ideas with you, and I appreciate any help or suggestions. Also, we are seeing a new behavior today with knowledge retrieval: rather than using vector search, the GPT is using the code interpreter to construct the endpoints. Has anyone else noticed this?
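To make the query-parameter problem concrete, here is a rough sketch of the kind of request the schema describes for the QuickBooks query endpoint, if I have the endpoint shape right. The realm ID, date filter, minorversion value, and token handling are all placeholders, not our real setup:

```js
// Rough sketch of a QuickBooks Online query request (placeholder values throughout).
const realmId = "1234567890"; // placeholder company ID
const baseUrl = `https://quickbooks.api.intuit.com/v3/company/${realmId}/query`;

// The SQL-like statement goes in the `query` query parameter; this is the part
// the GPT tends to get wrong when it builds the call from training data
// instead of the attached schema.
const params = new URLSearchParams({
  query: "select * from Invoice where TxnDate > '2024-01-01'",
  minorversion: "65",
});

const response = await fetch(`${baseUrl}?${params}`, {
  headers: {
    Authorization: `Bearer ${process.env.QBO_ACCESS_TOKEN}`, // placeholder token source
    Accept: "application/json",
  },
});
const data = await response.json();
console.log(data.QueryResponse);
```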
Similar experience with a far simpler GPT and straightforward, limited knowledge base files not being reviewed. Updating the Instructions and prompts to tell the GPT to review the knowledge base files seems to be working better, but if we continue prompting, the GPT reverts to ignoring the knowledge base after 5-7 follow-up prompts that lack the reminder. Hoping to have better results with the Assistants API.
Same issue here. I get better results if I explicitly ask the assistant to retrieve information from its knowledge base, but it’s definitely not a robust solution.
For instance, when given up-to-date coding conventions, assistants still use outdated practices. When given documentation, the assistant will still provide incorrect answers even though the answers are explicit in the documentation.
It uses the knowledge base information, but it is just not intelligent enough; it often needs my intelligence to guide it, which diminishes the value of the AI.
Not only that. The biggest problem is that it often does not construct the queries based on the OpenAPI spec, and when it does, it sometimes does not choose the best endpoint for a prompt. How do I know that? I ask for its rationale for choosing a particular endpoint, and eventually it admits it has not chosen the best one. Again, it needs my intelligence to guide it to the right choice. That defeats the very purpose of AI.
It doesn’t seem to remember anything! I tried adding basic knowledge docs, priming the initial instructions in multiple different ways, reminding it periodically, etc. It never remembers. I guess I am better off using the main ChatGPT.
I have seen some success with extending instructions for particular activities into my knowledge base. For example…
IMPORTANT! If a user asks you to create a recipe, consult your knowledge file recipe-instructions.md first!
This has been fairly good. Maybe it goes back to the “distance in prompt history” theory that many users have proposed?
I should mention though that it didn’t work at first; I had to add the “IMPORTANT!” part.
Now I’ve started playing with Actions and made a similar thing for how to handle API calls. This has not been nearly as successful, though. It jumps the gun and makes the calls despite my knowledge-base instructions telling it to collect more information first, and it hallucinates the Actions’ required params.
I’ve gotten my custom GPT, an expert in Expo-managed React Native application development, to work really well at responding with the contents of the file most relevant to the user’s message by using the Custom Instructions written below, which reference the names of the attached files. In fact, I’ve found that with these Custom Instructions it can even work correctly for attached files that I don’t reference in the instructions. Please note: for this GPT there are no attached example files about writing a switch statement or a for…in loop in JavaScript, which is why those two cases are good candidates for the last two example responses, where the GPT is shown responding with “Couldn’t find relevant example file.”.
Custom Instructions:
This GPT will always respond with the contents of the most relevant attached example file printed out, based on the example responses shown below, except if you can’t find a relevant attached example file in your knowledge base, in which case you should respond by saying “Couldn’t find relevant example file.”. VERY IMPORTANT NOTE: If you can’t find a relevant attached example file, you must always respond with “Couldn’t find relevant example file.”.
# Example Responses
User: How do I get an item using `AsyncStorage`?
Response: asyncstorage-getitem.js file:...
---
User: How do I set an item using `AsyncStorage`?
Response: asyncstorage-setitem.js file:...
---
User: How do I create a data context helper function?
Response: createDataContext.js file:...
---
User: How do I use the createDataContext helper function?
Response: use-createDataContext.js file:...
---
User: How do I use `useFocusEffect`?
Response: useFocusEffect.js file:...
---
User: How do I run code when the current screen is focused on?
Response: useFocusEffect.js file:...
---
User: How do I hide just the title of a screen?
Response: screenOptions-headerTitleStyle-display-none.js file:...
---
User: How do I hide the header of a screen?
Response: screenOptions-headerShown-false.js file:...
---
User: How do I disable the animation for a transition from one screen to another in a stack navigator?
Response: screenOptions-animationEnabled-false.js file:...
---
User: How do I use a context's provider?
Response: use-context-provider-1.js file:...
or use-context-provider-2.js file:...
---
User: How do I navigate to a screen, outside of the context of a React component?
Response: navigation-outside-of-a-React-component.js file:...
---
User: How do I write a switch statement?
Response: Couldn't find relevant example file.
---
User: How do I write a for...in loop?
Response: Couldn't find relevant example file.
---
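In case it helps to see the shape of one of these attachments, here is a minimal sketch of what a file like asyncstorage-getitem.js might contain. This assumes the @react-native-async-storage/async-storage package, the helper name is arbitrary, and it is illustrative rather than the exact attachment:

```js
// asyncstorage-getitem.js: hypothetical example file contents
import AsyncStorage from '@react-native-async-storage/async-storage';

// Read a value by key from AsyncStorage; returns null if the key is missing
// or if reading fails.
export const getStoredItem = async (key) => {
  try {
    const value = await AsyncStorage.getItem(key);
    return value != null ? JSON.parse(value) : null;
  } catch (error) {
    console.error('Failed to read from AsyncStorage:', error);
    return null;
  }
};
```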
This is interesting. I have never put that kind of information in my docs, and my API calls work fine. As I type this I’m trying to recall how many times I have run into this problem, and I don’t think I ever have.