Has anyone been able to successfully use both the “retrieval” and the “function” tools in the same Assistant? If so, can you please provide details and even share your Assistant JSON?
I am trying to write a customer service bot using the Assistants API and the gpt-3.5-turbo-1106 model. The bot is expected to take questions from customers and browse through knowledge base articles to find the answers. The bot is also asked to perform actions using several functions.
For example, the customer may ask about product ABC and get information from the knowledge base, and then ask the bot to “buy me three of product ABC”; that’s when we want to call the buyProduct function and pass the quantity and product.
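For anyone setting this up in code, here is a minimal sketch of that dual-tool configuration with the Python SDK (the buyProduct schema, instructions, and file ID are illustrative placeholders, not a known-working recipe):

```python
from openai import OpenAI

client = OpenAI()

# Sketch: one assistant with both retrieval and a function tool.
# The buyProduct schema and the file ID are illustrative placeholders.
assistant = client.beta.assistants.create(
    name="Customer Service Bot",
    model="gpt-3.5-turbo-1106",
    instructions=(
        "Answer customer questions using the attached knowledge base articles. "
        "When the customer asks to purchase a product, call buyProduct."
    ),
    tools=[
        {"type": "retrieval"},
        {
            "type": "function",
            "function": {
                "name": "buyProduct",
                "description": "Place an order for a product on the customer's behalf",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "product": {"type": "string", "description": "The product name or SKU"},
                        "quantity": {"type": "integer", "description": "How many units to buy"},
                    },
                    "required": ["product", "quantity"],
                },
            },
        },
    ],
    file_ids=["file-REPLACE_WITH_YOUR_KB_FILE_ID"],  # knowledge base file uploaded via the Files API
)
```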
I started by arranging the knowledge base articles into a JSON file, removed non-UTF-8 characters, and loaded it into the Assistant as a file. With that, I was able to have the Assistant answer questions using the information from the articles.
Next, I tried adding some functions (I even tried the sample get_stock_price function from the Assistants playground). Once you add a function, “retrieval” seems to stop working: every question you ask returns a “Run failed. Sorry, something went wrong.” message.
I can say this much: I have used both retrieval (five different files) and three different function calls in my assistant, and it works. I will, however, caveat that the files are not a traditional knowledge base; rather, they provide contextual information that the assistant needs to execute tasks. My actual knowledge base is dynamic and external, and I use function calls to access and retrieve information from there.
Due to confidentiality I can’t share the actual JSON file but as per above, in principle the combination of both capabilities can work.
I hope you can find a solution to make it work on your end.
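To illustrate the pattern described above (external knowledge base reached via a function call), a tool definition for it might look like the sketch below; the search_knowledge_base name, parameters, and backend are all hypothetical:

```python
# Sketch of the "external knowledge base via function call" pattern.
# The function name, parameters, and backend are hypothetical.
search_kb_tool = {
    "type": "function",
    "function": {
        "name": "search_knowledge_base",
        "description": "Search the external knowledge base and return matching articles",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search terms from the user's question"},
                "top_k": {"type": "integer", "description": "Maximum number of articles to return"},
            },
            "required": ["query"],
        },
    },
}
```

Your run handler would then execute the actual search (vector store, SQL, or anything else) and return the results via submit_tool_outputs.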
Sounds like an impressive setup. Some controls on the extent of the internally retrieved information would be the icing on the cake; hopefully that’s a coming-soon feature for the Assistants API.
Thank you @jr.2509. Do you use the 3.5 model or the 4?
The files you used, are they txt or some other format?
I have gotten an assistant working with both retrieval and functions, but I added both to it in the playground interface.
After adding functions, when you inspect the assistant in the playground does it still list the files that were there before?
If you don’t get the client.beta.assistants.update call exactly right, it can wipe the other tools from the assistant.
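A sketch of that pitfall and the fix with the Python SDK (using the get_stock_price example mentioned earlier in the thread; the IDs are placeholders):

```python
from openai import OpenAI

client = OpenAI()

# Pitfall: on update, `tools` (and `file_ids`) replace the previous values
# outright; they are not merged. Sending only the new function would
# silently drop retrieval. Re-send everything you want to keep.
client.beta.assistants.update(
    "asst_REPLACE_WITH_YOUR_ID",
    tools=[
        {"type": "retrieval"},  # must be re-sent or it disappears
        {
            "type": "function",
            "function": {
                "name": "get_stock_price",
                "description": "Get the current stock price for a ticker symbol",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "symbol": {"type": "string", "description": "The stock ticker symbol"}
                    },
                    "required": ["symbol"],
                },
            },
        },
    ],
    file_ids=["file-REPLACE_WITH_YOUR_FILE_ID"],  # likewise re-send attached files
)
```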
Hi @bookstaber, I am using the REST API directly. It does list the files in the playground.
Can someone post a complete example of this working? including the model, the instructions, the API functions and the content of the file(s)?
I have been tinkering with it all day with very little consistency and quality.
I am using 4 (GPT-4 Turbo). For now I am using docx files (as my content is in tables) but am looking to switch to CSV. I am very specific in my instructions as to when to use different tools and files.
You might also want to refine and specify your function description. This can have an impact on the quality of performance.
Very interesting thread. I hope someone knows the answer to my question:
I’ve built a travel assistant using the Assistants API and the gpt-3.5-turbo-1106 model. The bot is expected to take questions from customers and browse through three knowledge base JSON files to find the answers, like the Assistant in this thread.
The total number of characters across the JSON files is about 28k (two of the three files have 7k characters each; the other has 14k), so the knowledge base is not huge.
Unfortunately, the Assistant hallucinates on several topics: it provides answers completely unrelated to the knowledge base, or it says that it didn’t find anything in the knowledge base.
This is an example of what I mean:
The user asks if there are food tours in the city and in the related JSON file there is this specific snippet:
"Food Tours": [
  {
    "name": "Kensington Market Taste the World",
    "description": "Explore the culinary diversity with a guided food tour through Kensington Market.",
    "info": "Kensington Market 'Taste the World' Food Tour - Toronto Food Tours & Chocolate Tours | Tasty Tours Toronto"
  },
  {
    "name": "St. Lawrence Market & Old Toronto Food Tour",
    "description": "Discover Toronto's culinary history and sample local specialties at the city's iconic market and surrounding historic area.",
    "info": "ST. LAWRENCE MARKET FOOD TOUR | Culinary Adventure Co"
  }
],
But the Assistant never provides the answer contained in the knowledge base; it always responds with information from the model’s built-in knowledge, or says that it didn’t find anything related in the knowledge base.
How can I solve this issue? I can’t believe I’m not able to build a customer support Assistant that works properly with a normal-sized knowledge base.
I was thinking of adding some “keyword identification” logic: for example, when the user’s query contains the keyword “food tours”, the bot looks for the answer in the related JSON knowledge base. Is that a good solution, in your opinion? I already added this concept to the instructions, but it doesn’t work.
Hope someone can help me
following… running into this exact same problem. Not able to chain together a retrieval + function call, although I’m sure it is an oversight on my end.
Will share my solution in Node.js when I come to it. What language are you using, Python or Node? Apologies if you already stated it in your opening post.
Be as specific as you can in your instructions that the assistant should draw exclusively on the knowledge in the files you uploaded. Consider adding the filenames, as they might be available to the assistant as metadata. Negation can also sometimes help, e.g. “never do XYZ when answering a question.”
If the problem persists, try an iterative approach: start building the assistant with one knowledge base file until it responds reliably, then replicate the approach across more files. Describing the structure of your files in the instructions may also help.
Your keyword idea could work, provided you use the keywords in your knowledge base as well. You could then instruct the assistant to first classify the user query into one of your defined keywords and then execute the search in the knowledge base based on that (a rough sketch of that routing follows below).
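Here is a minimal sketch of that keyword-routing idea in Python, done on the application side before the message reaches the assistant (the categories, file mapping, and matching logic are all illustrative):

```python
# Sketch: classify the query into a keyword, then steer the assistant
# toward the matching knowledge base file. All names are illustrative.
KEYWORD_TO_FILE = {
    "food tours": "file-FOOD_TOURS_KB",
    "museums": "file-MUSEUMS_KB",
    "day trips": "file-DAY_TRIPS_KB",
}

def route_query(user_query: str) -> str | None:
    """Naive substring match; a real system might use a cheap classification call."""
    query = user_query.lower()
    for keyword, file_id in KEYWORD_TO_FILE.items():
        if keyword in query:
            return file_id
    return None

file_id = route_query("Are there any food tours in the city?")
if file_id:
    # e.g. attach the file to the message, or name the file to search
    # via additional_instructions on the run.
    print(f"Route this question to knowledge base file {file_id}")
```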
For those who asked for more information, here it is:
I created a very simple example, hoping that I can expand from there. As you can see below, it showed the same instability and issues. I would really like to know if anyone has the formula for making this work, and if so - how?!
1. I created a txt file with the following content:
“The information below describes the house that the user is currently looking to buy:
The house size is 2000 square feet.
The house address is 5453 Ellenvale Ave., Woodland Hills, CA 91367.
The house exterior color is green.
There are 3 bedrooms and 2 bathrooms in the house.”
2. I created an Assistant via the playground, using the gpt-3.5-turbo-1106 model.
I included the following instructions: “You are a helpful real estate agent, answering questions about a house that the user is looking to buy.”
The Assistant tools include “retrieval” with the file above, and the “function” below -
{
  "name": "get_price",
  "parameters": {
    "type": "object",
    "properties": {
      "zipcode": {
        "type": "string",
        "description": "The zip code of the house"
      }
    },
    "required": ["zipcode"]
  },
  "description": "Get the current price of the house"
}
3. Then, I used the playground; here are the results:
a. Asking for the number of bedrooms in the house results in an invocation of the get_price function, despite the fact that the data is in the attached file.
b. Asking again sometimes results in a generic error.
c. In some cases, there are errors accessing the file.
d. And lastly, sometimes the Assistant is not able to understand the information in the file.
I just posted a complete Python app that implements both retrieval and functions here: GitHub - dbookstaber/OpenAI_Assist_All_Tools: Simple demo of OpenAI Assistant with all tool types
The only thing that has been unreliable while I’ve played with it is the retrieval of CSV contents. I didn’t get that to work until I uploaded the CSV separately through playground, and even then it took a few queries.
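For readers who want the shape of it without opening the repo, the core pattern for combining both tools in one run looks roughly like this. This is a sketch against the v1 Assistants API, not the repo’s exact code; get_price is the toy function from earlier in the thread, and the assistant ID is a placeholder:

```python
import json
import time

from openai import OpenAI

client = OpenAI()

ASSISTANT_ID = "asst_REPLACE_WITH_YOUR_ID"  # assistant configured with retrieval + get_price

def get_price(zipcode: str) -> str:
    """Stand-in for a real price lookup; illustrative only."""
    return json.dumps({"zipcode": zipcode, "price_usd": 850_000})

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="How many bedrooms does the house have, and what does it cost?",
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=ASSISTANT_ID)

# Retrieval runs server-side, but function calls surface as a
# `requires_action` state: the run stalls until we submit tool outputs.
while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
    if run.status == "requires_action":
        outputs = []
        for call in run.required_action.submit_tool_outputs.tool_calls:
            if call.function.name == "get_price":
                args = json.loads(call.function.arguments)
                outputs.append({"tool_call_id": call.id, "output": get_price(**args)})
        run = client.beta.threads.runs.submit_tool_outputs(
            thread_id=thread.id, run_id=run.id, tool_outputs=outputs
        )

messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)  # messages are returned newest first
```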
Have you managed to solve the problem? The same thing is happening to me.
I’m running into a similar problem, and I’d be interested to hear how far people have got.
The easiest way to describe my use-case is as follows:
The assistant has access to some files, and the knowledge retrieval tool is enabled.
The assistant also has function calling, in order to return a response in a JSON / structured format.
Imagine that the input text is a support email. The assistant should look within the attached files, and then provide knowledge based answers.
The assistant should respond with say 3 “possible replies” in a structured JSON format, so that it can be appropriately and reliably displayed on the front-end.
At the moment I’m not sure if it’s possible to use both tools like this within a single “run”.
One possible solution is to do multiple steps: get the knowledge needed, then form some replies using function calling (see the sketch below).
Any help or input would be greatly appreciated
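One way to get the structured part reliably is to define a function whose only job is to carry the final answer, and instruct the assistant to answer from the retrieved files and then always finish by calling it. A sketch follows; the propose_replies name and schema are hypothetical, not an official API:

```python
# Sketch: a "response formatter" function tool. The assistant is instructed
# to answer from the knowledge files and then ALWAYS call propose_replies.
# The name and schema are hypothetical.
propose_replies_tool = {
    "type": "function",
    "function": {
        "name": "propose_replies",
        "description": "Return exactly three possible replies to the support email",
        "parameters": {
            "type": "object",
            "properties": {
                "replies": {
                    "type": "array",
                    "items": {"type": "string"},
                    "minItems": 3,
                    "maxItems": 3,
                    "description": "Three candidate replies, grounded in the knowledge files",
                }
            },
            "required": ["replies"],
        },
    },
}
```

When the run hits requires_action, you parse call.function.arguments as your structured JSON for the front-end, then submit a trivial tool output (e.g. “ok”) so the run can complete.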
No solution yet, I am still tinkering around trying to find consistency.
You could try setting the file_ids array in the request body when creating a message:
https://platform.openai.com/docs/api-reference/messages/createMessage
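In the Python SDK that looks like the following sketch (the file ID is a placeholder):

```python
from openai import OpenAI

client = OpenAI()
thread = client.beta.threads.create()

# Sketch: attach a file at the message level instead of (or in addition to)
# the assistant level. The file ID is a placeholder.
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Are there any food tours in the city?",
    file_ids=["file-REPLACE_WITH_YOUR_KB_FILE_ID"],
)
```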
how will that address what I am asking about?
Has anyone found a way to get retrieval and functions to work together?
Note that in the dashboard, ‘retrieval’ will switch off automatically when you switch to a model that doesn’t support it, and it will not automatically switch back on when you return to the old model.
I’ve been experimenting with the OpenAI Assistants API, specifically with retrieval, code_interpreter, and function_calling. I’ve shared my projects on my blog, and I’ve discovered some strategies that have helped me optimize the API’s performance:
My projects are shared here (hope it’s helpful): https://arunprakash.ai
- Reduce message counts in the thread: High message counts can overload the model with tokens, potentially causing it to miss important information. Consider summarizing or truncating older messages to manage the context size.
- Utilize multiple assistants: Depending on your use case, you might benefit from using multiple assistants in the same thread. This can allow for specialized responses based on the context of the conversation.
- Leverage the additional_instructions parameter: This parameter can be used to append instructions to the existing ones in a thread, rather than replacing them. This can be particularly useful for dynamic instructions (see the sketch after this list).
- Incorporate files in the messages: If your use case allows, adding files to the messages can enhance the assistant’s understanding and response accuracy.
- Invest time in crafting instructions: The quality of the instructions and additional instructions can significantly impact the performance of the assistant. It’s worth investing time in crafting clear, precise instructions.
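For the additional_instructions point above, the usage looks like this in the Python SDK (a sketch; the IDs and instruction text are placeholders):

```python
from openai import OpenAI

client = OpenAI()

# Sketch: `additional_instructions` is appended to the assistant's base
# instructions for this run only (unlike `instructions` on the run, which
# overrides them). The IDs and instruction text are placeholders.
run = client.beta.threads.runs.create(
    thread_id="thread_REPLACE_WITH_YOUR_ID",
    assistant_id="asst_REPLACE_WITH_YOUR_ID",
    additional_instructions=(
        "For this question, search only the food tours file and cite the "
        "tour names exactly as written there."
    ),
)
```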
Happy coding!