I'm trying to create sort of a "robot assistant" that can answer questions and perform tasks specified as tools or functions.
Everything works nicely, but I want it to be more robust: if the user gives an argument that is not listed in a tool's enum list, it should reject the request and reply with a message like:
"Sorry, the requested location for get_weather is not allowed. Please use one of the following locations: USA or France."
I'm using gpt-3.5-turbo-1106. I load the following "system" message in beforehand:
```json
"system": "You are a helpful robot assistant called Spot, that can answer questions and perform specific tasks specified in the tools. If the user tries to request a tool call with an argument not found in the enum list of the tool, you must ask the user for a correct argument. Tools are only allowed to be called with the arguments specified in the given tool."
```
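Roughly, the request looks like this (a minimal sketch using the openai Python client v1.x; `SYSTEM_PROMPT` and `tools` stand in for the system text above and the tool list below):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},  # the system text above
        {"role": "user", "content": "What's the weather in Germany?"},
    ],
    tools=tools,        # the tool list, one entry of which is shown below
    tool_choice="auto",
)
```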
Here is an example of how one of my tools is defined:
```json
{
  "type": "function",
  "function": {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
      "type": "object",
      "properties": {
        "location": {
          "type": "string",
          "description": "The country",
          "enum": ["USA", "France"]
        },
        "unit": {
          "type": "string",
          "enum": ["celsius", "fahrenheit"]
        }
      },
      "required": ["location"]
    }
  }
}
```
The place to do this is in the descriptions of the individual properties, with text like:
This value must be obtained directly from user input, not by AI guessing. If the user has not stated their preference from the allowed choices, the function cannot be used; instead, clarifying questions to the user are required.
And then refine until there is some hint that the AI isn’t making things up.
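For the get_current_weather tool above, that could look like this (only the location property shown, with the suggested text appended to the existing description):

```json
"location": {
  "type": "string",
  "description": "The country. This value must be obtained directly from user input, not by AI guessing. If the user has not stated their preference from the allowed choices, the function cannot be used; instead, clarifying questions to the user are required.",
  "enum": ["USA", "France"]
}
```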
Thanks! It worked for the get_weather function. I then tried it with a more complex function, where I'm having a similar issue, and I'm currently experimenting with different descriptions. Do you have any suggestions for a function like this one?
```json
{
  "type": "function",
  "function": {
    "name": "go_to_goal_location",
    "description": "Navigate to the specified goal location",
    "parameters": {
      "type": "object",
      "properties": {
        "goal_location": {
          "type": "string",
          "description": "This value must be obtained directly from user input, not by AI guessing. If the user has not stated their preference from the allowed choices, the function cannot be used; instead, clarifying questions to the user are required.",
          "enum": ["living_room", "bathroom", "garage", "kitchen", "bedroom", "office"]
        },
        "go_through_locations": {
          "type": "array",
          "description": "This value must be obtained directly from user input, not by AI guessing. If the user has not stated their preference from the allowed choices, the function cannot be used; instead, clarifying questions to the user are required.",
          "items": {
            "type": "string",
            "enum": ["living_room", "bathroom", "garage", "kitchen", "bedroom", "office"]
          }
        }
      },
      "required": ["goal_location"]
    }
  }
}
```
Note: here the first parameter is required while the latter is optional.
When I pass "please go to garden", it tries to go there instead of asking the user for one of the allowed locations.
You can use the main function description to write several paragraphs about how the function operates, separated by linefeeds (\n), including point-by-point bullet guidance. Treat the whole thing as a prompt to the AI about what the function does and returns, when it is needed, and how to ascertain the values to place within it. This is much more effective than trying to instruct function operations in a system role message.
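For example, a sketch of what your go_to_goal_location description could become (the wording is only a starting point to refine):

```json
"description": "Navigate the robot to one allowed goal location.\n\nGuidance:\n- Only call this function when the user has explicitly named one of the allowed locations.\n- Never guess, infer, or substitute a location the user did not state.\n- If the user requests a location outside the allowed list (e.g. 'garden'), do not call the function; instead, tell the user which locations are allowed and ask them to pick one."
```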
You can specifically refer to the user conversation history as a source of information: "all parameters for this API function must be directly available and provided in prior chat with the user, otherwise the assistant must continue interviewing the user…"
It's all just words placed into context that mirror the function-calling training, which the AI is worse at following these days.
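And if prompting alone isn't reliable enough, you can add a hard backstop by validating the arguments in your own code before executing the call; a minimal sketch (go_to_goal_location here stands for your real navigation handler):

```python
import json

ALLOWED_LOCATIONS = {"living_room", "bathroom", "garage", "kitchen", "bedroom", "office"}

def handle_tool_call(tool_call):
    """Check enum arguments before executing; return an error string otherwise."""
    args = json.loads(tool_call.function.arguments)
    candidates = [args.get("goal_location")] + args.get("go_through_locations", [])
    bad = [str(loc) for loc in candidates if loc not in ALLOWED_LOCATIONS]
    if bad:
        return (f"Sorry, the requested location(s) {', '.join(bad)} for "
                f"go_to_goal_location are not allowed. Please use one of: "
                f"{', '.join(sorted(ALLOWED_LOCATIONS))}.")
    return go_to_goal_location(**args)  # your real executor
```

Feeding that string back to the model as the tool result lets it apologize and re-ask in its own words, however well it follows the descriptions.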