I’m using Azure OpenAI with GPT-4 to build a conversational assistant that resolves user queries. The assistant is supposed to follow a predefined set of instructions, including calling provided functions for specific tasks.
The following is a trimmed-down version of the prompt I am giving to GPT-4:
```
You are an AI assistant for xyz company. Follow the instructions to resolve user queries.
- Ask for name and address.
- Ask for date of birth.
- Ask the user if they want OptionA or OptionB by calling the provided function prompt_user_to_choose_an_option.
- Do a search with the chosen option.
```
I have defined the following function to prompt the user with the options:
```json
{
  "name": "prompt_user_to_choose_an_option",
  "description": "Display the options for the user to choose from (OptionA or OptionB)",
  "parameters": {
    "type": "object",
    "properties": {
      "options": {
        "type": "array",
        "items": {
          "type": "string"
        },
        "description": "The list of available options for the user to choose from."
      }
    },
    "required": [
      "options"
    ]
  }
}
```
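For completeness, here is roughly how the function is registered in the request. This is a minimal sketch in the style of the pre-1.0 `openai` Python SDK's Chat Completions payload; the system message is abbreviated and the conversation history is a placeholder. Only the payload is constructed here (sending it requires Azure credentials and a deployment name):

```python
# Sketch of the Chat Completions request body sent to Azure OpenAI.
# Payload construction only -- no network call is made here.

# Function schema, as defined in the question.
prompt_function = {
    "name": "prompt_user_to_choose_an_option",
    "description": "Display the options for the user to choose from (OptionA or OptionB)",
    "parameters": {
        "type": "object",
        "properties": {
            "options": {
                "type": "array",
                "items": {"type": "string"},
                "description": "The list of available options for the user to choose from.",
            }
        },
        "required": ["options"],
    },
}

request_payload = {
    "messages": [
        {"role": "system", "content": "You are an AI assistant for xyz company. ..."},
        # ...conversation so far (name/address and date of birth already collected)...
    ],
    "functions": [prompt_function],
    "function_call": "auto",  # the model decides whether to call the function
}
```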
But at step 3, instead of returning the function call, GPT goes ahead and asks the user with a plain-text prompt ("Do you want OptionA or OptionB?") without calling the given function. If the user responds with, say, "OptionB", it then returns the function call that should have been returned in the previous step.
Why does this happen? Why doesn't GPT return the function call instead of prompting the user directly in plain text? I need this prompt to go through the function call because I have to display a specific UI element to the user at this step.
I have tried adding instructions telling GPT to never use a text-based prompt when asking the user for options and to use the function instead. That works sometimes, but not consistently. Is there a proper, consistent solution to this problem?
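For what it's worth, the one alternative I'm aware of is forcing the call by naming the function in the `function_call` parameter instead of leaving it on `"auto"` — the Chat Completions API accepts `{"name": ...}` there to require a call to that specific function. That only helps if my application already knows the current turn is step 3, which is part of what I'm unsure about. A sketch (payload construction only, schema as defined above):

```python
# Forcing the model to call a specific function by name instead of "auto".
# Payload construction only -- sending it requires Azure credentials.

forced_payload = {
    "messages": [
        # ...conversation so far...
    ],
    "functions": [
        {
            "name": "prompt_user_to_choose_an_option",
            "description": "Display the options for the user to choose from (OptionA or OptionB)",
            "parameters": {
                "type": "object",
                "properties": {
                    "options": {
                        "type": "array",
                        "items": {"type": "string"},
                        "description": "The list of available options for the user to choose from.",
                    }
                },
                "required": ["options"],
            },
        }
    ],
    # Naming a function here forces the model to return a call to it,
    # rather than letting it choose between plain text and a function call.
    "function_call": {"name": "prompt_user_to_choose_an_option"},
}
```

The downside is that the application, not the model, has to track which step of the script it is on before setting this parameter.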