GPT-3.5 hallucinates in responses despite function calls

I’m developing a ChatGPT-based chatbot that answers a cafeteria’s customers over WhatsApp. Instead of passing the menu directly into the prompt, I use a function call that returns it from a separate function. In several tests the response was a success: the function call worked and the chatbot took the information from the menu and passed it on to the customer via WhatsApp. Sometimes, though, the chatbot hallucinated and gave me values that were not consistent with what was on the menu.
I need something more robust so that the chatbot doesn’t give wrong information to my customers.

Here is how I’m prompting it, and what the menu looks like:

  • In the prompt, I tried to trigger the function calls with natural-language narratives, as in the example below.
  • The menu started to produce better responses after I passed the information in JSON format.


From now on, you will be the virtual attendant of a snack bar called Ajudai, and you will communicate with customers through WhatsApp. After the customer sends the first message, you should greet them politely and it is essential to introduce yourself as AjudaiBot, our virtual assistant. Additionally, you should ask how you can assist them and provide the link to the menu.

If you need information about burger flavors, ingredients, sizes, and prices, as well as beverages and their prices, request a function call to the function named “get_menu_product_info”.


        [
            {
                "Burger Flavor": "X-Burger",
                "Burger Ingredients": "Bun, beef patty, cheese, mayonnaise",
                "Burger Price": "$24.00"
            },
            {
                "Burger Flavor": "X-Salad",
                "Burger Ingredients": "Bun, beef patty, cheese, mayonnaise, lettuce, tomato",
                "Burger Price": "$18.00"
            },
            {
                "Burger Flavor": "X-Calabrese",
                "Burger Ingredients": "Bun, beef patty, calabrese sausage, cheese, mayonnaise",
                "Burger Price": "$20.00"
            }
        ]
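To make the setup above concrete, here is a minimal sketch (an assumption about your implementation, not code from your system) of how `get_menu_product_info` could be kept outside the prompt and exposed to the model through a function schema:

```python
import json

# Hypothetical menu data kept out of the prompt; the function call
# returns only what the model asks about, as a JSON string.
MENU = {
    "X-Burger": {"ingredients": "Bun, beef patty, cheese, mayonnaise",
                 "price": "$24.00"},
    "X-Salad": {"ingredients": "Bun, beef patty, cheese, mayonnaise, lettuce, tomato",
                "price": "$18.00"},
    "X-Calabrese": {"ingredients": "Bun, beef patty, calabrese sausage, cheese, mayonnaise",
                    "price": "$20.00"},
}

def get_menu_product_info(flavor=None):
    """Return menu data as JSON; the whole menu if no flavor is given."""
    if flavor is None:
        return json.dumps(MENU)
    item = MENU.get(flavor)
    if item is None:
        return json.dumps({"error": f"'{flavor}' is not on the menu"})
    return json.dumps({flavor: item})

# Schema passed to the Chat Completions API so the model knows when
# and how to call the function (names here are illustrative).
GET_MENU_SCHEMA = {
    "name": "get_menu_product_info",
    "description": "Get burger flavors, ingredients, and prices from the live menu.",
    "parameters": {
        "type": "object",
        "properties": {
            "flavor": {
                "type": "string",
                "description": "Optional burger flavor, e.g. 'X-Burger'. Omit for the full menu.",
            }
        },
    },
}
```

Returning an explicit `error` object for unknown flavors gives the model something concrete to relay instead of room to improvise.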

Of course, the prompt has many other instructions for the chatbot, and the menu is much larger than this. If necessary, I can include all of the information here.

I’d be very grateful if somebody could help me with this.

You have the good idea here of giving the AI its job description. However, the irrelevant details could be confusing the AI into thinking it posts directly to WhatsApp.
Let’s try again.

You are AjudaiBot, the online AI customer-service representative of the Ajudai snack bar, Lisbon. You have no pre-trained information about Ajudai: you must query a data-retrieval function that provides answers relevant to a search request, and you only answer verbatim from the function information provided by a successful search. You only engage in interactions about our snack bar; no extended chat and no going off-topic.

Then you should provide some more functions so the AI can really do searches; otherwise you might as well load up the context with everything it needs to know.

function: menu_list
parameters: enum [appetizers, burgers, desserts, drinks]

function: menu_item_ingredients
parameter: menu_item_key

function: menu_item_modification_prices
parameters: menu_item_key, addl_items

function: location_hours_history
parameters: topic enum [locations, directions, hours, history, company, owners, chatbot limitations, …]

function: disconnect_user
parameters: terminate_reason enum [success, user_bye, rude, uninterested, off-topic, hacking]
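As one hedged sketch of how these hypothetical functions could be spelled out for the API (the names and enums are illustrative, taken from the list above, not a finished design), each becomes a JSON Schema entry in the `tools` array:

```python
# Illustrative tool schemas for a few of the functions sketched above,
# in the format the Chat Completions API accepts.
TOOLS = [
    {"type": "function", "function": {
        "name": "menu_list",
        "description": "List the menu items in one category.",
        "parameters": {
            "type": "object",
            "properties": {
                "category": {
                    "type": "string",
                    "enum": ["appetizers", "burgers", "desserts", "drinks"],
                },
            },
            "required": ["category"],
        },
    }},
    {"type": "function", "function": {
        "name": "menu_item_ingredients",
        "description": "Ingredients for one menu item.",
        "parameters": {
            "type": "object",
            "properties": {"menu_item_key": {"type": "string"}},
            "required": ["menu_item_key"],
        },
    }},
    {"type": "function", "function": {
        "name": "disconnect_user",
        "description": "End the conversation for the given reason.",
        "parameters": {
            "type": "object",
            "properties": {
                "terminate_reason": {
                    "type": "string",
                    "enum": ["success", "user_bye", "rude",
                             "uninterested", "off-topic", "hacking"],
                },
            },
            "required": ["terminate_reason"],
        },
    }},
]
```

The `enum` constraint is what keeps the model from inventing categories that don't exist.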

The most successful approach would be a vector database that preloads the AI with information similar to the input query.
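The mechanics of that retrieval step can be sketched like this. In production you would embed text with an embedding model and store the vectors in a vector database; the bag-of-words "embedding" below is only a stand-in so the ranking logic is runnable without network calls:

```python
import math
from collections import Counter

# Toy corpus: in a real system these would be chunks of menu and
# company information.
DOCS = [
    "X-Burger: bun, beef patty, cheese, mayonnaise, $24.00",
    "X-Salad: bun, beef patty, cheese, mayonnaise, lettuce, tomato, $18.00",
    "Opening hours: Monday to Saturday, 10:00 to 22:00",
]

def embed(text):
    """Stand-in embedding: lowercase word counts (use a real embedding model in practice)."""
    return Counter(text.lower().replace(",", " ").replace(":", " ").split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

DOC_VECTORS = [(doc, embed(doc)) for doc in DOCS]

def retrieve(query, k=1):
    """Return the k documents most similar to the query; these get
    prepended to the model's context before it answers."""
    q = embed(query)
    ranked = sorted(DOC_VECTORS, key=lambda dv: cosine(q, dv[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]
```

The retrieved snippets are then injected into the prompt (or returned by a search function), so the model answers from them rather than from its pre-training.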


Many thanks for the reply! I’m going to make some changes to my prompt. Still, I have a few questions:

  • The information about the functions you sent, e.g.:
    “function: menu_item_ingredients
    parameters: enum [appetizers, burgers, desserts, drinks]”

where should this be placed? Within the prompt itself, or should I create a separate function for each piece of information?

  • Another thing I’m interested in is how I could build this vector database that preloads the AI. Could you explain it in more detail?
    Thank you very much!

Those are all just hypothetical functions for a rich database you might have. The AI can call functions recursively, so you can give it one with base knowledge (“burgers”) and others that rely on it, like a query about a specific burger “model number”.

Then it can call several of your knowledge functions to answer an odd user question, like “what burgers at your mesa street location come with lettuce and allow the option of additional beef patty and would total under $15?”

How would you get it to do multiple function calls like that? Do you just have a function that identifies relevant functions based on a user query?

When the AI is given a full conversation history that lets it see its own function calls and their results, plus useful function descriptions, it will continue calling functions on its own until the question is answerable.
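That loop can be sketched as follows. `call_model` stands in for the real Chat Completions request (here it is scripted so the example runs offline), but the control flow is the real pattern: execute each requested call, append the assistant turn and the function result to the history, and re-send until a reply arrives with no function call:

```python
import json

# Hypothetical local implementations of two knowledge functions.
FUNCTIONS = {
    "menu_list": lambda category: json.dumps(["X-Burger", "X-Salad"]),
    "menu_item_ingredients": lambda menu_item_key: json.dumps("bun, patty, lettuce"),
}

# Scripted "model" turns: two function calls, then a final answer.
SCRIPT = [
    {"function_call": {"name": "menu_list",
                       "arguments": '{"category": "burgers"}'}},
    {"function_call": {"name": "menu_item_ingredients",
                       "arguments": '{"menu_item_key": "X-Salad"}'}},
    {"content": "The X-Salad comes with lettuce."},
]

def call_model(messages):
    # Stand-in for the API call: pick the scripted turn matching how
    # many function results we have sent back so far.
    return SCRIPT[sum(1 for m in messages if m.get("role") == "function")]

def run_conversation(user_text):
    messages = [{"role": "user", "content": user_text}]
    while True:
        reply = call_model(messages)
        if "function_call" not in reply:
            return reply["content"]          # final user-facing answer
        name = reply["function_call"]["name"]
        args = json.loads(reply["function_call"]["arguments"])
        result = FUNCTIONS[name](**args)
        # Record both the model's call and its result, so the next
        # request shows the model what it already looked up.
        messages.append({"role": "assistant", "content": None,
                         "function_call": reply["function_call"]})
        messages.append({"role": "function", "name": name, "content": result})
```

Because every call and result stays in `messages`, the model can chain lookups (category list, then ingredients) to answer a compound question.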

Perfect! Let me see if I understood: you are saying it’s better for the AI to call more specific functions, e.g. one function for the burgers, another for the ingredients, another for the prices, and another for establishment information (opening hours, street address, etc.)?

I’m dealing with another problem in my prompt:
For my client’s restaurant we must extract an “order summary” from the conversation, with the customer’s name, the foods ordered, total price, payment method, delivery address, etc. The chatbot already captures all of this information during the conversation without problems, thanks to the instructions previously given in the prompt. The problem is: even when told that it must send the order summary when the customer completes the order, it only does so some of the time. Do you know how to make this fixed and obligatory, so the chatbot sends it for every order fulfillment?