Assistant going through a database sometimes hallucinates IDs

Hi! I have an assistant that is a chatbot taking orders for a pastry shop. It has one file attached with the list of products: names, descriptions, IDs, prices, etc. It looks something like this:

		"Name":"Carrot cake",				
		"Description":"homemade carrot cake"
		"Name":"Banana cake",				
		"Description":"made with very fresh bananas"
		"Name":"Strawberry cake",				
		"Description":"small cake that comes with toppings"

And I have a function (tool) that the assistant calls when it receives an order from a customer, e.g. “I want 2 carrot cakes and 1 strawberry cake”; the function returns the ID and Name of each product.

With this ID I then call my own backend to process the order with all the selected products.

It works well most of the time; however, sometimes the model hallucinates the IDs. Instead of returning, for example:

"Id":"2", "Name":"Banana cake"

it returns

"Id":"57", "Name":"Banana cake"

with a completely made-up ID. My backend then fails to find a product with that ID.

Have you had this issue? Also, do you have any workaround for not depending on the ID of items when parsing data that needs to match a database?

You have lots of options.

Some, off the top of my head:

  • Store the products as enums and then just have GPT send the product names instead of the IDs. Your back-end should be able to fuzzy match the names and reject hallucinations / products not on the menu

  • Send BOTH the ID and product name as a fallback
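A minimal sketch of the first option, using Python's standard-library `difflib` for the fuzzy match. The catalog, product names, and IDs here are placeholders, not your actual data:

```python
import difflib

# Hypothetical catalog: canonical product name -> backend ID.
CATALOG = {
    "Carrot cake": 1,
    "Banana cake": 2,
    "Strawberry cake": 3,
}

def resolve_product(name, cutoff=0.8):
    """Fuzzy-match a model-supplied product name against the catalog.

    Returns a (canonical_name, id) pair, or None if nothing on the
    menu is close enough, i.e. the model invented a product.
    """
    lowered = {k.lower(): k for k in CATALOG}
    matches = difflib.get_close_matches(
        name.lower(), list(lowered), n=1, cutoff=cutoff
    )
    if not matches:
        return None
    canonical = lowered[matches[0]]
    return canonical, CATALOG[canonical]
```

With this approach the model never touches IDs at all: `resolve_product("banana cake")` maps back to the canonical entry, while an off-menu name like "Chocolate cake" falls below the cutoff and returns `None`, which your backend can surface as "not on the menu".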


Thanks for your answer :pray:. I was about to implement your second option and you just confirmed that I’m on the right track.
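For the second option, the backend can cross-check the two fields and only fall back to the name when the ID is bogus. A minimal sketch, assuming a hypothetical lookup table and item schema like the one in the question:

```python
# Hypothetical lookup tables built from the product file.
PRODUCTS_BY_ID = {1: "Carrot cake", 2: "Banana cake", 3: "Strawberry cake"}
PRODUCTS_BY_NAME = {name: pid for pid, name in PRODUCTS_BY_ID.items()}

def resolve_order_item(item):
    """Return a verified product ID, or None if both fields are bogus.

    `item` is one entry from the tool call, e.g.
    {"Id": 57, "Name": "Banana cake"}.
    """
    pid = item.get("Id")
    name = item.get("Name")
    # Trust the ID only if it exists and agrees with the name.
    if pid in PRODUCTS_BY_ID and PRODUCTS_BY_ID[pid] == name:
        return pid
    # ID is hallucinated or inconsistent: fall back to the name.
    return PRODUCTS_BY_NAME.get(name)
```

So a hallucinated ID with a valid name still resolves, and an order where both fields are wrong is rejected rather than sent to the backend.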

To reduce hallucination, you can show the model examples of incorrect responses and explain why they are incorrect. If one of the reasons is an incorrect ID, this should help. “Product name does not exist in the list” is another example you can include, just in case. It won’t completely fix it, but it should help.
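For instance, the system prompt could be extended with a couple of negative examples along these lines. The wording, IDs, and product names are illustrative, not a tested prompt:

```python
# Hypothetical few-shot additions to the system prompt: show the model
# what an incorrect response looks like and why it is wrong.
BAD_EXAMPLES = """\
Incorrect response: {"Id": "57", "Name": "Banana cake"}
Why it is incorrect: ID 57 does not exist in the attached product list.

Incorrect response: {"Id": "2", "Name": "Chocolate cake"}
Why it is incorrect: the product name does not exist in the list.
"""

SYSTEM_PROMPT = (
    "Only return Id/Name pairs copied verbatim from the attached file.\n"
    "Examples of responses you must never produce:\n" + BAD_EXAMPLES
)
```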
