Unexpected behavior when using function calling via tools in the Chat Completions API

Hi everyone! I’m attempting to parse chat text using the method described here, and it’s proving surprisingly difficult to get sensible output. Does anyone have experience doing something similar?

To be more specific, I’m building a chatbot and would like to extract simple boolean values from user replies – questions of the type “did the user mention specific thing X?” To do this I am defining a “tool” for each of my questions and using the description fields to describe what the tool ought to do. However, I’m not getting very good performance: the output often doesn’t match the descriptions I’ve written. I’m a bit surprised, since this seems like a fairly simple parsing problem.

Have any of you tried to do something similar? Any advice on how to get it to work better? Is there a better overall approach? I haven’t tried to use the Assistants API yet since it’s just text parsing, but I suppose that’s an option. Any advice is appreciated. Thanks!
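For illustration, here is a minimal sketch of this kind of setup (the tool name, description, and model name are hypothetical, and the model call itself is mocked out – in practice the request body would be sent to the Chat Completions endpoint and the arguments read from the response’s `tool_calls`):

```python
import json

# Hypothetical tool definition: one boolean "question" per tool,
# following the Chat Completions `tools` schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "report_refund_mention",  # illustrative name
            "description": (
                "Report whether the user's last message mentions "
                "wanting a refund."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "mentioned": {
                        "type": "boolean",
                        "description": "True if the user asked about a refund.",
                    }
                },
                "required": ["mentioned"],
            },
        },
    }
]

# The request body includes `tools`; `tool_choice` can be pinned to the
# function so the model is forced to call it rather than reply in text.
request_body = {
    "model": "gpt-4o-mini",  # assumed model name
    "messages": [{"role": "user", "content": "Can I get my money back?"}],
    "tools": tools,
    "tool_choice": {
        "type": "function",
        "function": {"name": "report_refund_mention"},
    },
}

# Parsing the (mocked) response: the tool-call arguments arrive as a
# JSON-encoded string, not a dict.
mock_tool_call_arguments = '{"mentioned": true}'
args = json.loads(mock_tool_call_arguments)
print(args["mentioned"])  # -> True
```

Pinning `tool_choice` matters here: without it, the model is free to answer in plain text and skip the tool entirely.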

You don’t “make” tools with function calling.

You build the tools yourself, in code that performs the actions.

You then describe, in your API call, the tools you have programmed and which the AI can call upon.

Does your tool retrieve knowledge that can inform the AI? Does your tool trigger an action in an external service? Does it fulfill a user’s input in any way?

It seems what you might actually want is to prompt a separate AI to scan inputs or outputs. For example, one might ask an AI, before the ultimate one, “Did the user mention or attempt to invoke racist or hateful content?”
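A classifier pass like that could be sketched as follows (the prompt wording is illustrative, and the model call is stubbed out – in a real setup you would send `messages` to the Chat Completions endpoint and read the reply from the response):

```python
# Sketch of a pre-screening classifier call. The model reply is mocked;
# in practice `messages` would be sent to the Chat Completions API.

def build_classifier_messages(user_input: str) -> list[dict]:
    """Build a messages list that asks the model for a one-word verdict."""
    return [
        {
            "role": "system",
            "content": (
                "You are a classifier. Answer with exactly one word, "
                "'yes' or 'no': did the user mention or attempt to "
                "invoke racist or hateful content?"
            ),
        },
        {"role": "user", "content": user_input},
    ]

def parse_classifier_reply(reply: str) -> bool:
    """Map the model's one-word answer to a boolean.

    Anything other than a clear 'yes' is treated as a non-match.
    """
    return reply.strip().lower().rstrip(".") == "yes"

messages = build_classifier_messages("What's the weather like today?")
mock_reply = "no"  # stand-in for the model's actual answer
flagged = parse_classifier_reply(mock_reply)
print(flagged)  # -> False
```

Constraining the model to a one-word answer keeps the parsing trivial and tends to be more reliable than hoping a free-form reply matches a description.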

Thank you for the reply! It helped me solve my problem.