Giving Tools to the LLM vs Guiding the LLM and Using Tools

In open-ended use cases like chatbots and question answering, how do you weigh giving the LLM a set of tools and letting it operate freely versus building structure into the flow and forcing it to use specific tools?

e.g., for a recipe chatbot I could have it determine the cuisine the user wants, then have a different function for each cuisine to generate the recipe. Or I could provide a tool set covering all the cuisines behind a single LLM endpoint and let it choose what to do itself.

Depending on the use case it seems like you'd want more guardrails to help the bot succeed, but the flip side of guiding the LLM is that you can end up in a tree-like scenario where the paths are defined too strictly and the LLM can't adapt.

Has anyone ended up going from one framework to the other, or used a hybrid of the two?

Question for you: have you considered using RAG to retrieve recipe texts instead of using tools?

At least from what you describe, while we could certainly work out some tool use, this sounds more like retrieving specific recipes based on what the user is asking for, which is a problem RAG is better suited to.

That was a poor example; here's a different one:
Let’s say you have a chatbot for a cookbook you wrote. Users can ask questions about the cookbook, ingredients, the author, modifying a recipe, or even creating a new recipe inspired by the cookbook recipes.

Flow 1 would be: determine what the user wants to do, then have a different function / completion to handle each task (modifying a recipe, answering questions about the author, etc.). Within each of these cases you can definitely use RAG to pull in relevant context (e.g., when someone asks where the author grew up). The possible paths and functions are fixed in the code: if the user asks about the author, use this completion and context; if the user wants to modify a recipe, use that function and context; and so on.
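To make Flow 1 concrete, here's a minimal sketch. All the names are hypothetical, and `classify_intent` is a keyword stub standing in for what would really be an LLM completion that maps a message to one of a fixed set of intents:

```python
# Flow 1 sketch: classify the intent first, then dispatch to a dedicated
# handler chosen in code. Every function here is a hypothetical placeholder.

def classify_intent(user_message: str) -> str:
    """Stand-in for an LLM call that maps a message to one known intent."""
    keywords = {
        "author": "author_question",
        "modify": "modify_recipe",
    }
    msg = user_message.lower()
    for keyword, intent in keywords.items():
        if keyword in msg:
            return intent
    return "general_question"

def answer_author_question(msg: str) -> str:
    # Would run RAG over the author bio, then a completion with that context.
    return f"[author handler] {msg}"

def modify_recipe(msg: str) -> str:
    # Would retrieve the relevant recipe, then prompt for the modification.
    return f"[modify handler] {msg}"

# The tree structure lives here: intents map to handlers, anything else
# falls through to a generic completion.
HANDLERS = {
    "author_question": answer_author_question,
    "modify_recipe": modify_recipe,
}

def handle(user_message: str) -> str:
    intent = classify_intent(user_message)
    handler = HANDLERS.get(intent, lambda m: f"[fallback] {m}")
    return handler(user_message)
```

The key property is that the branching lives in your code, not in the model: adding a new capability means adding both an intent label and a handler.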

Flow 2 would be just defining some APIs / functions and letting the model decide when to use which tool and how to reason through each one. Instead of defining the possible paths, you might set just one or two guardrails and let the LLM choose the tools and how to respond.
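A sketch of Flow 2, for contrast. The tool schema follows the common function-calling shape (name, description, JSON-schema parameters); `fake_model` is a stub for the real chat-completion call, which would receive the tool list and return the model's chosen tool and arguments. Tool names and implementations are all hypothetical:

```python
import json

# Flow 2 sketch: declare tools, let the model pick one, execute its choice.

TOOLS = [
    {
        "name": "lookup_recipe",
        "description": "Retrieve a recipe from the cookbook by name.",
        "parameters": {
            "type": "object",
            "properties": {"name": {"type": "string"}},
            "required": ["name"],
        },
    },
    {
        "name": "modify_recipe",
        "description": "Apply a requested change to a named recipe.",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "change": {"type": "string"},
            },
            "required": ["name", "change"],
        },
    },
]

def lookup_recipe(name: str) -> str:
    return f"Recipe for {name}: ..."

def modify_recipe(name: str, change: str) -> str:
    return f"{name}, modified: {change}"

IMPLEMENTATIONS = {"lookup_recipe": lookup_recipe, "modify_recipe": modify_recipe}

def fake_model(user_message: str) -> dict:
    """Stub: a real call would pass TOOLS to the LLM and return its choice."""
    return {"name": "lookup_recipe", "arguments": json.dumps({"name": "ramen"})}

def run(user_message: str) -> str:
    # No routing logic of our own: whatever tool the model names gets executed.
    call = fake_model(user_message)
    args = json.loads(call["arguments"])
    return IMPLEMENTATIONS[call["name"]](**args)
```

Here the branching moved into the model: adding a capability is just adding a schema and an implementation, with no router to update, but you also give up the explicit path constraints of Flow 1.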

I’m curious whether people have found one or the other more successful. On one hand, defining a tree structure to some degree establishes boundaries and prevents weird actions; on the other, Flow 2 gives the LLM room to orchestrate better in edge cases.
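One hybrid shape (sketched here with hypothetical names, not a recommendation from experience): keep a thin Flow 1-style check as the guardrail, and hand everything that passes to a Flow 2-style tool-choosing loop. `choose_tool` is a stub for a function-calling LLM:

```python
# Hybrid sketch: a cheap scope check acts as the guardrail (Flow 1 style);
# anything in scope goes to free tool selection (Flow 2 style).

IN_SCOPE_TOPICS = ("recipe", "ingredient", "author", "cookbook")

def in_scope(user_message: str) -> bool:
    """Stand-in for a small classifier or system-prompt check."""
    msg = user_message.lower()
    return any(topic in msg for topic in IN_SCOPE_TOPICS)

def choose_tool(user_message: str) -> str:
    # Stub for the model freely selecting from the full toolset.
    return f"[tool-driven answer] {user_message}"

def respond(user_message: str) -> str:
    if not in_scope(user_message):
        return "Sorry, I can only help with questions about the cookbook."
    return choose_tool(user_message)
```

The guardrail stays narrow and auditable, while the open-ended orchestration happens only inside the boundary it defines.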