I’m looking for an explanation of how LLMs use/see the information passed via the API parameters - specifically the tools that are specified.
When specifying a list of tools in the parameters to the API request, is it still necessary to declare and describe the list of tools in the system prompt? If so, why? Can’t the LLM see that I declared the list in the parameters? I can even pass in a description field in the JSON definitions of the tools. So do I still need to mention them in the prompt? If so, why?
Hello, let me break it down…
- The language model only processes the text in the conversation.
- Data in API parameters, like tool definitions and descriptions, is not automatically added to the text the model sees.
- You need to include tool details in the system prompt if you want the model to reference them during its response.
- The JSON definitions serve the API or middleware, but the model relies on explicit text in its context.
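To make the question concrete, here is a minimal sketch of the kind of request being discussed, following the OpenAI-style Chat Completions shape (the tool name, model name, and prompts are made up for illustration). Note the system prompt says nothing about the tool; the only place it appears is the `tools` parameter:

```python
import json

# Hypothetical tool definition in the OpenAI-style "tools" parameter shape.
# Nothing here is sent anywhere; it just shows where the description lives.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# The request body: the system prompt never mentions tools, so the
# question is whether the model still learns about get_weather from
# the "tools" field alone.
request_body = {
    "model": "gpt-4o",  # placeholder model name
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather in Oslo?"},
    ],
    "tools": [get_weather_tool],
}

print(json.dumps(request_body, indent=2))
```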
Hope that helps!
No, but sometimes it helps to “emphasise” their presence or the presence of specific functions you wish to “prioritise”.
The system prompt is a lot about “emphasis” in general.
Yes, you can.

So yes, you don’t have to mention them in the system prompt at all.
Appreciate both the responses. Although they’re saying quite opposite things.
It’ll be very easy to prove which is correct. I’ll do a test where I provide an LLM with some tools in the parameters to the request, don’t mention them in the prompt, and ask it what tools, if any, it has access to. I’ll report back what I find.
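The test described above can be sketched as a probe request: pass tools in the parameters only, keep every prompt free of tool mentions, and ask the model what it can see. The function, model name, and tool below are hypothetical; the sanity checks just confirm the prompt text itself never leaks the tool name:

```python
def build_probe_request(tools):
    """Build a request that passes tools in the parameters only,
    never mentioning them in any prompt text, then asks the model
    what tools it can see. (Sketch; fields follow the OpenAI-style
    Chat Completions shape.)"""
    return {
        "model": "gpt-4o",  # placeholder model name
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {
                "role": "user",
                "content": "What tools, if any, do you have access to? "
                           "List their names.",
            },
        ],
        "tools": tools,
    }

dummy_tools = [
    {
        "type": "function",
        "function": {
            "name": "lookup_order",
            "description": "Fetch an order by ID.",
            "parameters": {"type": "object", "properties": {}},
        },
    }
]

req = build_probe_request(dummy_tools)

# Sanity check: the prompt text never names the tool, so any mention
# of it in the model's reply must have come from the tools parameter.
prompt_text = " ".join(m["content"] for m in req["messages"])
print("lookup_order" in prompt_text)  # → False
print(len(req["tools"]))              # → 1
```

If the model names `lookup_order` in its reply, the tool definitions must be reaching its context without any help from the prompt.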
Since the LLM doesn’t actually reach out to your tools and make function calls directly, it seems to me like there would be no other point in passing them into the request params if they weren’t being passed to the LLM as a way of informing it about what tools are available. What other purpose would they serve?
Wait, that’s not what you asked and I made no statement on that.
Good point! It would still be an interesting test though.
Well it is possible actually.
I tried this on my own bot. It’s funny how I never tried this before.
My prompt was “You have a bunch of LLM tools I’ve shared with you in the prompt. What are they?”
This appears to work on ChatGPT too btw.
Conclusion: my bot has more tools.
Though I don’t think you can totally rely on the LLM’s response being complete.
Sometimes it is more cagey about them:
“I’m here to assist with questions about dizziness and related conditions! However, I can’t share internal details or lists of tools and functions. But I can certainly help you find information or answer questions about dizziness. Just let me know what you’re curious about!”
Perhaps writing a function to list the functions might help. You might be able to direct the bot to be more consistently open via the system prompt.
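The "function to list the functions" idea could be sketched like this: register a `list_tools` function alongside the real tools and dispatch to it locally when the model calls it. All the names and descriptions below are hypothetical:

```python
# Hypothetical tool registry; in a real bot these would be the tools
# you already pass in the request's "tools" parameter.
TOOL_REGISTRY = {
    "get_weather": "Look up the current weather for a city.",
    "lookup_order": "Fetch an order by ID.",
}

def list_tools() -> list:
    """Return name/description pairs for every registered tool."""
    return [{"name": n, "description": d} for n, d in TOOL_REGISTRY.items()]

def dispatch(tool_name: str, **kwargs):
    """Minimal dispatcher for tool calls coming back from the model."""
    if tool_name == "list_tools":
        return list_tools()
    raise ValueError(f"Unknown tool: {tool_name}")

print(dispatch("list_tools"))
```

Because the listing comes from your own code rather than the model's self-report, it sidesteps the "cagey" answers entirely.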
Thanks. This confirms my suspicion. I don’t really care about whether or not the LLM will tell a user about the tools. I am more interested in whether the LLM knows about them or not without mentioning them in the prompt.
It just always feels redundant to me to pass in the full list of tool names, descriptions, etc., again when they’re already defined right there in the params.
Your point about emphasis is a good one though. I suspect it’s particularly important the longer your prompt gets.
Thanks again