I have a ton of tools, and the model works pretty well with all of them, but I was wondering whether any of you folks have hit a limit on the number of tools one model can handle?
I'm really trying to figure out whether I should build a whole new architecture that routes the most relevant instructions and tools to the model after each message, to cut down the complexity of the instructions and tools it has to juggle.
Since the tools payload can be updated at runtime, I'm under the impression that any given Run is limited to this 128 max. That said, it seems like you could use more overall: Run 1 could use tools 1-128, Run 2 could use tools 129-256, and so on.
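If it helps, here's a minimal sketch of that rotation idea using the openai Python SDK (v1.x), where the `tools` parameter on `runs.create` overrides the assistant's tools for that Run only. `ALL_TOOLS`, the IDs, and the helper name are placeholders I made up for illustration:

```python
from openai import OpenAI

client = OpenAI()

ALL_TOOLS: list[dict] = []  # your full list of tool definitions (may exceed 128)

def run_with_tool_window(thread_id: str, assistant_id: str, window: int):
    """Start a Run that only exposes one 128-tool slice of ALL_TOOLS."""
    subset = ALL_TOOLS[window * 128 : (window + 1) * 128]
    return client.beta.threads.runs.create(
        thread_id=thread_id,
        assistant_id=assistant_id,
        tools=subset,  # per-Run override, stays within the 128-tool cap
    )

# Run 1 sees tools 1-128, Run 2 sees tools 129-256, etc.
run1 = run_with_tool_window("thread_abc", "asst_abc", window=0)
run2 = run_with_tool_window("thread_abc", "asst_abc", window=1)
```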
Honestly, if you can (and if this post isn't already dead), dynamically providing a set of relevant tools is typically the approach I'd recommend here.
I've personally noticed that the more tool options an LLM has, the greater the chances of it pulling an incorrect tool or hallucinating something somewhere. This has been true for multiple language models, not just OpenAI's.
Because of that, my code usually involves providing as few tools as possible, passing only what's absolutely needed at runtime.
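As a rough sketch of what I mean (the keyword router and tool registry below are invented for illustration; in practice you'd swap in whatever routing logic fits your app):

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical tool registry; each value is a standard JSON-schema
# tool definition for the chat completions API.
TOOLS = {
    "send_email": {
        "type": "function",
        "function": {
            "name": "send_email",
            "description": "Send an email to a recipient.",
            "parameters": {
                "type": "object",
                "properties": {
                    "to": {"type": "string"},
                    "subject": {"type": "string"},
                    "body": {"type": "string"},
                },
                "required": ["to", "subject", "body"],
            },
        },
    },
    # ...more tool definitions keyed by name...
}

# Naive keyword router: only hand the model the tools a request
# plausibly needs, falling back to the full set when nothing matches.
KEYWORDS = {"send_email": ["email", "mail"]}

def select_tools(user_message: str) -> list[dict]:
    text = user_message.lower()
    chosen = [
        spec for name, spec in TOOLS.items()
        if any(k in text for k in KEYWORDS.get(name, []))
    ]
    return chosen or list(TOOLS.values())

def ask(user_message: str):
    return client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": user_message}],
        tools=select_tools(user_message),
    )
```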
We've got an "infinite" number of tools we can use. The way we did it was to create a VSS (vector similarity search) database, where we define descriptions of how to use each tool as part of the snippets we send to OpenAI while searching for context during the RAG process.
This allows us to define the most frequently used tools directly in the system message, so the LLM always knows about those, while the less frequently used tools are added as RAG data in the VSS database and matched against the user's prompt. These are extracted on demand and transmitted as descriptions of how to respond to specific requests.
Typically they resemble the following:
## Send an email
If the user asks you to send an email, then return the following function specification:
... specification for function invocation here ...
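In spirit, the matching step looks something like the following (a simplified sketch: the in-memory list stands in for the actual VSS database, the snippet texts are illustrative, and I'm assuming plain embedding similarity here):

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

# Tool-usage snippets, as they'd be stored in the VSS database.
SNIPPETS = [
    "## Send an email\nIf the user asks you to send an email, "
    "then return the following function specification: ...",
    "## Scrape a website\nIf the user asks you to scrape a page, "
    "then return the following function specification: ...",
]
SNIPPET_VECTORS = [embed(s) for s in SNIPPETS]

def match_tool_snippets(prompt: str, top_k: int = 2) -> list[str]:
    """Return the stored tool snippets most similar to the user's prompt."""
    q = embed(prompt)
    sims = [
        float(v @ q / (np.linalg.norm(v) * np.linalg.norm(q)))
        for v in SNIPPET_VECTORS
    ]
    ranked = sorted(zip(sims, SNIPPETS), key=lambda p: p[0], reverse=True)
    return [snippet for _, snippet in ranked[:top_k]]

# Matched snippets are appended to the context alongside the regular
# RAG results before the chat completion call.
```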
The function specification is our own internally developed format, which we parse in our middleware; the middleware does the actual function invocation and then transmits the result of the invocation back to OpenAI for another roundtrip.
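Stripped down, the roundtrip works roughly like this (the `[FUNCTION]...[/FUNCTION]` wrapper and the function registry below are made-up stand-ins; our real internal format differs):

```python
import json
import re
from openai import OpenAI

client = OpenAI()

# Hypothetical registry mapping function names to callables.
FUNCTIONS = {
    "send_email": lambda to, subject, body: f"email sent to {to}",
}

SPEC_RE = re.compile(r"\[FUNCTION\](.*?)\[/FUNCTION\]", re.DOTALL)

def roundtrip(messages: list[dict]) -> str:
    """Ask the model, execute any function spec it emits, feed the
    result back in, and repeat until it answers in plain text."""
    while True:
        reply = client.chat.completions.create(
            model="gpt-4o", messages=messages
        ).choices[0].message.content or ""
        match = SPEC_RE.search(reply)
        if not match:
            return reply  # plain answer, no invocation requested
        spec = json.loads(match.group(1))
        result = FUNCTIONS[spec["name"]](**spec.get("arguments", {}))
        # Because this is a custom format (not OpenAI's native tool
        # calling), the result goes back as an ordinary message.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": f"Function result: {result}"})
```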
The system is 100% custom built and doesn't rely on the Assistants API or any other tech from OpenAI besides the LLM itself, which, in our experience, is actually very good at returning the correct format.
You can try it out below by clicking the blue button in the bottom-right corner and asking it, for instance: "Search for housemaid services in Larnaca, scrape the first 3 companies you find, and return a summary. Then construct an email for each of these companies and ask them for a quote for two hours of dishwashing tomorrow."
We don't have "send email" in our publicly available chatbot, for obvious reasons. But if we did, you could just tell it "Send the emails" after it responds, and that would match our "send email" VSS database record, which provides the instructions for how to send the email, and all 3 emails would be sent …