Limit on the number of function definitions for the Assistant

I have a large number of functions (~1,000) that I need the Assistant to coordinate to help users complete their workflows. It looks like there is a limit of 128 functions at the moment, but it has been increasing. I wonder when I can expect the limit to increase to near 1,000.

One of my functions with a single enum parameter is about 123 tokens.

I’m assuming the complete function definitions are passed as context (I could be wrong). At ~123 tokens each, 1,000 functions would come to roughly 123k tokens of function definitions alone, which already nearly maxes out the 128k context window and would cost over $1 per message. I would also expect the model to have trouble accurately selecting which function to use from that many options.
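To make the arithmetic concrete, here is a rough back-of-the-envelope estimate; the $0.01 per 1K input tokens rate is an assumption (GPT-4 Turbo pricing at the time), not a quoted figure:

    # Back-of-the-envelope cost estimate; the input price is an
    # assumed GPT-4 Turbo rate, not an official figure.
    tokens_per_function = 123
    num_functions = 1000
    input_price_per_1k = 0.01  # USD per 1K input tokens (assumption)

    total_tokens = tokens_per_function * num_functions  # 123,000
    cost_per_message = total_tokens / 1000 * input_price_per_1k
    print(f"{total_tokens:,} tokens -> ~${cost_per_message:.2f} per message")
    # 123,000 tokens -> ~$1.23 per message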

You may also find that with such a large number of tokens the model becomes biased toward the items at the outer edges of the context.

Lost in the Middle: How Language Models Use Long Contexts
https://arxiv.org/pdf/2307.03172.pdf

Have you considered fine-tuning a model to return the function name and parameters instead? Or categorizing/grouping your 1,000 functions and using a multi-step process? (There’s a rough sketch of the latter after the fine-tuning notes below.)

Fine-tuning lets you get more out of the models available through the API by providing:
- Higher quality results than prompting
- Ability to train on more examples than can fit in a prompt
- Token savings due to shorter prompts
- Lower latency requests

https://platform.openai.com/docs/guides/fine-tuning
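For the categorize/group idea, a two-step router might look something like this. It’s only a sketch: the category names, the example tool schema, and the `select_category` routing tool are all hypothetical; the chat-completions calls themselves use the standard tools format.

    # Hypothetical two-step router. Category names, the example tool
    # schema, and the select_category tool are illustrative; only the
    # chat-completions "tools" format itself is the real API shape.
    import json
    from openai import OpenAI

    client = OpenAI()

    # Group your 1,000 tool definitions so no single request carries them all.
    CATEGORIES = {
        "billing": [
            {"type": "function", "function": {
                "name": "get_invoice",
                "parameters": {"type": "object", "properties": {
                    "invoice_id": {"type": "string"},
                }},
            }},
        ],
        "reporting": [],  # ...and so on for the remaining functions
    }

    ROUTER_TOOL = {
        "type": "function",
        "function": {
            "name": "select_category",
            "description": "Pick which group of tools fits the request.",
            "parameters": {
                "type": "object",
                "properties": {
                    "category": {"type": "string", "enum": list(CATEGORIES)},
                },
                "required": ["category"],
            },
        },
    }

    def route(user_message: str):
        # Step 1: a cheap call whose only job is picking a category.
        first = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": user_message}],
            tools=[ROUTER_TOOL],
            tool_choice={"type": "function", "function": {"name": "select_category"}},
        )
        args = first.choices[0].message.tool_calls[0].function.arguments
        category = json.loads(args)["category"]

        # Step 2: ask again with only that category's tools attached.
        return client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": user_message}],
            tools=CATEGORIES[category],
        )

The first call only ever sees a short enum of categories, so the per-message token count stays small no matter how many functions you have in total.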


Awesome! That’s a lot of useful and important information. Thank you!

I did manage to get it to work somewhat by putting the function-call descriptions in a file in the “Retrieval” section, although the responses are not as clean as with “Functions”. What do you think?

It sounds like fine-tuning is the way to go. Do you have any examples or insight on how to fine-tune a model and instruct it to behave like it does with Function Calls? I tried to find the system message or instructions that OpenAI uses for Function Calls, but I couldn’t find them.

You can find an example of fine-tuning for function-calling here:

https://platform.openai.com/docs/guides/fine-tuning/fine-tuning-examples

    "messages": [
        {"role": "user", "content": "What is the weather in San Francisco?"},
        {"role": "assistant", "function_call": {"name": "get_current_weather", "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celcius\"}"}}
        {"role": "function", "name": "get_current_weather", "content": "21.0"},
        {"role": "assistant", "content": "It is 21 degrees celsius in San Francisco, CA"}
    ],
    "functions": [...] // same as before
}
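Once you’ve assembled a JSONL file of examples in that format (one JSON object per line), starting the job is short; the file name and base model below are placeholders:

    # Sketch of launching the fine-tune; the file name and base
    # model are placeholders, not prescribed values.
    from openai import OpenAI

    client = OpenAI()

    training_file = client.files.create(
        file=open("function_calling_examples.jsonl", "rb"),
        purpose="fine-tune",
    )
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-3.5-turbo",
    )
    print(job.id)  # check the job status until it completes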

It’s worth noting that it’s now called “tools”, but the format is still the same for now:

function_call and functions have been deprecated in favor of tools, however, the fine-tuning API still requires the legacy format at this time.

I’m a bit suspicious of the “at this time” part, though.


You could simply instruct or train on an output format that you can recognize yourself:

    {"task_number": "335", "query": "giraffe grooming tips"}
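On your side, recognizing that output is just parsing and dispatch; the handler table here is a made-up illustration:

    # Minimal sketch: parse the model's structured reply and dispatch
    # on it yourself. The handler table is hypothetical.
    import json

    raw = '{"task_number": "335", "query": "giraffe grooming tips"}'
    parsed = json.loads(raw)

    handlers = {
        "335": lambda query: f"Looking up: {query}",
    }
    print(handlers[parsed["task_number"]](parsed["query"]))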
