I am using OpenAI's function calling functionality with the gpt-3.5-turbo-16k model, and it has a limit of 64 functions. Is there a limit on the maximum number of functions for GPT-4? If so, what is it?
I would imagine it is simply down to how many tokens are taken up in the function details.
Completely dependent on how many calls you make (if you exceed the limit, you should get back a rate-limit error) or how many tokens each call uses.
So we can send more than 64 functions in one request, right? Depending on the length of their descriptions?
In theory I see no reason why not. That is not to say there is no 128-function limit or some other barrier lurking, but I do not see why it would not effectively be constrained only by the token limit.
I will add that AIs can perform poorly when presented with too many options or too much data; the way attention is distributed across a prompt can be an issue, and this may be a limiting factor.
Currently I am using the gpt-3.5-turbo-16k model and sending requests to https://api.openai.com/v1/chat/completions with 64 functions, and it works fine. But when I add one more function to my request body, I receive:
{
  "error": {
    "message": "'$.functions' is too long. Maximum length is 64, but got 65 items.",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}
Is there any model or workaround that lets me avoid this problem?
OK… well, there is your limit! Good to know, and I imagine the post you just made will be linked to a fair few times. Thank you for doing the legwork and getting a definitive answer!
In answer to your question, it looks like 64 is it for now.
I figured out a way to get around this limit by assigning different “assistants” for each topic the user might ask about. So, my main, top-level assistant determines if the user is talking about customers, orders, or products using a function call… a total of 3 functions. Let’s say the user is asking about orders. Then, I pass it to a new lower-level assistant, specifically designed to handle order-related functions. This assistant currently has 17 functions and by dividing them between customers, orders, and products, I have more room to expand. It does cost more because of the extra API call, but the speed difference isn’t a big deal when using gpt-3.5-turbo.
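For anyone wanting to try this, here is a minimal sketch of that routing pattern, using the same legacy openai.ChatCompletion API as the code sample later in this thread. The route_request function, the topic names, and the per-topic function lists are all illustrative placeholders, not anything from OpenAI's docs:

import json
import openai

# Placeholder sub-lists; in practice each holds your real function specs
# (each list staying under the 64-function cap).
customer_functions, order_functions, product_functions = [], [], []

# Top-level "router": a single function whose only job is to pick a topic.
router_functions = [{
    "name": "route_request",  # illustrative name
    "description": "Classify which domain the user's request belongs to",
    "parameters": {
        "type": "object",
        "properties": {
            "topic": {"type": "string",
                      "enum": ["customers", "orders", "products"]},
        },
        "required": ["topic"],
    },
}]

functions_by_topic = {
    "customers": customer_functions,
    "orders": order_functions,
    "products": product_functions,
}

def answer(messages):
    # First call: force the router function so we always get a topic back.
    routed = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-16k",
        messages=messages,
        functions=router_functions,
        function_call={"name": "route_request"},
    )
    args = json.loads(
        routed["choices"][0]["message"]["function_call"]["arguments"])

    # Second call: the lower-level assistant sees only the relevant functions.
    return openai.ChatCompletion.create(
        model="gpt-3.5-turbo-16k",
        messages=messages,
        functions=functions_by_topic[args["topic"]],
        function_call="auto",
    )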
You can also write function definitions into the system prompt yourself, in the same format in which the endpoint injects them.
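For illustration, this is roughly what that could look like. The TypeScript-style namespace rendering below is the format the community has reverse-engineered as what the endpoint injects; it is not officially documented, so treat the exact syntax as an assumption, and the get_order function is made up:

# Community-observed (not officially documented) rendering of a function
# spec, appended to your own system message text.
function_text = """
# Tools

## functions

namespace functions {

// Look up an order by its ID (illustrative example)
type get_order = (_: {
order_id: string,
}) => any;

} // namespace functions
"""

system_message = {"role": "system",
                  "content": "You are a helpful assistant." + function_text}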
I imagine that coherence is significantly degraded when that many functions add up to such a long context of instructions.
Sorry to add this question into the conversation, but I want to clarify something, as I’m new to ChatGPT functions.
Using the code sample below (subbing in gpt-4 if one would like), is the 64 limitation being discussed here referring to:
- the number of functions that ChatGPT can choose to call from? Or
- getting ChatGPT to call multiple (up to 64) functions in a single request (function_call)?
If one is able to have ChatGPT call multiple endpoints after a single chat entry (from a user), is there some established documentation on how to encourage ChatGPT to call multiple endpoints (presumably via function_call), or how to control the order of execution? I've been looking in the Function Calling documentation and OpenAI's further reading on the matter, but can't seem to find whether item #2 is possible.
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo-0613",
messages=messages,
functions=functions,
function_call="auto", # auto is default, but we'll be explicit
)
Thanks in advance for the consideration.
Strictly speaking, this conversation has nothing to do with ChatGPT, which is the web chatbot.
It is talking about the number of functions you pass to the API and make the AI aware of.
The error message described in prior forum messages is the API refusing to accept more than 64 function definitions.
Every API call must include every function you would want the AI to be aware of and possibly call. However, you can likely cut that list in half or more just by understanding the context of the conversation, or the completion point of a particular workflow.
It would take 64 chat turns to invoke each function just once, and each would still require the specific language that calls for that type of data interaction.
So the other workaround would be to have a single function that takes an enum of operations to perform. You could have hundreds of possible operations.
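A rough sketch of that single-dispatcher idea; the run_operation name and the operation list are made up for illustration. The 64-item cap applies to the functions array itself, not to an enum inside one function's parameters:

# One dispatcher function; its "operation" enum can list far more than
# 64 entries, since it is just part of this function's JSON schema.
dispatch_function = {
    "name": "run_operation",  # illustrative name
    "description": "Perform one of the supported backend operations",
    "parameters": {
        "type": "object",
        "properties": {
            "operation": {
                "type": "string",
                "enum": [
                    "create_customer", "delete_customer",
                    "create_order", "cancel_order",
                    # ...hundreds more entries are possible here
                ],
            },
            "arguments": {
                "type": "object",
                "description": "Operation-specific parameters",
            },
        },
        "required": ["operation"],
    },
}

Whether the model stays accurate when choosing among hundreds of enum values is a separate question; the attention concerns raised earlier in the thread apply here too.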
Thank you for the clarification @_j, much appreciated. So it sounds like there is (currently) a restriction that the ratio of chats to function calls is strictly 1:1, and there's (currently) no way to have the API accept a single chat and then perform N functions?
That would seem to be correct; you can expect one function call back:
- The AI has been trained to do so;
- The developer would have a more complex task of handling multiple functions;
- The function-calling recognizer that rewrites AI language for a function call from "content" into its own API response field would have to scan for multiples;
- Multiple "function_call" entries with the same key could be application-breaking.
Typical return when a function is invoked, no AI language in "content":

"message": {
  "role": "assistant",
  "content": null,
  "function_call": {
    "name": "python",
    "arguments": "import sympy\n\nprime = sympy.prime(1337)\nprime"
  }
}
although it can also reply at the same time that it invokes a function, saying “let’s see this”:
"message": { "role": "assistant", "content": "To find the 1337th prime number, we can use the following algorithm:\n\n1. Initialize a variable `count` to 0 and a variable `num` to 2.\n2. Repeat the following steps until `count` is equal to 1337:\n - Check if `num` is a prime number.\n - If `num` is a prime number, increment `count` by 1.\n - If `count` is equal to 1337, break the loop.\n - Otherwise, increment `num` by 1.\n3. The 1337th prime number is the value of `num`.\n\nLet's implement this algorithm in code.", "function_call": { "name": "python",
The action of the AI when it receives the same system and user messages, but also gets back function returns, is to continue calling functions if the return result didn't meet its expectations. So you can indeed have multiple functions called, but as multiple turns.
(one thing you can consider is some logic in your code: If the AI just called “search_bing”, you could then only give it additional functions for “search_google”, “search_duck”.)
OpenAI didn't really document function calling in a chat-history setting, but it seems the function-role return is one you'll also want to maintain in chat history, so the AI has a record of its function calls. Otherwise it can loop, calling the same pattern of functions over and over.
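A minimal sketch of that multi-turn loop, keeping both the assistant's function_call message and the function-role return in history so the model can see what it already tried. The call_backend dispatcher and the max_turns cap are my placeholders, not anything official:

import json
import openai

def call_backend(name, args):
    # Placeholder: dispatch to your real implementation of each function.
    return {"ok": True}

def run_conversation(messages, functions, max_turns=5):
    # max_turns guards against the looping behavior described above.
    for _ in range(max_turns):
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            messages=messages,
            functions=functions,
            function_call="auto",
        )
        message = response["choices"][0]["message"]
        messages.append(message)  # keep the function_call turn in history

        if "function_call" not in message:
            return message["content"]  # plain-language answer: we're done

        name = message["function_call"]["name"]
        args = json.loads(message["function_call"]["arguments"])
        result = call_backend(name, args)

        # The function-role return also stays in history, so the AI can see
        # what it already tried instead of re-calling the same pattern.
        messages.append({"role": "function", "name": name,
                         "content": json.dumps(result)})

        # Optional: narrow the functions list here based on what was just
        # called, as suggested above (e.g. drop "search_bing" after use).
    return None  # gave up after max_turns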
The present AI can't accurately follow eight behavior instructions in the system prompt; good luck with 64 functions.
Furthermore, the AI can fall into infinite loops, calling the same function on each iteration, if you leave the decision entirely to it. I'm experimenting with rules and guidelines now to try to get past this limitation.