Chat Completions API Function Call Description Limit

Hey Everyone,

I am building an application that is very function-call intensive, and I have been using the function calling mechanisms from the Assistants API. The idea is that I have an assistant in my app that can carry out the user’s intended tasks using function calling.

I have noticed that the Assistants API, when using the GPT-3.5 model, accepts very long prompts. I was able to put a massive prompt in the function object’s description, giving definitive details on how the AI should carry out a given task.

The only problem is that the Assistants API’s function calling tends to use previous messages from the thread as context during function calls, which is not viable for my application.

I wanted to shift my function calling mechanism to the Chat Completions API, but noticed that I can’t have as large of a prompt as I could with the Assistants API, because of the prompt token limitations. Is there any way to get around this limitation?

I would like to keep the entire prompt for the function call, so the JSON object payload creation works as desired.
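For reference, here is roughly what that looks like on Chat Completions: the long task instructions go in the function’s `description` field. This is only a sketch; `create_task` and its parameters are made-up placeholder names, not anything from this thread, and it uses the older `functions` parameter (matching the deprecated function-role messages discussed below) rather than the newer `tools` parameter.

```python
import json

# Hypothetical function definition for Chat Completions, with the long
# task instructions carried in the "description" field. "create_task"
# and its parameters are placeholder names for illustration only.
long_description = (
    "Create a task for the user. "
    "...imagine several thousand tokens of detailed instructions here, "
    "telling the model exactly how to build the JSON payload..."
)

payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a function bot."},
        {"role": "user", "content": "Remind me to water the plants at 6pm."},
    ],
    # Older-style "functions" parameter; newer API versions use "tools".
    "functions": [
        {
            "name": "create_task",
            "description": long_description,
            "parameters": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "due": {"type": "string", "description": "ISO 8601 time"},
                },
                "required": ["title"],
            },
        }
    ],
}

# The request body is ordinary JSON; the description text counts toward
# the model's input-token budget like any other prompt text.
print(json.dumps(payload)[:80])
```

The point is that the description is just part of the request body, so it is limited by the model’s overall context window rather than by a separate per-field cap.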

GPT 3.5 turbo has a 16,385 token limit (https://platform.openai.com/docs/models/gpt-3-5-turbo)

how long is your prompt? https://platform.openai.com/tokenizer

The tokenizer says my prompt in the function call description is currently 5843 tokens, but I still get an error saying the prompt is too long. Why is that? Is there a limit on prompt tokens for the function object description with the Chat Completions API?

I don’t know where you gathered that conclusion…

Assistants has a limit of 32,768 characters for many of its inputs, such as user messages and function returns.

With Chat Completions, depending on the AI model, you can send closer to 400,000 characters: around 124k tokens of input, leaving room for a 1k-token response, when using GPT-4-Turbo models.

Limit: 128 functions defined; the YAML API specification doesn’t give a limit on the string length that can be returned in the function-role message.
There is no limit on the number of messages (about 20k of them if you wrote a 1-token-in, 1-token-out bot).

    ChatCompletionRequestFunctionMessage:
      type: object
      title: Function message
      deprecated: true
      properties:
        role:
          type: string
          enum: ["function"]
          description: The role of the messages author, in this case `function`.
        content:
          nullable: true
          type: string
          description: The contents of the function message.
        name:
          type: string
          description: The name of the function to call.
      required:
        - role
        - content
        - name

You can maintain chat history as long as you want, but the history and the added function-call messages must be lossless until the AI finally responds to the user, so the AI can see what failed and what has been iteratively invoked. The call you send may come after the AI has already made several calls on its own.

system: you are a function bot
user: what’s the population of bermuda’s capital today
assistant: cia_factbook.list_countries(“North_America”)
function: “Canada, Bermuda, etc”
assistant: cia_factbook.capitals(“Bermuda”)
function: “Bermuda: Hamilton”
assistant: cia_factbook.population_city(“Bermuda: Hamilton”)
function: “866”
