Tool Calls - Does the Schema Matter?

Several posts have mentioned that certain tools (e.g. Pydantic) generate a valid JSON schema that is similar to, but distinct from, the example schema shown in the docs.

As far as I can tell, aside from that one example in the docs (shown below) and a few in the cookbook, no official guidance has been provided on how strictly this schema must be followed.

OpenAI docs schema example:

from openai import OpenAI
client = OpenAI()

tools = [
  {
    "type": "function",
    "function": {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA",
          },
          "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
      },
    }
  }
]
messages = [{"role": "user", "content": "What's the weather like in Boston today?"}]
completion = client.chat.completions.create(
  model="gpt-4o",
  messages=messages,
  tools=tools,
  tool_choice="auto"
)

Contrast this with a schema generated by Pydantic:

from typing import Literal

from pydantic import BaseModel


class GetCurrentWeather(BaseModel):
    """Get the current weather in a given location"""
    location: str
    unit: Literal["celsius", "fahrenheit"] | None = None


g = GetCurrentWeather(location="san francisco", unit="fahrenheit")
g.model_json_schema()

>>>

{'description': 'Get the current weather in a given location',
 'properties': {'location': {'title': 'Location', 'type': 'string'},
                'unit': {'anyOf': [{'enum': ['celsius', 'fahrenheit'],
                                    'type': 'string'},
                                   {'type': 'null'}],
                         'default': None,
                         'title': 'Unit'}},
 'required': ['location'],
 'title': 'GetCurrentWeather',
 'type': 'object'}

The results have some high-level similarities, but overall the structure, nesting, included fields, etc. are quite different.

Empirically this works fine, but it’s essentially “undocumented” behavior, since the docs specify a different schema. My question, then, is: is there really a specific schema required, or is it enough to simply provide valid JSON with fields for the arguments, the name of the function, descriptions, and so on?

If there is a specific schema required, where is it documented in full detail, if at all, and is the fact that other schemas work essentially a “lucky accident”?
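
To make the question concrete, “passing the Pydantic schema through” means something like the sketch below (reusing the GetCurrentWeather model from above); the generated schema, extra fields and all, goes straight into parameters, and empirically the model handles it fine:

from openai import OpenAI

client = OpenAI()

tools = [
  {
    "type": "function",
    "function": {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      # The Pydantic-generated schema (title, anyOf, default, ...) is used verbatim.
      "parameters": GetCurrentWeather.model_json_schema(),
    },
  }
]
completion = client.chat.completions.create(
  model="gpt-4o",
  messages=[{"role": "user", "content": "What's the weather like in Boston today?"}],
  tools=tools,
  tool_choice="auto",
)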

Why not just try to follow the spec?

One can always introspect the function (as opposed to the “lazy” way of calling model_json_schema).

This is one of the extracted functions that I have:

from typing import Annotated


@tools_function(TOOLS_FUNCTIONS)
def get_cost_of_running_a_thread_on_a_model(
        provider: Annotated[str, "This is the provider providing the model, such as openai/groq. The default is openai"],
        model: Annotated[str, "This is the model in the description. It is typically prefixed by %"],
        thread: Annotated[str, "This is the thread whose cost is required. It is typically prefixed with !"]
        ):
    """ This function provides the cost of running a thread on a model. Because the cost varies across providers
        with different SLAs, it is important to identify the provider from the prompt. The default provider is openai,
        but it could be different, such as groq, google, etc.

        If the provider is openai, the default model is gpt-4-turbo. If the provider is groq, the default model is llama3-8b-8192.
        If the provider is google, the default model is gemini-it.
    """
    return f"{provider} {model} {thread}"

Where tools_function is defined as follows:

def tools_function(tools_functions):
    def wrapper(func):
        # Build an entry mirroring the "function" object in the docs schema,
        # plus a reference to the callable itself for dispatching later.
        function = dict()
        function['function'] = func
        function['name'] = func.__name__
        function['description'] = func.__doc__
        function['parameters'] = {}
        function['parameters']['type'] = "object"
        function['parameters']['properties'] = {}

        # Positional/keyword argument names, in declaration order.
        input_arg_names = [arg_name for arg_name in func.__code__.co_varnames[:func.__code__.co_argcount]]

        for input_arg_name in input_arg_names:
            function['parameters']['properties'][input_arg_name] = {}
            # Annotated[T, "description"]: __origin__ is T, __metadata__[0] is the description.
            raw_annotation = func.__annotations__[input_arg_name]

            if raw_annotation.__origin__.__name__ in FUNCTIONS_TYPE_MAP:
                ip_type = FUNCTIONS_TYPE_MAP[raw_annotation.__origin__.__name__]

                if ip_type == 'array':
                    # For list[T], also record the JSON type of the element type T.
                    function['parameters']['properties'][input_arg_name]['items'] = {}
                    ip_item_type = raw_annotation.__origin__.__args__[0].__name__
                    if ip_item_type in FUNCTIONS_TYPE_MAP:
                        function['parameters']['properties'][input_arg_name]['items']['type'] = FUNCTIONS_TYPE_MAP[ip_item_type]
                    else:
                        function['parameters']['properties'][input_arg_name]['items']['type'] = ip_item_type
            else:
                # Fall back to the raw Python type name if it has no mapping.
                ip_type = raw_annotation.__origin__.__name__

            function['parameters']['properties'][input_arg_name]['type'] = ip_type
            function['parameters']['properties'][input_arg_name]['description'] = raw_annotation.__metadata__[0]

        tools_functions[func.__name__] = function

        return func
    return wrapper
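
For completeness: FUNCTIONS_TYPE_MAP is just a map from Python type names to JSON schema type names, and TOOLS_FUNCTIONS is a plain registry dict (both were omitted above, so the versions below are only indicative). The collected entries can then be dropped into the documented tools shape with something like the sketch below; the to_openai_tools helper name is only for illustration.

# Roughly what the omitted module-level names look like (indicative only).
TOOLS_FUNCTIONS = {}
FUNCTIONS_TYPE_MAP = {
    "str": "string",
    "int": "integer",
    "float": "number",
    "bool": "boolean",
    "list": "array",
    "dict": "object",
}

def to_openai_tools(registry):
    # Map each registry entry onto the "type"/"function" wrapper from the docs,
    # leaving out the stored callable, which is not JSON-serializable and is
    # only needed later to dispatch the tool call.
    return [
        {
            "type": "function",
            "function": {
                "name": entry["name"],
                "description": entry["description"],
                "parameters": entry["parameters"],
            },
        }
        for entry in registry.values()
    ]

tools = to_openai_tools(TOOLS_FUNCTIONS)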

Well, for starters, because there is no official spec as far as I know, which is part of why I asked. Also because many libraries leverage Pydantic (Instructor, for example) and pass the schema through as generated, without modification - and it works very well.

I don’t understand the point you are making re: introspection and model_json_schema being the “lazy” way. Are you saying to build up the spec yourself using metaprogramming? I mean… yes, that is one option.

yeah.

I have given up on OpenAI giving us technical specs (a formal grammar) for just about anything. I just try to follow whatever meager documentation I can find and formalize some automation on top of it.
