GPT-3.5/4 only returning arguments for one function call if 'tool_choice' parameter is used

Hello,

I have a GPT call that is instructed to extract titles and their respective authors from a body of text and use them as function arguments.

However, if there are multiple titles and authors in the body of text, whenever I explicitly set tool_choice, both GPT-3.5 and GPT-4 only produce arguments for one of the title/author pairs, never both.

I would like to be able to use ‘tool_choice’ and have the model return multiple argument pairs instead of just one.

Hey!

Can you give a concrete example, please? Function calling is supposed to work, and it’s possible to improve it with prompt engineering.

Yeah sure. Thanks for the reply!

## GPT Call to get Book Data
from openai import OpenAI
import os

api_key = os.environ.get('open_ai')

def gpt_get_book_data(message_containing_book_info):
    client = OpenAI(api_key=api_key)
    system_role = (
        "You are a bot that gets book data from an API using the title and author as arguments. "
        "You read a body of text, extract all titles and authors, and use them as arguments for a function. "
        "You are to use the function 'get_book_data'.\n"
    )

    additional_system_role_attributes = (
        # f"The current date is {current_date}\n"
        "There may be multiple titles and authors in the text. Make sure you get them all.\n"
    )

    book_functions = [
        {
            "type": "function",
            "function": {
                "name": "get_book_data",
                "description": "Gets book data from open library API",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "title": {
                            "type": "string",
                            "description": "The title of a book"
                        },
                        "author": {
                            "type": "string",
                            "description": "The author of a book"
                        }
                    },
                    "required": ["title", "author"]
                }
            }
        }
    ]

    chat_history = [
        {"role": "system", "content": f"{system_role}{additional_system_role_attributes}"},
        {"role": "user", "content": f"{message_containing_book_info}"}   
    ]

    response = client.chat.completions.create(
        model="gpt-3.5-turbo-0125",
        messages=chat_history,
        tools=book_functions,
        tool_choice={"type": "function", "function": {"name": "get_book_data"}},
        temperature=0,
    )
    return response
    
book_message = (
    '"Dune" by Frank Herbert is a mesmerizing tale set on the desert planet of Arrakis, known for its valuable spice melange. ' 
    'The story follows the noble family of House Atreides as they navigate political intrigue, navigate treacherous environments, ' 
    'and encounter complex characters in a richly detailed universe.\n\n'
    '"The Three-Body Problem" by Liu Cixin delves into the mysteries of physics and the complexities of human nature when alien '
    'contact disrupts the world. The narrative unfolds through a blend of historical events, intricate scientific theories, and a thrilling plot ' 
    'that keeps readers on the edge of their seats. A must-read for sci-fi enthusiasts craving a unique and thought-provoking experience.'
)

response = gpt_get_book_data(message_containing_book_info=book_message)

This returns

ChatCompletion(id='chatcmpl-9APi8g0QKneuEqn0ubMK1RAuFS7kE', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_oS2Palu6R5fpI2ZWLzz5Jt4s', function=Function(arguments='{"title":"Dune","author":"Frank Herbert"}', name='get_book_data'), type='function')]))], created=1712269140, model='gpt-3.5-turbo-0125', object='chat.completion', system_fingerprint='fp_b28b39ffa8', usage=CompletionUsage(completion_tokens=11, prompt_tokens=286, total_tokens=297))

As you can see in the book_message string, there are two titles. However, only one is recognized and used as an argument pair. I haven’t messed around with the prompt too much, so maybe that’s the issue?

Additionally, I have tried it without the tool_choice argument, all else equal, and it successfully recognizes both titles roughly 70–85% of the time. This is okay, but not sufficient for my purposes.

Thanks again!

Interesting that you got the correct result. I’m assuming you added the function in there as well?

Would you happen to have the code snippet generated from your entry on Promptotype?

I ran it on there and only got one result again. I think the format needs tweaking, and I want to ensure that I am doing the same thing.

Here is my snippet from Promptotype

from openai import OpenAI
from openai.types.chat import (
    ChatCompletionMessageParam,
    ChatCompletionToolChoiceOptionParam,
    ChatCompletionToolParam,
)
from typing import List, Literal, TypedDict


class ChatMessage(TypedDict):
    role: Literal["system", "user", "assistant"]
    content: str


message_0 = """You are a bot that gets book data from an API using the title and author as arguments. 
You read a body of text, extract all titles and authors, and use them as arguments for a function.
You are to use the function 'get_book_data'.

There may be multiple titles and authors in the text.  Make sure you get them all"""
message_1 = """"Dune" by Frank Herbert is a mesmerizing tale set on the desert planet of Arrakis, known for its valuable spice melange. 
The story follows the noble family of House Atreides as they navigate political intrigue, navigate treacherous environments, and encounter complex characters in a richly detailed universe.


"The Three-Body Problem" by Liu Cixin delves into the mysteries of physics and the complexities of human nature when alien contact disrupts the world. The narrative unfolds through a blend of historical events, intricate scientific theories, and a thrilling plot that keeps readers on the edge of their seats. A must-read for sci-fi enthusiasts craving a unique and thought-provoking experience."""
message_2 = """"""
  
message_templates: List[ChatMessage] = [
    {"role": "system", "content": message_0},
    {"role": "user", "content": message_1},
    {"role": "assistant", "content": message_2},
]


class InputVariables(TypedDict):
    pass


prompt_tools: List[ChatCompletionToolParam] = [
    {
        "type": "function",
        "function": {
            "name": "get_book_data",
            "description": "Gets book data from open library API",
            "parameters": {
                "type": "object",
                "properties": {
                    "title": {
                        "type": "string",
                        "description": "The title of a book"
                    },
                    "author": {
                        "type": "string",
                        "description": "the author of a book"
                    }
                },
                "required": [
                    "title",
                    "author"
                ]
            }
        }
    }
]
prompt_tool_choice: ChatCompletionToolChoiceOptionParam = {
    "type": "function",
    "function": {
        "name": "get_book_data"
    }
}

model_name: str = "gpt-3.5-turbo-0125"
model_max_tokens: int = 150
model_seed: int = 1
temperature: float = 0
timeout: int = 15


def format_message(
    message_template: ChatMessage, input_variables: InputVariables
) -> ChatCompletionMessageParam:
    prompt_formatted = message_template["content"].format(**input_variables)

    if message_template["role"] == "system":
        return {
            "role": "system",
            "content": prompt_formatted,
        }
    elif message_template["role"] == "assistant":
        return {
            "role": "assistant",
            "content": prompt_formatted,
        }
    else:
        return {
            "role": "user",
            "content": prompt_formatted,
        }


def format_message_templates(
    message_templates: List[ChatMessage],
    input_variables: InputVariables,
) -> List[ChatCompletionMessageParam]:
    return [
        format_message(message_template, input_variables)
        for message_template in message_templates
    ]


def call_openai(input_variables: InputVariables, openai_api_key: str):
    client = OpenAI(api_key=openai_api_key, timeout=timeout)

    formatted_messages: List[ChatCompletionMessageParam] = format_message_templates(
        message_templates, input_variables
    )
    return client.chat.completions.create(
        messages=formatted_messages,
        model=model_name,
        temperature=temperature,
        tools=prompt_tools,
        tool_choice=prompt_tool_choice,
        max_tokens=model_max_tokens,
        seed=model_seed,
    )

And here is the result

Sure.

First, I see your code has an extra message (an empty assistant message; not sure if it matters or not).
Also, I was using the “auto” prompt_tool_choice.

Here is my code:

message_0 = """You are a bot that gets book data from an API using the title and author as arguments.
You read a body of text, extract all titles and authors, and use them as arguments for a function.
You are to use the function 'get_book_data'.

There may be multiple titles and authors in the text.  Make sure you get them all."""
message_1 = """"Dune" by Frank Herbert is a mesmerizing tale set on the desert planet of Arrakis, known for its valuable spice melange.
The story follows the noble family of House Atreides as they navigate political intrigue, navigate treacherous environments, and encounter complex characters in a richly detailed universe.


"The Three-Body Problem" by Liu Cixin delves into the mysteries of physics and the complexities of human nature when alien contact disrupts the world. The narrative unfolds through a blend of historical events, intricate scientific theories, and a thrilling plot that keeps readers on the edge of their seats. A must-read for sci-fi enthusiasts craving a unique and thought-provoking experience."""
  
message_templates: List[ChatMessage] = [
    {"role": "system", "content": message_0},
    {"role": "user", "content": message_1},
]


class InputVariables(TypedDict):
    pass


prompt_tools: List[ChatCompletionToolParam] = [
    {
        "type": "function",
        "function": {
            "name": "get_book_data",
            "description": "Gets book data from open library API",
            "parameters": {
                "type": "object",
                "properties": {
                    "title": {
                        "type": "string",
                        "description": "The title of a book"
                    },
                    "author": {
                        "type": "string",
                        "description": "the author of a book"
                    }
                },
                "required": [
                    "title",
                    "author"
                ]
            }
        }
    }
]
prompt_tool_choice: ChatCompletionToolChoiceOptionParam = "auto"

model_name: str = "gpt-3.5-turbo-0125"
model_max_tokens: int = 150
model_seed: int = 1
temperature: float = 0
timeout: int = 15

OK, with a further test I see that tool_choice is the issue: using “auto” calls the function twice as intended, while specifying the function calls it only once.

It’s possible that this is intended behavior on the API side; I’m not sure, and I couldn’t find supporting documentation.

Maybe it is API-side. When you used tool_choice, it only called once, just like mine?

Yes.

Try changing to ‘auto’.
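For reference, the change is just the value passed as tool_choice in the create call; the two variants compared in this thread are:

```python
# Forcing the named function -- in the tests above the model emitted only ONE call:
forced_choice = {"type": "function", "function": {"name": "get_book_data"}}

# Letting the model decide -- it emitted parallel calls, one per book found:
auto_choice = "auto"
```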


Thank you for your help so far. Let me know if you find a way to get tool_choice to return multiple function calls in the future.

Ok!

Sorry, I think I’ve only just realized that your problem was that you wanted to use an explicit tool_choice (I thought you were just trying to get parallel function calls to work).

I think it’s possible this is by design on their side.

Do you want to share more info about your use case? Maybe there’s a prompt-engineering workaround that will let you use tool_choice “auto”?
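One such workaround, not tested in this thread so treat it as a sketch: keep tool_choice forced, but reshape the function so a single call carries every title/author pair, e.g. an array parameter that the model fills with all the books it finds:

```python
# Hypothetical alternative schema: one forced call returning every book at
# once, instead of one parallel call per book.
get_books_tool = {
    "type": "function",
    "function": {
        "name": "get_book_data",
        "description": "Gets book data from open library API for every book found in the text",
        "parameters": {
            "type": "object",
            "properties": {
                "books": {
                    "type": "array",
                    "description": "One entry per book mentioned in the text",
                    "items": {
                        "type": "object",
                        "properties": {
                            "title": {"type": "string", "description": "The title of a book"},
                            "author": {"type": "string", "description": "The author of a book"},
                        },
                        "required": ["title", "author"],
                    },
                }
            },
            "required": ["books"],
        },
    },
}
```

The trade-off is that the arguments then arrive as one JSON object with a "books" list, so the calling code has to loop over that list instead of over multiple tool calls.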