Function calling for list of user prompts

Hi

Use Case:
I am working on a use case where I have a list of prompts (more like instructions):

user: "prompt 1\n
       prompt 2\n
       prompt 3"

I also have a list of actions defined on my side, each with a different set of parameters:


functions: [
    {
        name: "action1",
        description: "<some description>",
        parameters: {
            type: "object",
            properties: {
                p1: { type: "string", description: "<desc>" }
            }
        }
    },
    {
        name: "action2",
        description: "<some description>",
        parameters: {
            type: "object",
            properties: {
                p1: { type: "string", description: "<desc>" },
                p2: { type: "string", description: "<desc>" }
            }
        }
    },
    {
        name: "action3",
        description: "<some description>",
        parameters: {
            type: "object",
            properties: {
                p1: { type: "string", description: "<desc>" },
                p2: { type: "string", description: "<desc>" },
                p3: { type: "string", description: "<desc>" }
            }
        }
    }
]

Problem Statement:
Is it possible to use the function calling API to map each prompt to an action and determine its parameters, all in one call?

Broadly speaking, yes.

Take this example :

gc = GoalComposer(provider="OpenAI", model="gpt-4o-mini")
gc = gc(global_context="global_context")

gc\
    .goal("correct spelling", with_goal_args={'text': """I wonde how the world will be sustainable in 100 years from now. We use much fossil fuel. 
                                              we not care for enviorment. 
                                               """})\
    .goal("summarize text")

Here we are trying to correct the spelling and then summarize the text through user prompts of “correct spelling” and “summarize text”.

corrected text: I wonder how the world will be sustainable in 100 years from now. We use too much fossil fuel. We do not care for the environment.
summarized text : The world's sustainability in 100 years is uncertain due to excessive fossil fuel usage and neglect of environmental concerns

The underlying functions being called are defined here:

@manage_function(TOOLS_FUNCTIONS, "text_functions")
def conform_text(text:Annotated[str, "text which has to conform to spelling and grammar"], 
                 provider:Annotated[str, "LLM provider to use"]) \
    -> Annotated[dict,
                 """
                 :return: Returns a dictionary with keys
                 - conformed_text(str): text which has the applied spelling and grammar  
                 """]:  
    """ This function corrects a given piece of text from a spelling and grammar standpoint"""

    messages = []
    messages.append({"role": "system", "content": "You are a grammar and spelling expert in English. Your job is to apply the grammar and spelling rules to the given text ONLY. NO commentary is required."})
    messages.append({"role": "user", "content": text})
                    
    chat_completion = openai_chat.chat.completions.create(
            messages = messages,
            model=MODEL_OPENAI_GPT4_MINI,
            temperature=0.1,
    )    
    
    print(f"corrected text: {chat_completion.choices[0].message.content}" ) 

    return {'conformed_text': chat_completion.choices[0].message.content}

@manage_function(TOOLS_FUNCTIONS, "text_functions")
def summarize_text(text:Annotated[str, "text to be summarized"], 
                   provider:Annotated[str, "llm provider to use such as openai/groq"],
                   model:Annotated[str, "model from specific llm provider to use"]) \
    -> Annotated[dict,
                 """
                 :return: Returns a dictionary with keys
                 - summarized_text(str): text which is summarized   
                 """]:
    """ This function summarizes given piece of text"""

    messages = []
    messages.append({"role": "system", "content": "You are a summarization expert in English. Your job is to summarize each paragraph into one sentence; ONLY linguistically. DO NOT INTERPRET. ONLY return the summarized text. NO commentary is required."})
    messages.append({"role": "user", "content": text})
                    
    chat_completion = groq_chat.chat.completions.create(
            messages = messages,
            model=MODEL_N1,
            temperature=0.1,
    )

    print(f"summarized text : {chat_completion.choices[0].message.content}")
    return {'summarized_text': chat_completion.choices[0].message.content}

The mapping is done with function calling. This use case is obviously more complex because it involves function chaining, but the definitions of these functions should provide broad guidelines on how to approach it.
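The chaining idea itself can be sketched without any framework: each step's dictionary output is merged into a context that the next step reads its input from. A minimal runner, using hypothetical pure-Python stand-ins for the two tool functions (the real ones above call an LLM):

```python
# Minimal sketch of function chaining: each step's dict output is merged into
# the context the next step reads from. Tool bodies are illustrative stubs.
def conform_text_stub(text):
    # stand-in for conform_text: pretend to fix one spelling error
    return {"conformed_text": text.replace("wonde ", "wonder ")}

def summarize_text_stub(text):
    # stand-in for summarize_text: keep only the first sentence
    return {"summarized_text": text.split(".")[0] + "."}

def run_chain(steps, context):
    """steps: list of (function, key-in-context-to-feed-it) pairs."""
    for fn, input_key in steps:
        context.update(fn(context[input_key]))
    return context

ctx = run_chain(
    [(conform_text_stub, "text"), (summarize_text_stub, "conformed_text")],
    {"text": "I wonde how the world will be sustainable. We use much fossil fuel."},
)
print(ctx["summarized_text"])
```

The `run_chain` helper and the stub names are made up for illustration; the point is only that the output key of one goal becomes the input key of the next, which is what the GoalComposer chain does under the hood.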

Basically, yes. And the 3 functions will be invoked in parallel, in the same response.

for example,

user prompt:

please tell me the weather in tokyo tomorrow. also tell me about any events happening in sapporo on saturday evening and search for hotel near nakajima park.

tools:

get_weather({ date: '2024-09-19', location: 'Tokyo' })
get_events({ date: '2024-09-21', time: '18:00', location: 'Sapporo' })
search_hotels({ checkin: '2024-09-21', location: 'Nakajima Park, Sapporo' })
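On the handling side, a single Chat Completions response carries all of these as entries in `message.tool_calls`, and you dispatch on the function name. A minimal sketch of that dispatch step, with hypothetical stub implementations of the three tools (the `calls` list below mimics the shape the API returns, where `arguments` is a JSON string):

```python
import json

# Hypothetical local implementations of the three tools (stubs for illustration).
def get_weather(date, location):
    return f"weather for {location} on {date}"

def get_events(date, time, location):
    return f"events in {location} on {date} at {time}"

def search_hotels(checkin, location):
    return f"hotels near {location} from {checkin}"

LOCAL_TOOLS = {
    "get_weather": get_weather,
    "get_events": get_events,
    "search_hotels": search_hotels,
}

def dispatch(tool_calls):
    """Run every tool call the model requested and collect the results."""
    results = []
    for call in tool_calls:
        fn = LOCAL_TOOLS[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])  # arguments arrive as a JSON string
        results.append({"tool_call_id": call["id"], "output": fn(**args)})
    return results

# Same shape as message.tool_calls in the API response:
calls = [
    {"id": "call_1", "function": {"name": "get_weather",
     "arguments": '{"date": "2024-09-19", "location": "Tokyo"}'}},
    {"id": "call_2", "function": {"name": "get_events",
     "arguments": '{"date": "2024-09-21", "time": "18:00", "location": "Sapporo"}'}},
    {"id": "call_3", "function": {"name": "search_hotels",
     "arguments": '{"checkin": "2024-09-21", "location": "Nakajima Park, Sapporo"}'}},
]
print(dispatch(calls))
```

In real code each result would be sent back to the model as a `tool` role message keyed by `tool_call_id`, so it can compose the final answer.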

1 Like

Hi @supershaneski @icdev2dev
Thanks for the response. I tried the example @supershaneski provided, using GPT-4, but it is only returning one function in the response's tool_calls.

code:

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the weather details for provided date and location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "date": {
                        "type": "string",
                        "description": "Date for which weather details are required.",
                    },
                    "location": {
                        "type": "string",
                        "description": "location for which weather details are required.",
                    },
                },
                "required": ["date","location"],
                "additionalProperties": False,
            },
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_events",
            "description": "Get the event details for provided date, time and location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "date": {
                        "type": "string",
                        "description": "Date for which event details are required.",
                    },
                    "time": {
                        "type": "string",
                        "description": "time for which event details are required.",
                    },
                    "location": {
                        "type": "string",
                        "description": "location for which event details are required.",
                    },
                },
                "required": ["date","time","location"],
                "additionalProperties": False,
            },
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search_hotels",
            "description": "Search hotels for provided location and date.",
            "parameters": {
                "type": "object",
                "properties": {
                    "checkin": {
                        "type": "string",
                        "description": "date for checkin."
                    },
                    "location": {
                        "type": "string",
                        "description": "location for which hotels are required."
                    },
                },
                "required": ["checkin","location"],
                "additionalProperties": False,
            },
        }
    }
]

llm_using_tools = chat_llm.bind_tools(tools)

messages = [ 
            {"role": "user", "content": "please tell me the weather in tokyo tomorrow. also tell me about any events happening in sapporo on saturday evening and search for hotel near nakajima park."} ]
tool_response = llm_using_tools.invoke(messages)
print(tool_response.additional_kwargs)

output

{'tool_calls': [{'id': 'call_64MBvStebCTv7HiQGoIwaq91', 'function': {'arguments': '{\n  "date": "tomorrow",\n  "location": "Tokyo"\n}', 'name': 'get_weather'}, 'type': 'function'}]}

Am I missing something here?

There might be something wrong in the code, since the invoked tools should contain the 3 functions. Are you iterating on the tool_calls?
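For reference, iterating the LangChain response means walking `additional_kwargs["tool_calls"]` and JSON-decoding each `arguments` string. A small sketch against the exact shape printed above (with a parallel-capable model, this list would contain three entries instead of one):

```python
import json

def extract_calls(additional_kwargs):
    """Return (name, parsed_args) for every tool call the model requested."""
    return [(c["function"]["name"], json.loads(c["function"]["arguments"]))
            for c in additional_kwargs.get("tool_calls", [])]

# Shape copied from the printed output above.
additional_kwargs = {
    "tool_calls": [
        {"id": "call_64MBvStebCTv7HiQGoIwaq91", "type": "function",
         "function": {"name": "get_weather",
                      "arguments": '{\n  "date": "tomorrow",\n  "location": "Tokyo"\n}'}},
    ]
}

for name, args in extract_calls(additional_kwargs):
    print(name, args)
```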

here is a sample playground implementation:

Please note that in actual code, today should probably be dynamic since the assistant needs date and time parameters in tools.

`Today is ${new Date()}.`
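In Python, the equivalent is to inject the current date into the system message so relative phrases like "tomorrow" and "saturday evening" resolve to concrete dates; you can also compute the target date yourself to sanity-check what the model produces. The helper names here are illustrative, not part of any SDK:

```python
from datetime import date, timedelta

def system_message_with_date(today: date) -> dict:
    """Anchor the model in time so 'tomorrow' / 'saturday' resolve to real dates."""
    return {"role": "system",
            "content": f"Today is {today.isoformat()} ({today.strftime('%A')})."}

def next_weekday(today: date, weekday: int) -> date:
    """Next occurrence of weekday (Mon=0 .. Sun=6), counting today itself."""
    return today + timedelta(days=(weekday - today.weekday()) % 7)

# Example: from Thursday 2024-09-19, the coming Saturday is 2024-09-21.
today = date(2024, 9, 19)
print(system_message_with_date(today)["content"])
print(next_weekday(today, 5))
```

In production you would use `date.today()` rather than a fixed date, of course.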

Here is the chat playground conversation you can test.

1 Like

The following is the whole response I got:

content='' additional_kwargs={'tool_calls': [{'id': 'call_..', 'function': {'arguments': '{\n  "date": "2022-03-18",\n  "location": "Tokyo"\n}', 'name': 'get_weather'}, 'type': 'function'}]} response_metadata={'token_usage': {'completion_tokens': 28, 'prompt_tokens': 290, 'total_tokens': 318}, 'model_name': 'gpt-4-32k', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None} id='run-..'

The only difference I see is that you used gpt-4o-mini, while I am using gpt-4-32k. Would that be a problem?

1 Like

[Which models support function calling?](https://platform.openai.com/docs/guides/function-calling/which-models-support-function-calling)

Function calling was introduced with the model releases of June 13, 2023 (gpt-4-0613 and gpt-3.5-turbo-0613). Supported models include: gpt-4o, gpt-4o-2024-08-06, gpt-4o-2024-05-13, gpt-4o-mini, gpt-4o-mini-2024-07-18, gpt-4-turbo, gpt-4-turbo-2024-04-09, gpt-4-turbo-preview, gpt-4-0125-preview, gpt-4-1106-preview, gpt-4, gpt-4-0613, gpt-3.5-turbo, gpt-3.5-turbo-0125, gpt-3.5-turbo-1106, and gpt-3.5-turbo-0613.

Legacy models released before this date were not trained to support function calling.

Parallel function calling is supported on models released on or after Nov 6, 2023. This includes: gpt-4o, gpt-4o-2024-08-06, gpt-4o-2024-05-13, gpt-4o-mini, gpt-4o-mini-2024-07-18, gpt-4-turbo, gpt-4-turbo-2024-04-09, gpt-4-turbo-preview, gpt-4-0125-preview, gpt-4-1106-preview, gpt-3.5-turbo, gpt-3.5-turbo-0125, and gpt-3.5-turbo-1106.
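Note that gpt-4-32k does not appear in the parallel list, which explains why the earlier run returned only a single tool call. If you want to catch this in code before invoking, a simple allowlist check works; the set below is hand-copied from the excerpt above, so treat it as a snapshot that will go stale as new models ship:

```python
# Models supporting parallel tool calls, per the doc excerpt above (snapshot,
# not authoritative; newer models will be missing from this set).
PARALLEL_TOOL_CALL_MODELS = {
    "gpt-4o", "gpt-4o-2024-08-06", "gpt-4o-2024-05-13",
    "gpt-4o-mini", "gpt-4o-mini-2024-07-18",
    "gpt-4-turbo", "gpt-4-turbo-2024-04-09", "gpt-4-turbo-preview",
    "gpt-4-0125-preview", "gpt-4-1106-preview",
    "gpt-3.5-turbo", "gpt-3.5-turbo-0125", "gpt-3.5-turbo-1106",
}

def supports_parallel_tool_calls(model: str) -> bool:
    return model in PARALLEL_TOOL_CALL_MODELS

print(supports_parallel_tool_calls("gpt-4-32k"))    # gpt-4-32k predates Nov 2023
print(supports_parallel_tool_calls("gpt-4o-mini"))
```

Switching the model string from gpt-4-32k to any model in this set (e.g. gpt-4o-mini) should make the earlier example return all three tool calls in one response.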

3 Likes