Setting a default function-calling function

Hello,

I am trying to set up the function-calling API so that it will always respond with a function call regardless of what the user prompts it with. I provide a list of possible commands, and would like it to default to a “SendMessage” function if none of the others are good matches. Is there a way to configure this via the API? I attempted to specify this in the system prompt but it didn’t work.


Hi,

Here is an example function call

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    function_call="auto",  # auto is default, but we'll be explicit
)

Change “auto” to “your_function_name” and it should call it every time, regardless of the user’s input.


API functions

Introduction

Required: a careful reading of, and research into, the issues faced by the person posing the question, and then developing a custom solution; not mis-specifying a basic part of function calling.

“auto” lets the AI decide when calling a function is appropriate.
To make a function mandatory instead, we must specify the function name as an object, in the form:
function_call={'name': 'encoded_output_maker'}
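
For example, a minimal sketch of such a forced call (the function name echoes the form above; messages and functions are assumed to be defined elsewhere):

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        functions=functions,
        # a dict with the name, not a bare string, forces this function on every call
        function_call={"name": "encoded_output_maker"},
    )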


Also specified by @ai_user, and what makes this case unique, was having a default, or fallback, function: one called when no other function is needed and output is generated directly for the user, yet without disabling the capability for other functions to run.

Caveats

Even with the mandatory function_call parameter specified per the documentation, the AI is quite resistant to generating typical user-facing output within a function. It is much more predisposed to rewriting, making short queries, or outputting into formatted API containers; barring that, it will simply ignore your mandatory function.

Solution

So one must come up with a compelling, innovative solution to meet this specific need (here, one already conceived and living rent-free in my own mushy AI brain, waiting for fruition). I was able to implement it, verify my expectations of what I could accomplish through understanding function behavior, and I can share it with you.

Challenges

So: the AI likes answering the user directly when a direct, AI-phrased response is called for, ignoring all functions trying to grab that output. How can we alter that behavior? By making the function about something of major importance: safety.

What kind of function does an AI almost have to obey? Moderation. You of course don’t moderate functions, you only moderate AI language. Perfect.

So we have the AI sending its normal output to a simulated filthy-word check. I write that, give it a system prompt, and test.

Note first that the system prompt still permits other functions, a facet that can be fine-tuned by weighing the strength of the wording against the strength of the mandatory function_call field in the API request the AI receives.

Code example

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        max_tokens=400,
        temperature=0.1,
        messages=[
            {
                "role": "system",
                "content": """You are Answerbot, a large language model trained by OpenAI.
Knowledge cutoff: 2021-09
Mandatory function: moderation endpoint, unless other function is needed first to answer"""
            },
            {
                "role": "user",
                "name": "user",
                "content": "How many grapes will fit in an orange?"
            }
        ],
        functions=[
            {
                "name": "moderation",
                "description": "AI produces its language not as user response, but instead creates it here for safety check",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "response": {
                            "type": "string",
                            "description": "AI produces its output not as user response, but instead function response"
                        },
                    },
                    # list the property name itself as required
                    "required": ["response"],
                },
            }
        ],
        # a dict (not a bare string) forces this function on every call
        function_call={'name': 'moderation'},
    )
    print('--------------------')
    print(response)

Response (raw)

  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "function_call": {
          "name": "moderation",
          "arguments": "{\n  \"response\": \"The number of grapes that can fit in an orange depends on the size of the orange and the size of the grapes. Generally, an average-sized orange can hold about 10-12 grapes. However, this can vary depending on the specific sizes of the orange and grapes.\"\n}"
        }
      },
      "finish_reason": "stop"
    }
  ],

Conclusion

Our function instruction? Successful.

A normally-worded user question is placed into a function call.
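
To surface that text to the user, the arguments string can be decoded; a minimal sketch against the response shape shown above:

    import json

    message = response["choices"][0]["message"]
    if message.get("function_call"):
        # the arguments arrive as a JSON string; decode to recover the AI's text
        args = json.loads(message["function_call"]["arguments"])
        print(args["response"])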

Additional tuning recommended

The length and complexity of the answering task, and of other distracting functions, will alter the transition point and threshold between AI narrator output and calling other functions to perform tasks before that output. This must be re-balanced iteratively against actual examples of inputs, as in the sketch below.
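
For example, when iterating you might place a deliberately tempting task function alongside the moderation function and observe which one wins for various inputs (the second function here is hypothetical, made up for illustration):

    functions = [
        moderation_function,  # the fallback "moderation" definition from the example above
        {
            # hypothetical distracting task function, purely for threshold testing
            "name": "get_article_summary",
            "description": "Retrieves a short factual summary for a named topic",
            "parameters": {
                "type": "object",
                "properties": {
                    "topic": {"type": "string", "description": "topic to summarize"}
                },
                "required": ["topic"],
            },
        },
    ]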

Further thoughts: Conversation history

If the AI cannot see that it called a function, and is instead shown a repeated past history of itself just answering directly, it may be trained by that history and thus avoid the function later.

On follow-up calls after the first, you should place a simulated “function” role message in the history to show the AI that it actually got a response from the fake moderation method: the same function name, with a value like {"flagged": false}. This should encourage continued function-call output.
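
A minimal sketch of such a follow-up history (the content values are illustrative):

    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How many grapes will fit in an orange?"},
        # show the AI its own prior function call...
        {"role": "assistant", "content": None, "function_call": {
            "name": "moderation",
            "arguments": '{"response": "Generally, about 10-12 grapes."}'}},
        # ...and a simulated return value from the fake moderation method
        {"role": "function", "name": "moderation", "content": '{"flagged": false}'},
        {"role": "user", "content": "And how many oranges fit in a watermelon?"},
    ]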

Summary:

  • create a required “moderation” function that implies it will check AI language
  • receive the normal output as the function’s content
  • because of its placement in the workflow, other functions remain permissible