Chat completion is explaining the functions instead of actually calling them

I’m finding the new functions parameter incredibly useful, but sometimes I’m having an issue where the AI describes the functions rather than calling them. For example, in my system prompt, I say this:

The goal is to determine whether this business process is started manually or by a trigger.

and I have a function defined like this:

{
	"name": "set_start_event_type",
	"description": "Records whether the start event is a manual event or a triggered event",
	"parameters": {
		"type": "object",
		"properties": {
			"event_type": {
				"type": "string",
				"description": "Whether the start event is manual or triggered",
				"enum": [
					"manual",
					"triggered"
				]
			}
		},
		"required": ["event_type"]
	}
}
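
For reference, I’m passing it in roughly like this (a sketch; the placeholders stand in for my real prompt and the definition above):

```python
import openai

system_prompt = "..."             # the system prompt described above
set_start_event_type_def = {...}  # the JSON definition above

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "system", "content": system_prompt}],
    functions=[set_start_event_type_def],
)
```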

I then describe my business process to the AI. Sometimes it will correctly call the function as expected, but it also frequently responds with an assistant message like this:

You may use the function `set_start_event_type` to record this detail:

functions.set_start_event_type({
  "event_type": "triggered"
});

In other words, it tells the user about the functions! Has anyone experienced this before, or have any tips on how I can avoid this problem?

Welcome to the Forum!

Have you explicitly told the model it must call the function? i.e. “You must call the ‘set_start_event_type’ function at all times”

Thanks for the reply! Yes, I have tried that, and it has the same result. Actually, I thought that was the reason it was exhibiting this behaviour at first, so I removed it, but it kept happening anyway. It seems that as long as the function exists, GPT may choose to interpret it as something the user might do.

That said, I have another use case which works pretty well. I’m wondering if it’s related to the nuances of how the function is defined and named, etc. I’ll experiment some more and report back but I’d love to hear of anyone else dealing with this problem.

Interesting. How is your function call set up, i.e. do you have function_call="auto"? Whether you do or not, try modifying/adding it to force the function by name with function_call={"name": "set_start_event_type"}, like this:

(Note the slight change in the function description)

```python
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],  # prompt = your user message
    functions=[
        {
            "name": "set_start_event_type",
            "description": "Must always be called. Records whether the start event is a manual event or a triggered event",
            # ... rest of your function definition
        }
    ],
    function_call={"name": "set_start_event_type"},
)

message = response["choices"][0]["message"]
print(message)
```

Yes, it’s set to auto. I have tried setting function_call={"name": "set_start_event_type"}, and although that does successfully call my function, it does so every single time. This isn’t what I want, as the whole idea is that the user discusses back and forth with my chatbot until the event type is established, at which point the function should be called. Perhaps there’s a better pattern for achieving something like this, but I’ve used it successfully in another use case.
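
For reference, the flow I’m after looks roughly like this (a sketch; handle_set_start_event_type and show_to_user are my own hypothetical helpers, not part of the API):

```python
import json
import openai

def run_turn(messages, functions):
    # Let the model choose between replying in text and calling a function.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=messages,
        functions=functions,
        function_call="auto",
    )
    message = response["choices"][0]["message"]

    if message.get("function_call"):
        # The event type is established: record it silently.
        args = json.loads(message["function_call"]["arguments"])
        handle_set_start_event_type(args["event_type"])
    else:
        # Still discussing: surface the reply to the user.
        show_to_user(message["content"])
```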

Perhaps you need to inform the user that they should make it clear when they need the AI to do something; if it’s ambiguous, that will be the cause.

If you have an example prompt that you think is clear but that the AI did not respond to correctly, it might be helpful to post it for review.

Yes, perhaps. The problem is that, for now, these function calls are meant to be invisible to the user.
They are primarily designed to guide the flow of the conversation. Rather than writing an enormous prompt full of “IF we have this information THEN ask the user for x otherwise do y”, I’m hoping to keep track of my current “conversation flow” via these functions. After the start event type is set, we can move on to the next topic.

Also, I’m using them to keep track of established facts about the project so that I can provide them as context to later AI conversations. So, much further down the line, I might be sending messages appended with “The start event is manually triggered”.
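
Concretely, I’m imagining bookkeeping along these lines (my own code, nothing API-specific):

```python
# Facts recorded by earlier function calls (my own bookkeeping).
established_facts = {"start_event_type": "manual"}

def facts_as_context(facts):
    # Render established facts into a system message for later calls.
    lines = []
    if facts.get("start_event_type"):
        lines.append(f"The start event is {facts['start_event_type']}.")
    return {"role": "system", "content": " ".join(lines)}
```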

I will think about your suggestion, but I think it may be unnatural to encourage the user to be so specific about certain facts. Plus, I still don’t think it should be regurgitating internal function specifications anyway!

Understood. If you can post some full interactions where you see the errant behaviour, it would be helpful to look at them.

Sure. I tried starting a chat with GPT-4, default settings, the function described above, and the following system prompt:

Process summary: Ordering a pizza

Guide the user in modelling this business process as a process diagram.  The goal is to determine whether this process is started manually or by a trigger.

I don’t have any other messages; I’m letting the assistant start the interaction. I ran this four times, and three out of the four times I got this issue, while the final run did call the function as expected. Here are the incorrect outputs:

In the case of ordering a pizza, the process is usually started manually. This is because the customer decides when they want to order a pizza, initiates the order, and selects their desired pizza and additional items. Therefore, we can record the start event as a manual event. 

Here is the code to set the start event type:

```ts
functions.set_start_event_type({ "event_type": "manual" });
```

Another:

Sure, let's break down the process of ordering a pizza:

1. The customer decides to order a pizza (This could be considered the start of the process)
2. The customer chooses the type of pizza they want
3. The customer places the order
4. The pizza shop receives the order
5. The pizza shop prepares the pizza
6. The pizza shop delivers the pizza
7. The customer receives and pays for the pizza

Looking at this process, it begins with the customer's decision to order a pizza. This is a manual event, initiated by the customer. Therefore, we can set the start event type as "manual". 

Here is how you can set it:

```javascript
functions.set_start_event_type({
  event_type: "manual",
});
```
This code will record that the start event of the pizza ordering process is a manual event.

And finally this one, which actually corrupts the function call anyway:

To determine whether the pizza ordering process is initiated manually or triggered, let's consider the steps involved. Typically, ordering a pizza can be broken down as follows:

1. Decision to order a pizza: This could be a spontaneous decision, or it could be driven by hunger, a craving, a party, etc. This decision-making process is usually manual since it depends on human intervention.

2. Choosing the pizza: The customer selects the type of pizza they want, including the size, crust, toppings, etc. This step is also manual because it relies on the customer's preferences.

3. Placing the order: The customer completes the order process either online, over the phone, or in person. This step is also manual as it requires the customer's action to complete the order.

4. Making the pizza: The pizza shop prepares the pizza according to the customer's order. This step could be considered as triggered since it starts automatically once the order is placed.

5. Delivery or pickup: The customer either picks up the pizza or it is delivered to them. This step could be either manual or triggered depending on the circumstances.

Considering these steps, we can say that the pizza order process is initially manual (steps 1-3) but then becomes triggered (steps 4-5). However, since the start event is the decision to order a pizza, which is a manual action, we can classify the start event as manual.

Here's the command to set the start event type to manual:

```
{
  "event_type": "manual"
}
```

You will use this command to invoke the function `set_start_event_type` under the `functions` namespace.

Note that I never even mention functions in the system message. It decides to include this information of its own accord.

My guess is that the model is just having a hard time understanding the problem, so I need to keep working on both the prompt and the function description. Nonetheless, I really don’t want this to be how the user experiences such a problem!

You may find these free, short courses by Andrew Ng and OpenAI staff members very interesting; they discuss and give examples of business-oriented chatbot solutions, one of which is actually a pizza ordering system.


Maybe some tips from my side:
The main description should be aimed at helping the model decide whether or not to call the function, not at describing the properties (that can go in the parameter descriptions).
GPT-4 is much better at following instructions for function calling, in my experience.
It might be possible to add one small intermediate step to solve this issue: an initial function call, forced by name, that decides which function to call, e.g. a function_decision function with a parameter of type enum whose values are the “function names” plus one option set to “no function is relevant”. The answer to that parameter can then be used to call the actual function or to generate a response.
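
Something like this, as a sketch (the names are illustrative, not from the API):

```python
import json
import openai

messages = [{"role": "user", "content": "..."}]  # your conversation so far

# Illustrative decision function: the enum lists your real function
# names plus an explicit "no function is relevant" option.
decision_function = {
    "name": "function_decision",
    "description": "Decides which function, if any, should be called next",
    "parameters": {
        "type": "object",
        "properties": {
            "decision": {
                "type": "string",
                "enum": ["set_start_event_type", "no function is relevant"],
            }
        },
        "required": ["decision"],
    },
}

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=messages,
    functions=[decision_function],
    function_call={"name": "function_decision"},  # force the decision step
)

decision = json.loads(
    response["choices"][0]["message"]["function_call"]["arguments"]
)["decision"]
# If decision names a real function, make a second call forcing that
# function by name; otherwise generate a normal text response.
```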

Thanks for the advice. I think you’re right about the description helping to aid the decision about whether to call. I’m also wondering if functions shouldn’t be thought of as something that the AI triggers, but as something that the user triggers. So telling the AI “please call the function when x occurs” isn’t helpful, but writing the description to capture something the user did could be better.
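
For example, something along these lines might frame it better (just an illustration of the idea, not tested):

```python
# Illustration only: reframe the description around what the user
# established, rather than instructing the model to call the function.
set_start_event_type_def = {
    "name": "set_start_event_type",
    "description": (
        "The user has established whether the business process "
        "starts manually or via a trigger"
    ),
    # ... parameters as before
}
```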

In any case, though, I’m still not sure why it’s regurgitating the function descriptions to the user. I don’t see why a poor description should cause that.