We find it very hard to prompt the model to use function calls: it sometimes returns a normal text response that merely looks like a function call instead of an actual function call. Is there a guaranteed way to make the model return only function calls for certain prompts?
Remove the system prompt and the chat history. Only include the function definitions and the user message.
Instead of using
function_call="auto",
use
function_call={"name": "function_name"},
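For what it's worth, here is a minimal sketch of forcing a specific function, assuming the pre-v1 openai Python SDK that was current when this thread was active; the get_weather definition is a hypothetical placeholder:

import openai

# Hypothetical function definition, for illustration only.
functions = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    # Per the advice above: no system prompt, no chat history.
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    functions=functions,
    # Naming a specific function forces the model to call it.
    function_call={"name": "get_weather"},
)

print(response["choices"][0]["message"]["function_call"])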
This works if you have a specific function you want it to call. What if you have several functions, and you want it to pick one of them, but not to return a normal response?
Something like function_call="force"
or function_call="always"
would be very helpful for this.
You can try using a system message to tell ChatGPT to only use function calls, like this:
Only use the functions you have been provided with
Sadly, there is no guaranteed way of making it call one of multiple defined functions.
As mentioned, you can select a specific function via function_call={"name": "function_name"},
and you can do all kinds of prompt engineering to increase your chances. For example, you could set n>1
and hope that one of the choices is a function call, or you could follow up a text response by saying something like Please call a function.
and then (if it works) remove the text response and the suggestion from the message queue and only forward the function call.
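A minimal sketch of that retry idea, again assuming the pre-v1 openai SDK; get_function_call is a hypothetical helper, and there is still no guarantee the second attempt returns a call:

import openai

def get_function_call(messages, functions, model="gpt-3.5-turbo"):
    """Ask once; if the model answers in plain text, nudge it and retry."""
    response = openai.ChatCompletion.create(
        model=model, messages=messages, functions=functions
    )
    message = response["choices"][0]["message"]
    if message.get("function_call"):
        return message["function_call"]

    # Follow up the text response with a nudge and ask again. The text
    # response and the nudge are NOT added to the real message queue;
    # only the resulting function call is forwarded.
    retry = messages + [message, {"role": "user", "content": "Please call a function."}]
    response = openai.ChatCompletion.create(
        model=model, messages=retry, functions=functions
    )
    return response["choices"][0]["message"].get("function_call")  # may be None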
I agree that there should be a function_call="always"
parameter.
It is still the AI that decides whether to generate function-calling language.
There is no “forcing” when someone can use jailbreak language to discard all of your prompting and multi-shot examples and make it bark like a dog.
Same problem here.
Want a toggle that is:
function_call: 'always'
I can make the same call 5 times and get a standard response 2 out of 5 times, even with a prompt like
{ "role": "system", "content": "Always make a function call." }
And if I add a message history formatted in the typical way, with alternating turns between user and assistant, it almost always responds without the function call.
Oh hey, I had an idea while writing that last comment that I just tried, and it seems to be working: format the message history so that the assistant's responses are in the format of function calls. Here's my function to prep the message history. The line I commented out was how I was doing it before (which almost always resulted in not returning the function calls). I haven't gotten this method to fail yet.
formatMessageHistoryForGPT() {
  // If there is no message history, return false.
  if (this.message_history.length == 0) {
    return false;
  }
  let message_history = [];
  for (let i = this.message_history.length - 1; i >= 0; i--) {
    let message = this.message_history[i];
    let user_message = { "role": "user", "content": message.msg_from_customer };
    // Old approach (almost always resulted in no function call):
    //let assistant_message = {"role": "assistant", "content": message.response_to_customer};
    // New approach: replay the assistant turn as a function_call message.
    let assistant_message = {
      role: 'assistant',
      content: null,
      function_call: {
        name: 'respond_to_customer',
        // JSON.stringify keeps the arguments valid JSON even when the
        // response contains quotes or newlines (manual string
        // concatenation would break on those).
        arguments: JSON.stringify({
          response: message.response_to_customer,
          playbook_state: message.playbook_state,
        }),
      },
    };
    // Add each message only if it is not null.
    if (message.msg_from_customer != null) message_history.push(user_message);
    if (message.response_to_customer != null) message_history.push(assistant_message);
  }
  return message_history;
}
I managed to achieve this by asking it NOT to respond to the user and to only use function calls.
I can't remember my exact prompt, but I really had to drill the point home:
“DO NOT respond to the user under ANY circumstances”… that kind of thing.
I also think I made this prompt a system prompt and made it the most recent message in the list.
It worked on GPT-3.5 too.
Hope that helps
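If it helps, a rough sketch of that ordering, assuming the pre-v1 openai SDK; the prompt wording is paraphrased (the exact prompt is lost), and history and functions are assumed to exist in your app:

import openai

messages = history + [
    # The drill-it-home instruction goes last, as a system message.
    {
        "role": "system",
        "content": "DO NOT respond to the user under ANY circumstances. "
                   "Only use the functions you have been provided with.",
    },
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
    functions=functions,
)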
Hi, my approach to never have the model respond to the user was to make a preliminary call to
"gpt-3.5-turbo-instruct" asking it to select the function to call, and then a second call to "gpt-3.5-turbo" using function_call={"name": "<function returned by instruct>"}, forcing it to execute that function.
Here is the code I used for the instruct call
# LangChain imports (assuming the langchain version current at the time):
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

model = OpenAI(model="gpt-3.5-turbo-instruct", temperature=0)

prompt = PromptTemplate(
    input_variables=["input"],
    template="""Welcome to Pizza Dream! As the AI assistant, your role is to assist customers in placing their orders.
Given the following conversation:
------------------------
Customer: Hello, good evening
Assistant: Hello! Good evening! How can I help you today?
Customer: I would like a mozzarella pizza.
Assistant: Sure, I've noted down a mozzarella pizza. Anything else?
Customer: {input}
------------------------
Based on the customer's last request and the functions' capabilities listed below, determine which function to return:
[{{"function": "PizzaBeverageDessertOrder", "capabilities": "Call this when receiving new orders, changing orders or making special requests for pizzas, beverages or desserts"}},
{{"function": "AddressOrder", "capabilities": "Call this with the customer's address to deliver their order"}},
{{"function": "QuestionsAboutPizzas", "capabilities": "Answer questions about pizza toppings, beverages and desserts"}},
{{"function": "QuestionsOutOfContext", "capabilities": "Answer any question not in the context of ordering at the pizzeria"}}]
Provide your selected function in JSON format, with the reason you chose that function in a key "reason".""",
)

chain = prompt | model
print(chain.invoke({"input": "How do I fly a rocket"}))
The result of this is:
{
    "function": "QuestionsOutOfContext",
    "reason": "The customer's last request is not related to placing an order at the pizzeria, so the function to handle questions out of context would be the most appropriate."
}
Then I would call "gpt-3.5-turbo" with function_call={"name": "QuestionsOutOfContext"}.
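A minimal sketch of that second call, assuming the pre-v1 openai SDK; instruct_output, messages, and functions are assumed to exist from the previous step:

import json
import openai

# Parse the function the instruct model selected in the previous step.
selected = json.loads(instruct_output)["function"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,                  # the conversation so far
    functions=functions,                # definitions for all four functions
    function_call={"name": selected},   # force the selected function
)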
Hope that helps too
That would be helpful. My use case is that the function works correctly the first time it is called, but on subsequent calls the model seems to reuse data from earlier in the conversation rather than making a fresh function call. The results from my function might have changed, but these changes are ignored because the assistant doesn't recognize that it needs to perform the action again for the same query.
Hi!
This is an old topic, and there is already a built-in solution for the original question. Workarounds are no longer needed.
https://platform.openai.com/docs/guides/function-calling/function-calling-behavior
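For anyone landing here now, a minimal sketch of that built-in option with the current v1 Python SDK: tool_choice="required" makes the model call at least one tool (the get_weather definition is a hypothetical placeholder):

from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="required",  # the model must call at least one tool
)

print(response.choices[0].message.tool_calls)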
I suggest opening a new topic for any new questions related to function-calling behavior.