I am making a conversational bot. If the user responds with either a phone number or an email address, I send the chat history to another bot that generates a summary of the conversation and sends a text to a colleague. To this end, I am using a function call.
The bot successfully calls the function when it sees a phone number or email, and the summary is sent out. But when the conversational bot should return a reply to the user (e.g., “Thank you for your patience! We will be in contact shortly.”), I get an error:
openai.error.InvalidRequestError: Invalid value for 'content': expected a string, got null.
The function I am using has no parameters. I find the examples on function calling in the documentation + cookbook a bit limited, and I am not sure I am using it as intended.
Has anybody done something similar and got it working? Would love to see more examples. Any pointers are appreciated. I’ll provide the function I am calling below.
👷🏼‍♂️ Function
import json

def notify_salesman(self):
    """
    If the user is posting credentials (phone number or email), you must
    notify a salesman by calling this function.
    """
    summarizer_bot = SummarizerBot(self.conversation_id)
    # Call the API and get a response from the summary bot
    response = summarizer_bot()
    summary = response["choices"][0]["message"]["content"]
    # Send the summary to the salesman
    print(f"Customer on the line:\n\n{summary}\n")
    # NOTE: is a return statement necessary?
    return json.dumps(
        {"status": "salesman has been notified, please proceed with the conversation"}
    )
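For reference, the matching entry in the functions list for a parameterless function looks roughly like this; a minimal sketch, where the name and description are assumed from the code above:

functions = [
    {
        "name": "notify_salesman",
        "description": "Notify a salesman when the user posts a phone number or email",
        # a function with no parameters still needs an (empty) object schema
        "parameters": {"type": "object", "properties": {}},
    }
]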
Unless you use extremely explicit instructions about the type of output you want, that will be the behavior of the function-calling AI model: it either gives a reply to the user in the “content” field, or it calls a function.
Consider the typical reasons for a function:
information: answering the question could use the resources the function provides
action: the AI and function perform a task on behalf of the user
In each of these scenarios, the AI expects that the results of the function call will be provided to a new AI invocation, and that second AI will be the one able to provide information to the user.
So instead of attempting to go against the grain of how it is tuned to operate, you should create that second AI call. Provide the function’s return value as a function role message (such as: “phone number stored, you can answer the user query now”) appended to a resubmission of the original contents, and the conversation can continue.
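A minimal sketch of that second-call pattern, assuming the legacy openai<1.0 ChatCompletion API (which matches the error message above); notify_salesman_result stands in for the JSON string returned by the poster’s function, and the function name is a placeholder:

import openai

def reply_after_function(messages, functions, first_response, notify_salesman_result):
    # Keep the assistant message containing the function_call in the
    # history, then append a function-role message carrying the result.
    messages.append(first_response["choices"][0]["message"])
    messages.append({
        "role": "function",
        "name": "notify_salesman",
        "content": notify_salesman_result,
    })
    # Second call: the model now has the function result available and
    # will produce a normal text reply in the "content" field.
    second = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        functions=functions,
    )
    return second["choices"][0]["message"]["content"]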
Yeah, that is what I figured… I finally got it to post a summary and continue the discussion uninterrupted. These function calls seem like a mystery to me; I would like to set a return statement.
Did you end up using the second-call technique, providing a function role message as though it was returned from your function API?
When you say “like to set a return statement”, you mean you want to have the AI respond in a particular way after function invocation? You can probably do that with prompting techniques in the function role message:
{
    "role": "function",
    "name": "lead_generation",
    "content": "Status: Success. AI action: Repeat this exact phrase back to the user as a reply before continuing 'I took your phone number and passed it on to a pushy salesman. Expect to be bothered.'"
}
Since you are developing a conversational bot, I’m wondering if you send the whole chat history to the LLM as context to trigger functions, or only the latest user input message? Do you end up appending a function message to the chat history? Thanks!
Yes, you would continue to pass conversation history to the AI as normal, not only so that it can know when a function is useful for better answering a question or doing a task in the context of the conversation, but so it can simply continue to carry on a chat otherwise.
The “whole” history is not usually sent in a managed AI application, just the recent portion of the chat that informs the current topic and provides relevant information to continue the task.
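A minimal sketch of that kind of history trimming, assuming the tiktoken tokenizer is installed; the token budget and model name are arbitrary placeholders:

import tiktoken

def trim_history(messages, model="gpt-3.5-turbo-0613", max_tokens=3000):
    """Keep only the most recent messages that fit in a token budget."""
    enc = tiktoken.encoding_for_model(model)
    kept, total = [], 0
    # Walk backwards from the newest message and stop once the budget
    # would be exceeded; a system prompt can be re-prepended separately.
    for msg in reversed(messages):
        n = len(enc.encode(msg.get("content") or ""))
        if total + n > max_tokens:
            break
        kept.append(msg)
        total += n
    return list(reversed(kept))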
Thank you so much for your reply! I’ve encountered an issue where, if a user inputs A, it activates function X. As the conversation continues, function X may be erroneously triggered again, potentially due to the presence of input A in the chat history.
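One possible mitigation, sketched below under the assumption of the legacy openai<1.0 API: once the function has fired, pass function_call="none" so the model cannot call it again even though input A remains in the history. The lead_already_captured flag is a hypothetical piece of your own application state:

import openai

# lead_already_captured is hypothetical state recording whether the
# function has already fired for this conversation.
function_call_mode = "none" if lead_already_captured else "auto"
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    function_call=function_call_mode,
)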
Btw, for some reason, I can’t add a function message to the chat history, so there are only human and assistant interactions.