Calling a function when the chat wants the user to clarify their question

I want the chat to reply with a function call if it requires the user to clarify their request or if any follow-up input is expected. I am building a voice chat app and need to know whether to continue listening for user input or whether the conversation is finished.
I added a function and described it as "Call this function when you expect the user to clarify their request." I played with the description, trying to find the best wording, but I could hardly force the model to call the function when expected. Sometimes it is called, and sometimes it is not.

I communicate with the model in a non-English language (Belarusian, be-BY).

Here’s the function definition:

{
  "name": "expectInput",
  "parameters": {
    "type": "object",
    "properties": {
      "reply": {
        "type": "string",
        "description": "the question that you want me to clarify"
      }
    },
    "required": [
      "reply"
    ]
  },
  "description": "Call this function if your reply expects me to respond to you"
}

When testing it in the playground via an Assistant, the function is never called along with the model's answer, but if I rerun the Assistant on the conversation, it calls the function after its reply:

Me: ask me anything. [press the Run button]
Chat: How are you today?
[press Run again]
Chat: [function call appears]

Does anyone know a recipe for solving this problem?


Welcome to the dev forum @yan.lobau

What does your current function's JSON spec look like?

Sure, I've updated the initial post.

A general rule for instructing the model on whether or not to call a function is to do that in the system prompt (the instructions for the Assistant), whereas the description in the function definition is better suited to telling the model how to call the function.
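To illustrate the split, here is a minimal sketch of a Chat Completions request body where the *when* lives in the system prompt and the function description only covers the *how*. The model name and prompt wording are assumptions for illustration, not a tested recipe:

```python
# Sketch: the system prompt says WHEN to call expectInput;
# the function description only says HOW (what goes in `reply`).
# "gpt-4o" and the prompt text are assumptions, adjust for your setup.
system_prompt = (
    "You are a voice assistant. Whenever your reply is a question or "
    "otherwise expects the user to answer, you MUST call the "
    "expectInput function instead of replying with plain text."
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "expectInput",
            "description": "Deliver a reply that expects the user to respond.",
            "parameters": {
                "type": "object",
                "properties": {
                    "reply": {
                        "type": "string",
                        "description": "The question you want the user to answer.",
                    }
                },
                "required": ["reply"],
            },
        },
    }
]

# This dict is what you would pass to the Chat Completions endpoint.
request_body = {
    "model": "gpt-4o",  # assumption: any tools-capable model
    "messages": [{"role": "system", "content": system_prompt}],
    "tools": tools,
}

print(request_body["tools"][0]["function"]["name"])  # expectInput
```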


Well, I did that. In some cases, it works as expected. In the simplest case, when I ask, "How are you?" it replies, "I'm OK. And how are you?" without calling the function.
It is probably worth mentioning that I communicate with the model in a non-English language (Belarusian, be-BY).

I imagine it would be easier if you designed your workflow the other way around: every message from the assistant counts as an inquiry for further details from the user, and when the task is done, the assistant must call a special done/submit function. GPT models are inherently conversational, not operational. Otherwise, you could try logit biases on special tokens (I'm not sure which ones) to increase the likelihood of the model calling your function.
