Function Call result mixed into response messages

Hi,

We are using GPT-4 to build an interactive assistant that helps our users narrow down their intent when searching for content on our site.

Occasionally, we are seeing what looks like a function call result being mixed into the text responses, for example:

Thank you for the additional information, *****. It's clear now that you're seeking a digital marketing expert specializing in content marketing and distribution within the fintech domain for a period of 3 months, based in India.

Let's proceed to the next step. I'll summarize your requirements and generate search terms to help you find the right expert. Please hold on a moment. 😊

Assistant to=functions.search_refining code<|im_sep|>{
  "intention": "Seeking Expert",
  "summary": "***** is looking to hire a digital marketing expert specializing in content marketing and distribution within the fintech domain for a period of 3 months. The expert should be based in India.",
  "search_terms": "Digital Marketing Expert Content Marketing Distribution Fintech",
  "country": "India",
  "possible_domain": "Digital Marketing"
}
Based on our conversation, here's the summary and refined search terms:

**Summary:** *****, the *** at *****, is seeking a digital marketing expert specializing in content marketing and distribution within the fintech domain for a period of 3 months. The expert should be based in India.

**Refined Search Terms:** 'Digital Marketing', 'Content Marketing', 'Distribution', 'Fintech', 'Expert'

You can use these terms to search for the right expert on our platform. If you need any further assistance, feel free to ask. Happy searching!

The `Assistant to=functions.search_refining code …` section looks very much like a function call block. However, the `finish_reason` in this case is not `function_call`, so our logic didn't capture the response correctly.

How can I prevent this? Thanks.

Hi and welcome to the Developer Forum!

What temperature is this being done at? Can you post a code snippet of your API call and any setup code it relies upon?

Temperature is set to 0.

Here is the Python logic that detects whether OpenAI has returned a response that allows us to make a function call or is just continuing the conversation.

```python
...
    async with httpx.AsyncClient() as client:
        try:
            response = await client.post(
                url, headers=headers, json=data, timeout=timeout
            )
            response.raise_for_status()
            result = response.json()

            # Capture the text content, total token usage, and finish reason
            resp["content"] = result["choices"][0]["message"]["content"]
            resp["tokens"] = result["usage"]["total_tokens"]
            resp["finish_reason"] = result["choices"][0]["finish_reason"]

            # Only treat the response as a function call when the API
            # reports finish_reason == "function_call"
            if function_mode and resp["finish_reason"] == "function_call":
                function_name, function_args = parse_function_call(
                    result["choices"][0]["message"]["function_call"]
                )
                resp["function_name"] = function_name
                resp["function_args"] = function_args

            return resp

...
```
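One possible stopgap is to scan the returned content for the leaked call block whenever `finish_reason` is not `function_call`. The sketch below is an assumption on my part, not a documented OpenAI format: the regex is reverse-engineered from the leaked text shown above, and the helper name `extract_leaked_function_call` is hypothetical.

```python
import json
import re

# Assumed marker pattern, based on the leaked text in the example above:
#   "to=functions.<name> code<|im_sep|>{ ...json... }"
# This is NOT a documented format and may change without notice.
LEAKED_CALL_RE = re.compile(
    r"to=functions\.(\w+)\s+code(?:<\|im_sep\|>)?\s*(\{.*?\})",
    re.DOTALL,
)

def extract_leaked_function_call(content: str):
    """Return (function_name, args_dict) if the text content contains
    a leaked function-call block, else None."""
    match = LEAKED_CALL_RE.search(content)
    if match is None:
        return None
    name, raw_json = match.groups()
    try:
        args = json.loads(raw_json)
    except json.JSONDecodeError:
        # The leaked block was truncated or malformed; ignore it.
        return None
    return name, args
```

This only papers over the problem on the client side; tightening the system prompt (e.g. instructing the model never to write function calls into its text reply) is still worth trying alongside it.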