Assistant not using function response

When my assistant gets the output from a function with long text, it doesn't use that output to respond to the user. Instead, it refers to it as if the end user could also see the function's response, when it is actually internal. Here's an example:

[
    {
      "role": "assistant",
      "function_call": {
        "name": "answer_question",
        "arguments": {
          "question": "what happened in catalonia during ww2"
        }
      },
      "content": null
    },
    {
      "role": "function",
      "name": "answer_question",
      "content": "During World War II, Catalonia was part of Spain under the rule of Francisco Franco. Although Spain declared neutrality, Catalonia was affected by the conflict. Italy and Germany had some interest in Catalonia before the war, but their attempts to establish a fascist movement failed. Barcelona was bombed by Italian planes supporting the Nationalist side in the Spanish Civil War. Catalan individuals, such as Joan Pujol and Josep Trueta, played roles in the Allied side, with Pujol acting as a double agent and Trueta organizing medical services. Some Catalans also fought on the Soviet side in the Eastern Front."
    },
    {
      "role": "assistant",
      "content": "Thank you for your question about Catalonia during WWII. Now, can you tell me the time period during which World War II took place?"
    }
]

Code snippet to reproduce:

import openai
import json

openai.api_key = "YOUR_API_KEY"


# Example dummy function, hard-coded to return the same answer to the example question
def answer_question(question):
    """Get the answer to the student's question"""
    answer = "During World War II, Catalonia was part of Spain under the rule of Francisco Franco. Although Spain declared neutrality, Catalonia was affected by the conflict. Italy and Germany had some interest in Catalonia before the war, but their attempts to establish a fascist movement failed. Barcelona was bombed by Italian planes supporting the Nationalist side in the Spanish Civil War. Catalan individuals, such as Joan Pujol and Josep Trueta, played roles in the Allied side, with Pujol acting as a double agent and Trueta organizing medical services. Some Catalans also fought on the Soviet side in the Eastern Front."
    return answer

def run_conversation():
    messages = [
        {
            "role": "system",
            "content": "I want you to act as a history teacher who has to ask some questions to his students.\nQuestions: \n- When did WWII take place?\n- Which countries were the principal belligerents?\nYour primary focus is asking the provided questions but if the student asks any history questions, you will try to answer them using the provided functions.\n"
        },
        {
            "role": "assistant",
            "content": "Can I ask you some history questions?"
        },
        {
            "role": "user",
            "content": "what happened in Catalonia during WW2"
        }
    ]
    functions = [
        {
            "name": "answer_question",
            "description": "Answers a question from the student that is outside the context of the agent's prompt",
            "parameters": {
                "type": "object",
                "properties": {
                    "question": {
                        "description": "The question the student is asking",
                        "type": "string"
                    }
                },
                "required": ["question"]
            }
        }
    ]
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=messages,
        functions=functions,
        function_call="auto",
    )
    response_message = response["choices"][0]["message"]

    if response_message.get("function_call"):
        available_functions = {
            "answer_question": answer_question,
        }
        function_name = response_message["function_call"]["name"]
        function_to_call = available_functions[function_name]
        function_args = json.loads(response_message["function_call"]["arguments"])
        function_response = function_to_call(
            question=function_args.get("question"),
        )

        messages.append(response_message)
        messages.append(
            {
                "role": "function",
                "name": function_name,
                "content": function_response,
            }
        )
        second_response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=messages,
        )
        return second_response


print(run_conversation())

It's hard to tell what you're even attempting to do. Do you want to make a chatbot that quizzes users? Produce a quiz? Evaluate a function's ability to provide answers?

You've confused the AI by putting a backwards system prompt and a nonsensical AI response in your request.

Bad system message:

I want you to act as a history teacher who has to ask some questions to his students.
Questions:

  • When did WWII take place?
  • Which countries were the principal belligerents?

Your primary focus is asking the provided questions but if the student asks any history questions, you will try to answer them using the provided functions.

Bad assistant message that would appear to be from the AI:

Can I ask you some history questions?

Then the user asks a question:

what happened in Catalonia during WW2

That is not how a chatbot is programmed, nor how one chats.



You should offer a much better function name and function description of what that API tool actually does, and the method it uses to retrieve augmented data. This will ensure the AI even knows when to use it.


Then there is a specific format for the additional roles you insert when returning the function result to the API, and that follow-up request should again include all of the functions.
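For what it's worth, a minimal sketch of that message sequence under the legacy function-calling format (the system/user content here is abbreviated from the thread, and the commented follow-up call is illustrative):

```python
import json

# Sketch of the message sequence the API expects when a function result
# is sent back (role names per the legacy function-calling format).
messages = [
    {"role": "system", "content": "You are a history teacher..."},
    {"role": "user", "content": "what happened in Catalonia during WW2"},
    # 1. The assistant's function_call message, appended back verbatim:
    {
        "role": "assistant",
        "content": None,
        "function_call": {
            "name": "answer_question",
            "arguments": json.dumps(
                {"question": "what happened in Catalonia during WW2"}
            ),
        },
    },
    # 2. The function result, under role "function" with the matching name:
    {
        "role": "function",
        "name": "answer_question",
        "content": "During World War II, Catalonia was part of Spain...",
    },
]

# 3. The follow-up request should again pass the functions list, e.g.:
#    openai.ChatCompletion.create(model="gpt-4", messages=messages,
#                                 functions=functions, function_call="auto")
```

Note that the snippet in the original post omits `functions` from the second `ChatCompletion.create` call, which prevents the model from calling a function again in its follow-up turn.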

So is it a prompt issue? Or would a better description of the function fix it? My goal is to have an agent that asks the user some questions but is capable of answering the user's questions (by calling a function) whenever the user asks anything.

Also, why is it bad to introduce the first agent message? I can understand that it causes problems when the user directly asks a question, but that is something that can happen, and the agent should be capable of handling that case. I feel like it knows when to use the function, because it calls it when needed, but it is not using the function's response.

And regarding how I added the function's additional role to the conversation, I've actually done exactly what you describe in the linked comment.

Imagine I am a chatbot user and I say hello to the chatbot. What do you expect the AI will do? How will it behave, and what questions will it answer?

How does your function work? What kind of backend powers its data augmentation, and what does it return?

Knowing that, I might be able to help you craft a better system prompt and function that is called appropriately.

If the user says "hello", then the agent should answer her with a chat response but should try to get the answer to the first question, so something like: "Hi, nice to meet you! I want you to answer some questions. Do you know when WWII started?"

But again, if the user, instead of answering the agent's questions, asks a question herself, the agent should be able to respond. The function basically searches for the answer to the question using embedding similarity. Once the user's question is answered, the agent should again ask when WWII started, until it gets a response from the user.
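The embedding-similarity lookup described above can be sketched roughly like this. The corpus embeddings are assumed to be precomputed with the same embedding model as the question; the toy 3-dimensional vectors stand in for real embeddings, and the names here are illustrative, not any particular library's API:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def answer_question(question_embedding, corpus):
    """Return the stored answer whose embedding is closest to the question's.

    `corpus` is a list of (embedding, answer_text) pairs, assumed to be
    precomputed with the same embedding model used for the question.
    """
    best = max(corpus, key=lambda item: cosine_similarity(question_embedding, item[0]))
    return best[1]

# Toy example with 3-dimensional stand-in embeddings:
corpus = [
    ([1.0, 0.0, 0.0], "Answer about Catalonia in WWII"),
    ([0.0, 1.0, 0.0], "Answer about the Spanish Civil War"),
]
print(answer_question([0.9, 0.1, 0.0], corpus))  # -> "Answer about Catalonia in WWII"
```

In a real setup the question embedding would come from an embeddings API call, and the corpus lookup would typically live in a vector store rather than a Python list.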

Hi! I would like to ask how to change this line of code after the recent OpenAI updates.
I mean, after changing

openai.ChatCompletion.create(

to

openai.chat.completions.create(

I’ve got this error:
‘NoneType’ object has no attribute ‘get’

I would appreciate your help!

Have you found a solution? I’m facing the same problem
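The likely cause: in the v1 openai-python SDK, responses are pydantic model objects rather than dicts, so dict-style access like `response_message.get("function_call")` or `response["choices"]` no longer works; you use attribute access instead. A minimal sketch of the access change, using a stand-in object in place of a live API response (the `SimpleNamespace` mock is illustrative, not part of the SDK):

```python
from types import SimpleNamespace

# Stand-in for the message object the v1 SDK returns via
# client.chat.completions.create(...).choices[0].message.
# In v1 it is a model object: attribute access, not dict access.
message = SimpleNamespace(
    role="assistant",
    content=None,
    function_call=SimpleNamespace(
        name="answer_question",
        arguments='{"question": "what happened in catalonia during ww2"}',
    ),
)

# Old (v0.x, dict-style) access -- no longer works in v1:
#   message.get("function_call")        -> AttributeError
#   message["function_call"]["name"]    -> TypeError

# New (v1, attribute-style) access:
if message.function_call is not None:
    function_name = message.function_call.name       # "answer_question"
    raw_arguments = message.function_call.arguments  # JSON string; parse with json.loads
```

If the model did not call a function, `message.function_call` is simply `None`, which is why the old `.get()`-based check has to be replaced with an `is not None` test.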