What is the best way to format the output of a function in the conversation?

I have a function that returns a formatted “values card” (something with a bolded title and a description).

I want to show this to the user exactly as it was returned by the function. However, I also call the API with the function response to generate a text message for the user-facing conversation:

  const res = await openai.createChatCompletion({
    model: "gpt-3.5-turbo-0613",
    messages: [
      ...messages,
      {
        role: "function",
        name: func.name,
        content: result, // the formatted values card
      },
    ],
    temperature: 0.7,
    functions,
    stream: true,
  })

The resulting stream botches the formatting and rewrites the card as free text.

What is the best way to steer how the function result is formatted?

I’ve tried:

  • Being explicit about how the result should be handled in the function description.
  • Being explicit about how the result should be handled in the system prompt.

The return value of the function is meant to inform the AI, which is still going to answer the way it wants.

You’ll have to use the function role message just like you’d use a user message to make the AI comply, by putting prompting techniques there. For example:

  • The AI will repeat back every character of this phrase verbatim when answering, without alteration, and while preserving the formatting: """{result}"""
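
A minimal sketch of that technique, assuming the same v3 Node SDK call shown above (`result` holds the formatted card):

  // Put the verbatim-repetition instruction directly in the function
  // role message, wrapped around the card itself.
  const res = await openai.createChatCompletion({
    model: "gpt-3.5-turbo-0613",
    messages: [
      ...messages,
      {
        role: "function",
        name: func.name,
        content: `The AI will repeat back every character of this phrase verbatim when answering, without alteration, and while preserving the formatting: """${result}"""`,
      },
    ],
    temperature: 0.7,
    functions,
    stream: true,
  })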

I ran a test and created a sample function:

{
  name: 'get_country_capital',
  description: 'Get the capital of the given country',
  parameters: {
    type: 'object',
    properties: {
      country: {
        type: 'string',
        description: 'Country name, e.g. Finland, Egypt'
      },
      capital: {
        type: 'string',
        description: 'Capital city, e.g. Helsinki, Cairo'
      }
    },
    required: ['country', 'capital']
  }
}

I call the Chat API with function calling:

const messages = [ { role: 'user', content: 'What is the capital of Moldova?' }]

const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo-0613',
    temperature: 0,
    messages,
    functions,
})

I get the response:

{
  role: 'assistant',
  content: null,
  function_call: {
    name: 'get_country_capital',
    arguments: '{\n  "country": "Moldova",\n  "capital": ""\n}'
  }
}
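
For clarity, `function_call.arguments` arrives as a JSON string, not an object. A minimal sketch of extracting the arguments and answering them locally (assuming the v3 SDK response shape; `lookupCapital` is a hypothetical local helper):

const message = response.data.choices[0].message

// The arguments are a JSON string and must be parsed.
const args = JSON.parse(message.function_call.arguments)

// Hypothetical local implementation of get_country_capital.
const capital = lookupCapital(args.country) // e.g. 'Chisinau'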

I call the Chat API again to summarize; note the system prompt:

const messages = [
  {
    role: 'system',
    content: 'Show the raw result of function calling in JSON format.'
  },
  { role: 'user', content: 'What is the capital of Moldova?' },
  {
    role: 'assistant',
    content: null,
    function_call: {
      name: 'get_country_capital',
      arguments: '{\n  "country": "Moldova",\n  "capital": ""\n}'
    }
  },
  {
    role: 'function',
    name: 'get_country_capital',
    content: '{\n  "country": "Moldova",\n  "capital": ""\n}'
  }
]

const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo-0613',
    temperature: 0.7,
    messages,
    functions,
})

I get the final result

{
  role: 'assistant',
  content: '{\n  "country": "Moldova",\n  "capital": ""\n}'
}

The result of the function call does not contain the answer I need (e.g. Chisinau), but if I append the answer in the last Chat API call, it turns into a conversational response:

{ role: 'assistant', content: 'The capital of Moldova is Chisinau.' }
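
Concretely, the function role message that produces that conversational reply carries the real answer instead of the echoed arguments:

{
  role: 'function',
  name: 'get_country_capital',
  content: '{\n  "country": "Moldova",\n  "capital": "Chisinau"\n}'
}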

If I just send the result as is, it works the way you want.

But if you want to send the result as is to the user, why even call the Chat API again?

The reason for wrapping the result in a regular chat completion call is to make the whole response more conversational, and, as I’ve understood it, having the { role: "function" } message in the history is how the flow is meant to work?

Ideally, I want something like this:

"""
Thanks for articulating your value. Here is the value card that was generated for you:

<the values card, as formatted by the function response>

Does this resonate with you?
"""
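
One way to guarantee the card comes through byte-exact is a client-side splice (an approach I’m assuming here, not one suggested above): have the model produce only the conversational wrapper with a literal placeholder, then substitute the card yourself. Streaming is dropped in this sketch for simplicity:

// Ask the model for the wrapper text only, with a placeholder token.
const res = await openai.createChatCompletion({
  model: 'gpt-3.5-turbo-0613',
  temperature: 0.7,
  messages: [
    ...messages,
    {
      role: 'function',
      name: func.name,
      content:
        'The card was generated successfully. Reply with a short ' +
        'friendly message containing the literal placeholder {{CARD}} ' +
        'where the card should appear. Do not write the card yourself.',
    },
  ],
  functions,
})

// Splice the exact card into the reply client-side.
const reply = res.data.choices[0].message.content.replace('{{CARD}}', result)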

The output language the AI produced is not what you show. Are you training it to not call functions correctly?

(Hint: exclude the ### below, which was used to break API detection.)

      "message": {
        "role": "assistant",
        "content": "###function_name({\n\"property1\": \"1337\"\n})",
        "function_call": {
          "name": "function_name",
          "arguments": "{\n  \"property1\": \"1337\"\n}"
        }
      }

Then, the role message that you return with the answer is not what you show. Rather, this is the return value that informs the AI of the function it called and the answer it received:

{
  "role": "function",
  "name": "python",
  "content": "1.3138602823312367e-08"
}

Finally, you’re telling the AI that the answer to the question is the same as what it asked?

I think, from your examples, the question you did not clearly articulate is: “How should I create and append role messages in the conversation history during and after function calls?”
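
To make that concrete, here is a hedged end-to-end sketch of the intended flow with the v3 Node SDK: the first call returns a function_call, you run the function yourself, append both the assistant message and a function role message carrying the real result, and call the API again (`lookupCapital` is again a hypothetical local helper):

const messages = [{ role: 'user', content: 'What is the capital of Moldova?' }]

// 1. First call: the model decides to call the function.
const first = await openai.createChatCompletion({
  model: 'gpt-3.5-turbo-0613',
  temperature: 0,
  messages,
  functions,
})
const assistantMsg = first.data.choices[0].message

if (assistantMsg.function_call) {
  // 2. Run the function locally with the parsed arguments.
  const args = JSON.parse(assistantMsg.function_call.arguments)
  const capital = lookupCapital(args.country) // e.g. 'Chisinau'

  // 3. Append the assistant function_call message, then a function
  //    role message carrying the real answer.
  messages.push(assistantMsg)
  messages.push({
    role: 'function',
    name: assistantMsg.function_call.name,
    content: JSON.stringify({ country: args.country, capital }),
  })

  // 4. Second call: the model turns the result into conversation.
  const second = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo-0613',
    temperature: 0.7,
    messages,
    functions,
  })
  console.log(second.data.choices[0].message.content)
  // => 'The capital of Moldova is Chisinau.'
}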