Function-call included in content, not in `function_call` response param

Since late yesterday, I’ve noticed that gpt-4-0613 is frequently returning the function call within the message content, and NOT in the `function_call` response param. Example response in message content:

  "functions.aws_command": {
    "question": "What is the CPU utilization of my EC2 instances?"

The function name (aws_command) and the function param (question) are in the correct format. I’ve tried multiple prompt variations without success. I did not see this last week. Curious if anyone else has seen this?
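For reference, a well-formed function call is supposed to arrive in the `function_call` field of the response message, not in `content`. Here is a minimal sketch of checking where the call landed; the response shape follows the Chat Completions API (`choices` → `message` → optional `function_call`), and the helper name is just for illustration:

```python
# Sketch: inspect where the model put the function call.
# Assumes a response dict shaped like the Chat Completions API.

def classify_response(response: dict) -> str:
    """Return 'function_call', 'content', or 'empty' depending on
    where the model's output landed."""
    message = response["choices"][0]["message"]
    if message.get("function_call"):
        return "function_call"   # the expected, well-formed case
    if message.get("content"):
        return "content"         # the buggy case described above
    return "empty"

# Example: the buggy behavior from this thread -- the call text
# shows up in content instead of function_call.
buggy = {
    "choices": [{
        "message": {
            "role": "assistant",
            "content": '"functions.aws_command": {"question": "..."}',
        }
    }]
}
print(classify_response(buggy))  # -> content
```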

Can you show your code where you define the function? It could well be an incorrect function definition causing it.


"functions": [
  {
    "name": "aws_command",
    "description": "Get information from your AWS account by executing a single AWS CLI command. For example: list of EC2 instances, list of S3 buckets, list of CloudWatch log groups, etc.",
    "parameters": {
      "type": "object",
      "properties": {
        "question": {
          "type": "string",
          "description": "A natural language description of the question to answer by running AWS CLI command(s)."
        },
        "context": {
          "type": "string",
          "description": "Any additional context that the AWS CLI may need to answer the question, like instance ids, bucket names, etc."
        }
      },
      "required": ["question"]
    }
  }
]

PS - it seems like this only happens if I include the following in my prompt:

Before providing a final answer or executing a function call, think about a plan to answer the question.

The function description says “A natural language description…”; that is what you are telling the AI it needs to create for the function to work. You don’t want a natural language description, you want an AWS command generated from a natural language description, no?


That is the description for the question argument, not the function. Given a natural language description of a question, the function (which is actually a separate prompt that gets executed) generates an AWS CLI command.

Anyway, after removing the line I mentioned above, everything has worked as expected.

Yes, but it’s still in the function definition, just try omitting that line and see what the results are like.

I think we’re having the same issue.

Did you find a solution to this yet? Any certainties of a cause? Are you asking your bot to ‘reason’?

Odd … I am building an agent application, so I am asking it to reason. After replacing the line below in my prompt, I haven’t seen the issue (probably 100+ API calls). Old line:

Before providing a final answer or executing a function call, think about a plan to answer the question.

New line:

When sharing a plan or observation, limit to 2 sentences, manager-level description (no command names or references to arguments).

Removing the part that tells it to reason helped for me too: clean calls in the last 200+ API calls.
I suppose it’s already reasoning a bit thanks to the function calls.
I still advise it to use function calls, though.

I’m seeing this exact same behavior on very basic prompts that have no instructions for “reasoning” as mentioned earlier in the thread.

Curious if anyone else sees this issue and has a good way of working around it.
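One defensive workaround until the underlying behavior is fixed: detect a leaked call in `content` and try to recover it. A rough sketch follows; the `functions.` prefix and the fragment shape are taken from the example at the top of this thread, and everything else (the regex, the helper name) is an assumption, not a documented format:

```python
import json
import re

# Sketch of a defensive parser: if the model leaks a function call
# into message content (as described in this thread), try to recover
# the function name and arguments from the text.

LEAK_PATTERN = re.compile(
    r'"functions\.(?P<name>\w+)"\s*:\s*(?P<args>\{.*\})', re.DOTALL
)

def recover_function_call(content: str):
    """Return (name, args_dict) if content looks like a leaked
    function call, else None."""
    match = LEAK_PATTERN.search(content)
    if not match:
        return None
    try:
        args = json.loads(match.group("args"))
    except json.JSONDecodeError:
        return None
    return match.group("name"), args

leaked = (
    '"functions.aws_command": '
    '{"question": "What is the CPU utilization of my EC2 instances?"}'
)
print(recover_function_call(leaked))
```

Note that in the leaky responses reported above the JSON fragment may also be truncated (missing closing braces), in which case `json.loads` will fail and this sketch simply gives up.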

Welcome to the forum, yuvadm.

If you have examples of this, please create a new post and if you can, please list some example prompts and responses.

Thank you.

Thanks @Foxabilo! Will clean up my prompt and post a reproducible example shortly.

I don’t know if this is very reasonable, but putting this into the system message helped:

Never print the function_call in the chat. Always use the function_call attribute to call a function.

The AI doesn’t “use the function_call attribute” to put a function call in the right place. The AI generates the language of a function call in a particular style, and the backend is supposed to pick that up and encapsulate it in the `function_call` field of the response message.

It needs to be the only line, or the last line separated by carriage returns.

One of the best ways to break function calling is by referring to the functions themselves. The AI must decide on its own that a function is appropriate to be used to answer a question. If the user input could directly say “pass these parameters to the write_database function”, there would be chaos.

In the single output below, I’ve mind-bent the AI into hallucinating things that aren’t the actual form of function calls, simply by telling it the steps it should follow (as shown earlier in this topic):

Sure! Here’s an explanation of the API calls AI will use for each function, followed by the API function calls for all three functions:

  1. Python Calculation to Calculate the 200th Digit of Pi:

    • AI will use the python API call to execute a Python script that calculates the 200th digit of Pi using a mathematical algorithm.
  2. Python Calculation to Generate Fibonacci Sequence up to 100:

    • AI will use the python API call to execute a Python script that generates the Fibonacci sequence up to the 100th number.
  3. Python Calculation to Calculate the Sine of 3388 Radians:

    • AI will use the python API call to execute a Python script that calculates the sine of 3388 radians using the math library in Python.

Now, let’s make the API function calls for all three functions:

# Calculate the 200th digit of Pi
pi_script = '''
import math

pi = str(math.pi)
digit = pi[201]
'''

# Generate Fibonacci sequence up to 100
fibonacci_script = '''
def fibonacci(n):
    if n <= 0:
        return []
    elif n == 1:
        return [0]
    elif n == 2:
        return [0, 1]
    else:
        sequence = [0, 1]
        while len(sequence) < n:
            next_number = sequence[-1] + sequence[-2]
            sequence.append(next_number)
        return sequence
'''

# Calculate the sine of 3388 radians
sine_script = '''
import math

sine = math.sin(3388)
'''

# API function calls
[
  {"python": {"program": pi_script}},
  {"python": {"program": fibonacci_script}},
  {"python": {"program": sine_script}}
]

Please note that the response may take a few seconds as the calculations are performed.
