How to re-create ChatGPT's ability to output text before function calls?

Hey people, I’m writing a chatbot, but I’m having trouble figuring out how ChatGPT does it where, rather than “just” a function call, it will produce some text as well, like “Sure I can help with that, …” → ((function call)), within the same message~

With the API, if a user asks for a task, it directly makes the function call.

The typical flow is:

  1. User makes a query
  2. Model reviews the query and decides to use a function call to answer it
  3. Model returns function stop reason along with function parameters in JSON
  4. User code executes function call and generates appropriate content
  5. Newly generated content and the function call data are inserted into the message structure so that the model now has the requested function call response
  6. Model generates a text reply that incorporates the function data to fully answer the original query
  7. User is shown the new model response

So this is 2 API calls: one to process the function call and another to pass the result of that function call back to the model, to then be turned into a proper reply to the user.
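
In code, the whole round trip looks roughly like this (a minimal sketch using the pre-v1 openai Python library; `get_weather` and `my_weather_lookup` are made-up stand-ins, not anything from the API):

import json
import openai

# Made-up example function definition
functions = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# API call 1: the model decides to call the function and returns its arguments
first = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613", messages=messages, functions=functions
)
call = first.choices[0].message["function_call"]
args = json.loads(call["arguments"])
result = my_weather_lookup(args["city"])  # hypothetical: your own code runs the function

# API call 2: feed the result back so the model can write the actual reply
messages.append(first.choices[0].message)
messages.append({"role": "function", "name": call["name"], "content": json.dumps(result)})
second = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613", messages=messages, functions=functions
)
print(second.choices[0].message["content"])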

This makes sense for making the model generate text after the call, but in ChatGPT I see it output text before the function call is made (?)

That is a plugin and not a function call; the two are separate entities. You have posted this question in the OpenAI developer forum under the GPT-4 and API topic headers; did you wish to ask about plugins instead?

I was under the impression that plugins leverage function calling, but if you’re telling me they don’t and it’s impossible then I will have to give up

I am sure under the hood this is the case, but the API does not expose plugin features, and plugins are ChatGPT functionality extensions and so have limited scope. I guess it depends on what you are proposing to do.

Yeah, I’m just wondering what the trick/pattern is; the model already knows it’s about to call a function when it begins to output text before the function call~

I was hoping there was a straightforward way, but…

I think I could puzzle something together where, rather than following through with the function call, I first make a special prompt to ask the AI to write something small in preparation for the function call (?)

but it would be really useful to know if someone has experience with this pattern and maybe knows a good way to go about doing it

I think internally there are multiple calls going on, and you get to see the streaming result of one of those additional calls when you click on “show work”, so to the user it looks like something special is happening when in reality I think it’s just an additional API call being made “for free”.

It is also going to help if your “plugin-function” is limited in scope, as you will typically know what prerequisites are needed, and so most of it can be pre-computed/filled in.

For text after a function call, just making an extra request at the end works perfectly, e.g. if my function outputs a url and I then make a second request, the bot will go “Here’s the url from the function call: http…”.

But I’m still a bit puzzled by ChatGPT’s behaviour before the call itself, because it writes as if it hasn’t made the function call yet but knows that it’s going to do it in the next request,

without something messy like what I mentioned in the reply before, i.e. having a special prompt for this (like “You’re about to call ‘function_name’, write something to prepare the user”).
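
(The messy version would look something like this, reusing the `messages` / `call` variables from the sketch upthread; the prompt wording is just illustrative.)

# Hypothetical: after receiving the function_call but before executing it,
# make a separate request for a short user-facing preamble
preamble = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages + [{
        "role": "system",
        "content": "You are about to call '" + call["name"] + "'. "
                    "Write one short sentence telling the user what you are doing.",
    }],
)
print(preamble.choices[0].message["content"])  # e.g. a short "let me look that up" line
# ...then execute the function and make the follow-up request as usual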

Yep, I think the missing lynchpin for all of this is the ability to return both a function call response and a text response. This is basically disabled in the API call, but that need not be the case for OpenAI’s internal workings. Forced separation of functionality? Not sure, but to my mind that seems to be the limiting factor.

“disable”? No, more like “techniques not documented”:

  "message": {
    "role": "assistant",
    "content": "Sure! Here's a description of each task:\n\n1. Python Calculate 200th Digit of Pi:\nTo calculate the 200th digit of Pi, you can use a mathematical formula or a library like `mpmath` in Python. One popular algorithm for calculating Pi is the Bailey-Borwein-Plouffe (BBP) formula. It allows you to calculate individual digits of Pi without calculating all the preceding digits. Here's an example of how you can calculate the 200th digit of Pi using the BBP formula:\n\n```python\nfrom mpmath import mp\n\n# Set the precision to a sufficient number of decimal places\nmp.dps = 200\n\n# Calculate the 200th digit of Pi\npi = mp.pi\ndigit_200 = int(str(pi)[201])\n\nprint(digit_200)\n```\n\n2. Python Fibonacci to 100:\nThe Fibonacci sequence is a series of numbers in which each number is the sum of the two preceding ones. To generate the Fibonacci sequence up to the 100th number, you can use a loop or recursion. Here's an example using a loop:\n\n```python\ndef fibonacci(n):\n    fib_sequence = [0, 1]  # Initialize the sequence with the first two numbers\n\n    for i in range(2, n):\n        fib_sequence.append(fib_sequence[i-1] + fib_sequence[i-2])\n\n    return fib_sequence\n\nfibonacci_sequence = fibonacci(100)\nprint(fibonacci_sequence)\n```\n\n3. Python sin(3388 radians):\nTo calculate the sine of an angle in radians, you can use the `math` module in Python. The `math.sin()` function takes an angle in radians and returns its sine value. Here's an example of calculating the sine of 3388 radians:\n\n```python\nimport math\n\nangle_radians = 3388\nsin_value = math.sin(angle_radians)\n\nprint(sin_value)\n```\n\nNow, let's perform each task one by one.",
    "function_call": {
      "name": "python",
      "arguments": "{\n  \"program\": \"from mpmath import mp\\n\\nmp.dps = 200\\n\\npi = mp.pi\\ndigit_200 = int(str(pi)[201])\\ndigit_200\"\n}"
    }
  },
  "finish_reason": "function_call"
}

],

(and better quality when I remember to throw in a top_p=0.3):

  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "1. To calculate the 200th digit of pi using Python, we can utilize the `mpmath` library, which provides a high-precision arithmetic module. Here's how you can perform this calculation:\n\n```python\nfrom mpmath import mp\n\n# Set the precision to a sufficient number of decimal places\nmp.dps = 200\n\n# Calculate pi to the desired precision\npi = mp.pi\n\n# Extract the 200th digit\ndigit = int(str(pi)[201])\n\n# Print the result\nprint(digit)\n```\n\n2. To generate the Fibonacci sequence up to the 100th number using Python, you can use a simple iterative approach. Here's an example:\n\n```python\ndef fibonacci(n):\n    fib_sequence = [0, 1]  # Initialize the sequence with the first two numbers\n\n    for i in range(2, n):\n        fib_sequence.append(fib_sequence[i-1] + fib_sequence[i-2])\n\n    return fib_sequence\n\nfibonacci_sequence = fibonacci(100)\nprint(fibonacci_sequence)\n```\n\n3. To calculate the sine of 3388 radians using Python, you can utilize the `math` module, which provides various mathematical functions. Here's an example:\n\n```python\nimport math\n\nangle = 3388  # Angle in radians\nsine = math.sin(angle)\n\nprint(sine)\n```\n\nNow, let's perform these calculations one by one.",
        "function_call": {
          "name": "python",
          "arguments": "# Calculate the 200th digit of pi\nfrom mpmath import mp\n\n# Set the precision to a sufficient number of decimal places\nmp.dps = 200\n\n# Calculate pi to the desired precision\npi = mp.pi\n\n# Extract the 200th digit\ndigit = int(str(pi)[201])\n\ndigit"
        }
      },
      "finish_reason": "function_call"
    }

:smile: You know me, if it’s not in the docs it’s not there.

If I were building a product in the next 3 weeks, I think I’d find other tasks to do until DevDay; I think there will be lots of interesting things announced and released then that may impact projects.

You didn’t have to do anything special for this?

foreach ($api_response->choices as $choice) {
	$api_message = $choice->message;

	// Relay any plain assistant text (it can appear alongside a function call)
	if (isset($api_message->content)) {
		$this->responder->message($api_message->content, $api_response->usage->completionTokens);
	}
	// Check the API message here, not the original user $message
	if (isset($api_message->functionCall)) {
		$this->responder->functionCall(json_encode($api_message->toArray()), $api_response->usage->completionTokens);
		$this->handleRouter($api_message->functionCall, $message, $api_response->usage->completionTokens);
	}
}

This is what I’m doing, but I have so far not seen any content message when it’s decided to use a function call~ possibly a bug in the library I’m using then (?)

Oh, yes, I had to do “special”. I get the function-calling AI model by using an unrelated function. Then I completely usurp the function mechanism by providing my own system message, with injection in the same form as how the AI receives language from an API function. However, I don’t describe a JSON specification the way the API’s typescript facsimile limits you to, so I also don’t get JSON back (except for the first fluke, where the pretuning caused hallucinated JSON).

However, you can just be persistent in your system description and post-prompt injection that the AI must present proposed actions to the user for approval before invoking any API function tool.
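
For example, a system message along these lines (the wording here is illustrative, not a tested recipe):

# Hypothetical system message nudging the model to narrate before acting
system_message = {
    "role": "system",
    "content": (
        "You are a helpful assistant with access to function tools. "
        "Before invoking any function, first tell the user in plain text "
        "what you are about to do and why; only then emit the function call."
    ),
}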

The question for the developer day: “why have you seemingly specifically targeted and degraded the quality of the 3.5-turbo models so they can’t follow sophisticated system instructions any more? Is it specifically so that each single feature for which ChatGPT Plus requires GPT-4 can no longer be easily replicated by developers using a competent GPT-3.5? When do you expect to apologize, recant, and give back a checkpoint model actually from June?”

Ok. I also build & inject my own typescript definitions, but I get properly formatted function calls back from the model…
I’m also a bit confused because you showed what looks like a proper response (?)
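
(For anyone curious, building such an injection could look roughly like this; the typescript facsimile format is community reverse-engineered, not anything official.)

def render_typescript_defs(functions):
    # Render function specs in the typescript-like facsimile format
    # (reverse-engineered from ChatGPT, not an official specification)
    lines = ["namespace functions {", ""]
    for fn in functions:
        lines.append("// " + fn["description"])
        props = fn["parameters"]["properties"]
        fields = ", ".join(name + ": " + spec["type"] for name, spec in props.items())
        lines.append("type " + fn["name"] + " = (_: { " + fields + " }) => any;")
        lines.append("")
    lines.append("} // namespace functions")
    return "\n".join(lines)

# e.g. appended to the system message:
# system_prompt += "\n\n# Tools\n\n## functions\n\n" + render_typescript_defs(functions)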

It also just occurred to me that I can simply add an extra property to all my functions, like a “message_to_user_before_function_execute” type string~ (though that wouldn’t emulate ChatGPT’s behaviour perfectly)
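
Something like this in the schema, so the model fills in the preamble as just another argument (the property name is hypothetical):

# Hypothetical: bake the user-facing preamble into the function's own schema
python_function = {
    "name": "python",
    "description": "Run a Python program",
    "parameters": {
        "type": "object",
        "properties": {
            "message_to_user_before_function_execute": {
                "type": "string",
                "description": "Short text to show the user before the function runs",
            },
            "program": {"type": "string"},
        },
        "required": ["message_to_user_before_function_execute", "program"],
    },
}
# client code would print that field first, then execute "program"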

Use streaming and you will find that sometimes GPT responds with both assistant content and a function_call in the same request. You could probably prompt-engineer it into doing so more consistently with some clever system messages, but I see this happen somewhat arbitrarily. Of course, once you get the function_call and process it, it then requires another request to pump in the output and get the final response.
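
e.g. a sketch with the pre-v1 Python library (reusing `messages` / `functions` from the sketch upthread):

# Stream the response and watch for both text and function_call fragments
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    stream=True,
)

name_buf, args_buf = "", ""
for chunk in response:
    delta = chunk.choices[0].delta
    if delta.get("content"):          # plain assistant text: stream it to the user
        print(delta["content"], end="", flush=True)
    if delta.get("function_call"):    # name/arguments arrive in fragments
        name_buf += delta["function_call"].get("name", "")
        args_buf += delta["function_call"].get("arguments", "")
# if finish_reason was "function_call", name_buf/args_buf now hold the full call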

Wait, this really confuses me, because I had thought this to be true and it was you who showed me it wasn’t here. Did something change or am I misunderstanding?

My original point was that the current generation of LLMs take tokens as input and generate tokens as output; they do not perform classical computational logic on the input. It’s not a traditional programming language, and if you primarily consider the output to be words then you have a better understanding of what’s going on. Functions formalise this output into JSON format, but it’s output generated by a language model, not a compiler.

I’m not following. In this thread, you wrote:

My point is that this does not appear to be disabled in the API, and that you even showed in the other thread I linked to how both a function call response and a text response can be returned together. Not sure what I’m missing here.

I think you may have confused me with someone else; I checked the link you provided, and at no point do I demonstrate obtaining both function calls and text.

There are some non-standard, specific prompts that mess with standard functionality to achieve both, but they are not approved, supported, or recommended.