How do I convince the model to BOTH write a message and call a function?

I want the gpt-4 (0125) model to BOTH write a message to the user and call a function. I've tried every prompting technique I could think of, but it still only includes the message around 30% of the time; otherwise it returns the function call with no message.

Any idea what I can do? The related part of the prompt is below:

When calling any of these functions, you ALWAYS also start by writing one sentence to the user about why you are using it. This helps you keep track of your reasoning, and helps the user follow along as you proceed. ALWAYS write additional explanation when using a tool or function. This is very important to me. You will be tipped extra if you do it. My grandmother will die if you don't include explanations. I'm not kidding. She's very old and frail. Please help me keep her alive.

Maybe try a one or two-shot example for the model to follow?

How can I do this in the system prompt for the chat API? I can't really give it examples of function-call responses; I don't know how OpenAI represents them internally.

I would first try it as a form of pseudocode. Basically give an example of a user input, a message to the user, then something like: "<beep boop> now I'm calling this function with these parameters. <modem hiss> now I'm getting the results… Based on the previous function call, the answer to your question is…"

Hmm, will try that, but I fear it will then just do the whole thing as a plain text message (with text saying "I'm calling function xxx")… will report back!

And it very well might. You might need to preface the examples with an instruction to replace that bit with actual function calls, but it's worth a shot.

Declare the function by adding an argument which would be, in text, the response sentence to be given to the user.
Then retrieve it :slight_smile:
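
Something like this, roughly; "search_products", "message_to_user", and the two helpers are placeholders, not anything the API requires:

import json

tools = [{
    "type": "function",
    "function": {
        "name": "search_products",  # placeholder function
        "description": "Search the product catalog.",
        "parameters": {
            "type": "object",
            "properties": {
                "message_to_user": {
                    "type": "string",
                    "description": "One sentence telling the user why this function is being called.",
                },
                "query": {"type": "string"},
            },
            "required": ["message_to_user", "query"],
        },
    },
}]

# Later, when the model calls it, pull the sentence out and forward it:
call = response.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)
show_to_user(args.pop("message_to_user"))  # placeholder UI helper
result = run_search(**args)                # placeholder for the real work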


This is a good idea, and would probably work for a static message.

I didn't suggest it because the OP's original instruction was to always write one sentence to the user about why the function is being used, which I assume will vary quite a bit, even for the same function call.

But this is definitely an idea worth trying.

That's plan B - just a lot harder to implement (and get to work with streaming responses) with our current framework/codebase, but if all else fails, it should indeed guarantee that we get the "thought" (as much as you can guarantee anything with LLMs).
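
(The fiddly part with streaming is that the arguments arrive as JSON fragments inside the tool-call deltas, so you have to buffer per tool-call index and only parse once the stream ends. Rough sketch, assuming the official openai Python client:)

import json

buffers = {}  # tool_call index -> accumulated arguments string

stream = client.chat.completions.create(
    model="gpt-4-0125-preview", messages=messages, tools=tools, stream=True
)
for chunk in stream:
    delta = chunk.choices[0].delta
    for tc in delta.tool_calls or []:
        buffers[tc.index] = buffers.get(tc.index, "") + (tc.function.arguments or "")

# only now is each buffer complete JSON you can parse
args_by_call = {i: json.loads(s) for i, s in buffers.items()}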

Oh yes, streaming is more complex…
Then there would be the double function call (2 functions): forcing it to first call the "thought" function and then reinjecting the result…
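
Rough sketch of that idea (the thought() function and model name are just examples; tool_choice is what forces the first call):

first = client.chat.completions.create(
    model="gpt-4-0125-preview",
    messages=messages,
    tools=tools + [{
        "type": "function",
        "function": {
            "name": "thought",
            "description": "One sentence explaining what you will do next.",
            "parameters": {
                "type": "object",
                "properties": {"text": {"type": "string"}},
                "required": ["text"],
            },
        },
    }],
    tool_choice={"type": "function", "function": {"name": "thought"}},
)
# show the thought text to the user, append the call (plus a tool result) to
# messages, then call again with tool_choice="auto" so the model can pick the
# real function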

I don't think you are using this in the way it's intended.

If you receive a function call, you process it, then send the history window and the result back to the LLM to receive a further response which, if not another function call, might be something you then share with the user.
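
Roughly like this (a sketch only: the run_function() helper is yours to implement, and it assumes the official openai Python client plus a tools list like the one sketched earlier):

import json
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "..."}]

while True:
    response = client.chat.completions.create(
        model="gpt-4-0125-preview", messages=messages, tools=tools
    )
    msg = response.choices[0].message
    messages.append(msg)

    if not msg.tool_calls:
        break  # a plain message - this is what you show the user

    for call in msg.tool_calls:
        result = run_function(call.function.name, json.loads(call.function.arguments))
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps(result),
        })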

Yes, the goal here is that for multi-hop answers (e.g. the LLM chooses a function call, I give it the results, then it decides it needs to call another function before answering the user) I want to show progress to the user; otherwise they're just staring at a blank screen for a long time until we get the final answer. It's similar to what ChatGPT does with plugins (sometimes).

It also has the added benefit of serving as a "reasoning" step, which is known to improve LLM performance.


yeah, the delay on the bigger models is definitely an issue.

I don't find this a problem with 3.5 though…

@OP - Did you recently start testing this? Your expected behaviour is what we experienced since going to GPT-4 Turbo. But the past few days we have seen a degradation, and beginning yesterday all function calling stopped working.

The model is now only "SIMULATING" function calls and hallucinating responses, instead of actually making the calls. It's very disturbing.

Two options that you could try to achieve the desired behaviour are:

a) Add a required function param like "message" that the model has to write. You can send the "message" param to the user as the message and use the other params to call the actual function.

b) Use few-shot examples: another post stated that you CAN actually send the called functions as an example to the model by simply taking the tool_calls array from a response where the model did what you want and appending that array to the example message where you want it. A 1-shot history would then look something like this:

[
  {"role": "user", "content": <example user message here>},
  {"role": "assistant", "content": <example response message here>, "tool_calls": <example function calls array here>}
]
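
As far as I know, the API usually wants an assistant message with tool_calls to be followed by matching tool-role results, so a fuller 1-shot history might look more like this (every value below is a placeholder):

few_shot = [
    {"role": "user", "content": "<example user message>"},
    {
        "role": "assistant",
        "content": "<one sentence telling the user what is about to happen>",
        "tool_calls": [{
            "id": "call_example_1",  # any placeholder id
            "type": "function",
            "function": {"name": "search_products",
                         "arguments": "{\"query\": \"<example>\"}"},
        }],
    },
    {"role": "tool", "tool_call_id": "call_example_1",
     "content": "<example function result>"},
    {"role": "assistant", "content": "<example final answer>"},
]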

I started doing this for the 'reasoning' reason :o)

prompt

Input: <something>

Tasks:
Q1: Do this and that and find blah
Q2: Figure out blah
Q3: With what you found previously, do blah.

function

function parameters:
from pydantic import BaseModel, Field

class Output(BaseModel):
    Q1: str = Field(description="Answer to Q1")
    Q2: str = Field(description="Answer to Q2")
    Q3: str = Field(description="Answer to Q3")

Notes: In my case, I only want Q3, but I want the model to produce Q1 and Q2 to steer it towards the right goal.

In your case, Q1 could be something that you retrieve 'via function' but show to your user while you keep doing something else (another call?).
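
A rough sketch of how a model like that can be handed to the API as the function's parameter schema (assuming pydantic v2 - on v1 it would be Output.schema() - and a made-up function name):

tools = [{
    "type": "function",
    "function": {
        "name": "answer_questions",  # placeholder name
        "description": "Record the answers to Q1-Q3.",
        "parameters": Output.model_json_schema(),
    },
}]

response = client.chat.completions.create(
    model="gpt-4-0125-preview",
    messages=messages,
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "answer_questions"}},
)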