The "function" message is the new role for passing the result of a function call back to the model. GPT will then generate an appropriate answer from that result and the user's prompt.
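
A minimal sketch of the flow using the (pre-1.0) openai Python library; the getWeather function and its payloads are made up for illustration:

import openai

messages = [
    {"role": "user", "content": "What's the weather in London?"},
    # The assistant's function_call request, appended back verbatim:
    {"role": "assistant", "content": None,
     "function_call": {"name": "getWeather",
                       "arguments": '{"city": "London"}'}},
    # The new "function" role carries the result of executing the call:
    {"role": "function", "name": "getWeather",
     "content": '{"temp_c": 18.0, "conditions": "partly cloudy"}'},
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613", messages=messages)
print(response["choices"][0]["message"]["content"])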

1 Like

So you’re saying there’s a bug or lower performance when doing it the way OpenAI recommends? Their documentation and examples don’t use the system message for functions anymore.

1 Like

The function definitions are appended to the body of the request and injected into the system message. It is important to describe what the assistant needs to do in the system message. A simple "I'm a helpful and friendly assistant" is not enough.

But I would suggest treating a function call as a one-shot request in the system message, even if you are handling a conversation.

1 Like

I’ve noticed that system messages can affect behavior erratically when they are appended after the user messages. I wonder if the order in which the function message is appended matters, too.

1 Like

@t.haferlach - I’m going to follow this thread - I’m having problems with multiple calls - are you experiencing this? I will be pushing a simple open-source Python wrapper to GitHub soon that allows you to programmatically create your functions and pass them to ChatGPT so you don’t need to write JSON objects. Basically this:

# If you've used SqlClient or OracleClient, this is similar.
# Create your function, then add its parameters.
# Then add your function to the "functions" dictionary object
# (a dictionary is used to allow subsequent function lookup, for
# security, to make sure you're allowed to execute the function).
# The to_json() function turns the dictionary into a list for
# chat completion consumption.
f = function(name="getNews", description="News API function")
f.properties.add(property("module", PropertyType.string, "Python module to call to run the function", True, default="functions.newsapi_org"))
f.properties.add(property("q", PropertyType.string, "Query to return news stories", True))
f.properties.add(property("language", PropertyType.string, "Language of news", True, ["en", "es"], default="en"))
f.properties.add(property("sortBy", PropertyType.string, "Sort-by field", False))
f.properties.add(property("pageSize", PropertyType.integer, "Page size", True))
chatFunctions[f.name] = f

f = function(name="getCurrentDateTime", description="Obtain the current date and time in GMT format")
chatFunctions[f.name] = f

prompt = "What time is it, and find several current news stories on the US economy."
res = oai.callwithfunction(prompt, chatFunctions.to_json())

This works well. However, the current issue I am having is that the request goes into an infinite loop: it requests getCurrentDateTime, then requests getNews successfully, but then requests getCurrentDateTime again, then getNews, and so on, rather than responding with a finish_reason of "stop" (request completed).

Any thoughts would be appreciated!

1 Like

I’m seeing the looping behavior happen quite a lot, especially with the temperature set low.

I tried to add some sentences to the system prompt to tell it not to go into these kinds of feedback loops. Maybe the “presence penalty” parameter could help.

But as I see it, the problem is that once it has generated a response to a function call followed by another function call, it starts treating that as a pattern and continues it blindly.

Still trying to figure this part out myself. As a workaround, one could count the function calls and, after say two of them, manually tell the model not to use a function call.
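
The API’s function_call="none" option can enforce that cap. A rough sketch of the idea, where execute() stands in for your own dispatcher (hypothetical) and the limit of two calls is arbitrary:

import openai

MAX_CALLS = 2  # arbitrary cap on consecutive function calls

def run_with_loop_guard(messages, functions):
    calls = 0
    while True:
        kwargs = {"model": "gpt-3.5-turbo-0613",
                  "messages": messages, "functions": functions}
        if calls >= MAX_CALLS:
            kwargs["function_call"] = "none"  # forbid further function calls
        msg = openai.ChatCompletion.create(**kwargs)["choices"][0]["message"]
        messages.append(msg)
        if not msg.get("function_call"):
            return msg["content"]  # model finished with a plain answer
        calls += 1
        result = execute(msg["function_call"])  # your dispatcher (hypothetical)
        messages.append({"role": "function",
                         "name": msg["function_call"]["name"],
                         "content": result})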

@akansel adding the system message at the end regularly makes it completely forget to respond to the last user message. I wouldn’t recommend it.

2 Likes

@t.haferlach - I thought I was onto something when reviewing the returned “content” - the first time I provide the function list, it seems to mirror back the list (my hope was that it knew which functions it wanted). When I ran the subsequent function, the “content” was null.

HOWEVER, I wanted to test that theory, so I created a dummy function "getFooBar" which should not be executed. Unfortunately, when I executed the call with the functions appended (including a random one that should not be used), I received the following content instead of a function call request:

 "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "To answer the user's question, the following functions are required:\n\n1. `getCurrentTime`: This function will provide the current time.\n\n2. `getNews`: This function will retrieve several current news stories on the US economy.\n\nPlease provide the necessary information to execute these functions."
      },
      "finish_reason": "stop"
    }

While correct, it didn’t return the "function_call" finish_reason that is required. :frowning:

2 Likes

You can change the temperature back to your preference whenever you want: a low temperature only for function calling, and a higher temperature for content creation.
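
Per request, that looks something like this sketch (model name and values are just examples):

import openai

# Assumes `messages` and `functions` were built earlier in the pipeline.
call_resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    temperature=0.0,  # low: deterministic function selection and arguments
)

# ...execute the requested function, append the "function" message, then:
final_resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    temperature=0.8,  # higher: more natural final content
)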

2 Likes

For anyone else confused about executing multiple functions in a chat pipeline, I’ve pushed a repo to GitHub which will hopefully assist you and demystify functions. The sample executes 3 functions (assuming you have a NewsAPI key) and combines the data from all 3.

You can find it here:

https://github.com/seanconnolly2000/openai-functions-wrapper

Hope this helps!

4 Likes

Amazing, checking it out. I’ve also realized I may have been making a mistake in that I forgot to include GPT’s response in the message history that I pass back to it.
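
For anyone hitting the same mistake, a sketch of the ordering that appears to be required; fn_name and fn_result stand in for your own values:

# The assistant's function_call message goes into the history first,
# then the function result; skipping the first step loses the context
# of what was called.
assistant_msg = response["choices"][0]["message"]
messages.append(assistant_msg)
messages.append({"role": "function", "name": fn_name, "content": fn_result})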

2 Likes

@t.haferlach - While the results are better, I still agree with your assertion that results are unreliable.

When I use a “system” role preamble, I can confuse ChatGPT. There are times when part of an answer is stored in the “content” AND a function call is requested (I thought it was supposed to be EITHER content is present, OR content is null and a function_call is present):

For the following prompt and 3 calls (getDogName, getCurrentUTCDateTime, getNews), on the second call part of the answer is returned alongside the function call (see response below):
Prompt:

prompt = "Tell me my dogs name, tell me what time is it in PST, 
and give me some news stories about the US Economy."
 "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Your dog's name is Spot. The current time in PST is 4:14 PM.",
        "function_call": {
          "name": "getNews",
          "arguments": "{\n  \"q\": \"US Economy\",\n  \"language\": \"en\",\n  \"pageSize\": 5,\n  \"sortBy\": \"publishedAt\"\n}"
        }
      },
      "finish_reason": "function_call"
    }

Notice the “content” AND the function_call request…
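
Given that, a handler probably shouldn’t treat the two fields as mutually exclusive. A small sketch, where partial_answers and dispatch() are hypothetical pieces of your own pipeline:

msg = response["choices"][0]["message"]
if msg.get("content"):
    partial_answers.append(msg["content"])  # keep the partial answer
if msg.get("function_call"):
    dispatch(msg["function_call"])  # then honor the function request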

2 Likes

I find this behavior good, IMHO.
My use case was that GPT should give me a Python function (as content) and then test this Python function through a function call for evaluation.
GPT did exactly what you are describing here. For my use case this is exactly what I wanted, and it works. My function call had the following parameters: “functionName”, “args”, and “body”.

So what I finally did was have GPT generate a Python function, print it on screen, and at the same time invoke my function call with the generated Python function as a request for evaluation. I then evaluated the function for correctness and gave back the result that the function GPT gave me passes the tests.
This is a kind of in-context learning, which helps improve its answers in the follow-up conversation. I’m also thinking of implementing some self-reflection and the tree-of-thought process with this.
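
A sketch of what such a function definition might look like; only the three parameter names come from the post above, everything else is assumed:

evaluate_function = {
    "name": "evaluatePythonFunction",  # hypothetical name
    "description": "Submit a generated Python function for test evaluation",
    "parameters": {
        "type": "object",
        "properties": {
            "functionName": {"type": "string",
                             "description": "Name of the generated function"},
            "args": {"type": "string",
                     "description": "Arguments to test it with"},
            "body": {"type": "string",
                     "description": "Full source of the generated function"},
        },
        "required": ["functionName", "body"],
    },
}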

1 Like

Interesting suggestion, @PriNova - I’ll look to modify my code to capture any “content” that occurs earlier in the function chaining. I’m actually becoming impressed with the function_call ability - I now have 4 functions chained - getCurrentUTCDateTime (live), getDogsName (silly), getNews (live), and getWeather (live). In my project, I now have a prompt which is answered successfully:

"What is my dog's name, tell me what time is it in PST, what
 is the weather like in London, and what sightseeing activities 
would you recommend for London this time of year?  Also please 
give me 5 articles on the US Economy from the last week.

The response is:

Your dog’s name is Rover.

The current time in PST (Pacific Standard Time) is 6:49 PM.

The weather in London is partly cloudy with a temperature of 18.0°C (64.4°F).

Here are 5 articles on the US Economy from the last week:

  1. Title: “On-Orbit Satellite Servicing, New Crew Capsules and Artificial Gravity: NASA’s Latest Tech Initiative”
    Description: “A new Blue Origin crewed spacecraft is in the works as part of a NASA collaboration designed to advance the orbital economy, with the space agency lending its expertise to seven different commercial partners.”

  2. Title: “Federal Reserve officials announce pause in US interest-rate hikes”
    Description: “Even with the pause, Fed officials suggest further increases may come depending on how close the economy gets to the 2% inflation target. US Federal Reserve officials have announced a pause in interest-rate hikes, leaving rates at 5% to 5.25% after more than a year of…”

  3. Title: “Forget de-dollarization - Argentina may adopt the greenback as its currency and abandon the peso”
    Description: “If Argentina adopts the greenback, it would become the biggest economy so far to dollarize, according to Bloomberg.”

  4. Title: “The US economy is entering expansion, not recession, and investors fear they are missing out, Fundstrat’s Tom Lee says”
    Description: “The US is entering a phase of economic expansion and not a recession, which means the stock-market rally will become more broad-based, Fundstrat’s Tom Lee has said.”

  5. Title: “China’s economy is way more screwed than anyone thought”
    Description: “Wall Street’s dream of a big Chinese boom, post-COVID reopening, has officially gone bust.”

For sightseeing activities in London this time of year, some recommendations would be visiting the Tower of London, taking a boat tour on the River Thames, exploring the British Museum, walking in Hyde Park, and visiting Buckingham Palace.

3 Likes

If anyone is interested, I’ve also added Pinecone integration as a function call!

EDIT: OK, now I’ve added sendEmail SendGrid functionality. This is becoming a beast! ChatGPT actually sent an email on my behalf.

4 Likes

One thing I’ve noticed is that if the user message content contains a JSON object, sometimes the model will incorporate bits of that object into the returned function call - sort of a mashup of the argument schema with the implied schema from the JSON object in the user message content.

In my case, the user message content contained a JSON-encoded dictionary at the end of the user prompt. Pre-0613, I found that this improved the reliability of processing; somehow having it in that structured format really helped when you were trying to get the model to behave a little more predictably. Now, though, I’m having to rework a few of these prompts if I want them to work properly with function calls. It’s not really a problem, it just means I have to use something other than JSON (changing that one thing does wonders).
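
For instance, a sketch of swapping the JSON blob for plain key/value lines (all names here are illustrative):

question = "Summarize this customer's open tickets."
context = {"customer_id": "12345", "region": "EMEA"}  # formerly JSON-encoded
context_block = "\n".join(f"{k}: {v}" for k, v in context.items())
user_content = f"{question}\n\nContext:\n{context_block}"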

Also observed here. I saw a workaround where someone tricked it by defining a “multifunction” that takes multiple functions as arguments.

E.g.: search_for_user_and_email_them
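
A sketch of such a bundled definition (the schema details are assumptions, not the original poster’s code):

multi_function = {
    "name": "search_for_user_and_email_them",
    "description": "Search for a user by name, then send them an email",
    "parameters": {
        "type": "object",
        "properties": {
            "search_query": {"type": "string"},
            "email_subject": {"type": "string"},
            "email_body": {"type": "string"},
        },
        "required": ["search_query", "email_subject", "email_body"],
    },
}

Your own code then performs both steps when this single call comes back.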

It is quite annoying and costly; I hope it gets fixed. Prior to functions, I simulated this using !command as a special token, and GPT-4 was able to generate multiple commands in one go. The limitation appears to be arbitrary.

I don’t think this is a problem in practice; at Discourse we simply render something like this to the user.

Seen this as well, but I would blame myself more than it :slight_smile:

A very “obvious in retrospect” issue is that the function result prompt MUST have the function args in it, otherwise the model has no idea what it called:

For example, if you find no results, do not use this as the body:

[]

Instead, use:

{
    "results" : [],
    "query": "the man on the moon"
}

That way the model is very unlikely to go into a loop, searching for “the man on the moon” over and over.
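
In code, that might look like this sketch, echoing the model’s own arguments back into the result body:

import json

def function_result_body(arguments_json, results):
    args = json.loads(arguments_json)  # the arguments the model sent
    return json.dumps({"results": results,
                       "query": args.get("query")})  # echo the query back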

Additionally, I find that adding a little bit of extra guidance in the system prompt helps a fair bit.

1 Like

Question: Is this for a plugin or for API development? If it’s for a plugin, have you also updated your “description_for_model” in the JSON manifest?

JSON manifest: note that if your plugin is published, modifying the JSON manifest can cause it to be removed from the plugin store, which is not ideal. However, I know the “description_for_model” field can affect the output behavior, so as a potential troubleshooting step I would explain in that section how the added function calling works, so that ChatGPT is aligned and doesn’t create a conflict with the code in your main program files.

For example, I am able to do multiple API calls from my plugin (albeit all from the same API), with no native function calling used in my app, so long as all the requests are made in a single prompt (see here: https://chat.openai.com/c/fd9e1299-3cae-4b58-bbc7-fb759af2a061)

Another potential tip: if you have a plugin, ask ChatGPT (with the plugin loaded) how your plugin works, where "name_for_model": "your plugin name" is set in your JSON manifest; it should respond by reading from your “description_for_model” field with a high-level overview. If you haven’t built out that description sufficiently, you can paste your main program code into ChatGPT, ask it to summarize the code as a description for the model, and then paste the result into the above-mentioned “description_for_model” field.

I’ve found this can help as a sort of fine-tuning of the output behavior, though there may be better ways to achieve that, since the JSON manifest is not something you can change easily without having the plugin removed from the store once it is live (but it can obviously be experimented with in development mode). Hope that helps! Cheers

1 Like

@t.haferlach did you or anyone else ever figure out how to make it dynamically be able to do more than one call?

I have had no trouble getting GPT to request multiple function calls in the same message, as long as I specifically tell it to do so within my system message prompt, for example:
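
For illustration only (this wording is mine, not the poster’s), such a system prompt might read:

system_prompt = (
    "You have access to the provided functions. If answering requires "
    "several of them, call them one at a time until you have everything "
    "you need, and only then reply to the user in plain text."
)
messages = [{"role": "system", "content": system_prompt}]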

I’ll be preparing a repository shortly, showing what we do in our products. So far, I’ve managed to get 11 unique and useful function calls in the same message. The only time I’ve gotten more calls is when it’s stuck in a loop.

I’ve also found that appending the response verbatim is very expensive in tokens, so I recommend using a continual summarizer.
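
A sketch of that summarizing step, compressing each function result with a cheap model call before it enters the history:

import openai

def summarize_result(raw_result):
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0.0,
        messages=[{"role": "user",
                   "content": "Summarize this API response, keeping only "
                              "the facts needed to answer the user:\n"
                              + raw_result}])
    return resp["choices"][0]["message"]["content"]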

2 Likes

I was having the same issue, but it turned out to be because I had an errant user message in between the function messages.