Few-shot and function calling

FWIW, I found that the method described in Few-shot and function calling - #15 by lucas.godfrey1000 (passing examples as JSON-as-string in user/content) confused the model and would frequently result in it giving me JavaScript rather than JSON.

I had better results when passing examples using the same structure as what the API call returns, e.g.:

[
    {
        "role": "user",
        "content": f"make terms for {ex_text}",
    },
    {
        "role": "assistant",
        "content": None,
        "function_call": {
            "name": FUNC_NAME,
            "arguments": json.dumps(example),
        },
    },
]
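For context, here is roughly how that might look as a complete call. This is a minimal sketch using the legacy (pre-v1) openai Python library; the values of FUNC_NAME, ex_text, and example, and the function schema, are hypothetical stand-ins:

import json
import openai

FUNC_NAME = "make_terms"                                  # hypothetical
ex_text = "photosynthesis"                                # hypothetical example input
example = {"terms": ["chlorophyll", "light reactions"]}   # hypothetical target output

messages = [
    # few-shot pair, shaped exactly like what the API returns
    {"role": "user", "content": f"make terms for {ex_text}"},
    {
        "role": "assistant",
        "content": None,
        "function_call": {"name": FUNC_NAME, "arguments": json.dumps(example)},
    },
    # the real request comes after the few-shot pair
    {"role": "user", "content": "make terms for the water cycle"},
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=[{
        "name": FUNC_NAME,
        "description": "Generate key terms for a topic",
        "parameters": {
            "type": "object",
            "properties": {"terms": {"type": "array", "items": {"type": "string"}}},
            "required": ["terms"],
        },
    }],
    function_call="auto",
)
print(response["choices"][0]["message"])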

Furthermore, the conversation history carries even more context and logic when it also includes the AI’s answer based on that function result:

{
    "role": "user",
    "content": "Post a tweet for me 'AI can affect the outside world!'"
},
{
    "role": "function",
    "name": "twitter_post",
    "content": "Twitter API: message to xxx user account success"
},
{
    "role": "assistant",
    "content": "It's posted, the world should be able to see your message."
},
{
    "role": "user",
    "content": "Help with math. What is sin(333)?"
},
{
    "role": "function",
    "name": "wolfram_alpha",
    "content": "-0.00882 rad"
},
{
    "role": "assistant",
    "content": "In radians, the answer to sin(333) is -0.00882."
},

That will prevent situations where the model loops, asking for the same function call countless times, or doesn’t know how well it answered. It will also be able to build on the context of its prior answers when formulating new responses that ask about them.
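As a rough sketch of that loop (legacy pre-v1 openai library; the twitter_post schema and the run_function dispatcher are hypothetical), appending each function result back into the history as a role-"function" message is what stops the repeat calls:

import json
import openai

functions = [{
    "name": "twitter_post",
    "description": "Post a tweet on the user's behalf",
    "parameters": {
        "type": "object",
        "properties": {"message": {"type": "string"}},
        "required": ["message"],
    },
}]

def run_function(name, arguments):
    # hypothetical dispatcher to your real tools (Twitter, Wolfram Alpha, ...)
    return "Twitter API: message to xxx user account success"

messages = [{"role": "user",
             "content": "Post a tweet for me 'AI can affect the outside world!'"}]

while True:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        functions=functions,
        function_call="auto",
    )
    msg = response["choices"][0]["message"]
    messages.append(msg)
    if not msg.get("function_call"):
        break  # plain assistant reply: the model saw the result and wrapped up
    # feed the result back so the model knows how the call went
    # and doesn't issue the same function call again
    messages.append({
        "role": "function",
        "name": msg["function_call"]["name"],
        "content": run_function(msg["function_call"]["name"],
                                json.loads(msg["function_call"]["arguments"])),
    })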

Well, this is kind of an odd result, but whenever I try to do few-shot learning, I’ve also noticed that the model “forgets” the system prompts and examples.

It’s a bit of a quirk, but I’ve found that if you supply the few-shot assistant-role prompts like @cjmungall suggests, the results can be better. However, they are not better if you actually supply the function, as the model seems to revert to forgetting the few-shot examples.

To get around this, what seems to work is to provide the examples with the assistant responses, but not to provide the actual function for it to use when you call the API. This causes GPT to infer the function from the examples(!). It thus uses the context from the examples to produce the result, without explicitly having the schema.

Because you must provide a function to make a function call at all, leave function_call set to “auto” and provide a null (placeholder) function in the functions argument. If you have been explicit enough, GPT will choose the function from the examples, even though it was not provided as a function.
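A minimal sketch of that trick as I read it (the make_terms examples are hypothetical, and I’m interpreting “null function” as a bare stub that exists only so the API accepts function calling at all):

import json
import openai

few_shot = [
    {"role": "user", "content": "make terms for photosynthesis"},
    {
        "role": "assistant",
        "content": None,
        "function_call": {
            "name": "make_terms",  # deliberately NOT defined in `functions` below
            "arguments": json.dumps({"terms": ["chlorophyll", "light reactions"]}),
        },
    },
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=few_shot + [{"role": "user", "content": "make terms for the water cycle"}],
    # placeholder stub; the real schema is never given, so the model
    # must infer make_terms and its arguments from the examples
    functions=[{"name": "noop", "parameters": {"type": "object", "properties": {}}}],
    function_call="auto",
)
print(response["choices"][0]["message"])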


Few-shot prompting a function-calling model to maximum effect would require showing the AI what it actually produces as output in all cases. For functions, that’s a tricky business, and different from what conversation coherency requires.

user: (do thing you can’t do without a function)
assistant: function_name(json of exact language the AI actually produces)

The AI’s own language for calling functions is not documented or exposed, but it is essentially that, delineated by two carriage returns as the last line. (It can be reverse-engineered by introducing fail scenarios and recognition-breaking instructions.)

assistant: """sure, I can process that using python, let’s see:

obfuscated_python({"name":"python", {"name…"""

But few-shot prompting with gpt-3.5-turbo is pretty pointless. You can include 2000-token writing responses as examples, and it doesn’t affect the snippet you actually get back.