Converting a ReAct prompt to use function-calling?

Hey folks,

I have an existing ReAct prompt that references a couple of tools and works (for the most part!). With the release of the function-calling API, I’m interested in seeing if I can use this instead. I’d like to have a more structured completion format.

I’m curious if anyone else has migrated from a ReAct prompt to function-calling. I’m getting decent results converting to a function-calling-based prompt, but I'm curious whether anyone farther along than me has tips.

For context, the application I’m working on (LogPal.ai) lets me chat in natural language with application log files, running analysis via SQL queries and generating charts w/Javascript.

Here’s a diff of how I’ve changed the prompt … so far, it feels rough (still has some references to action / action_input which are now really function name / arguments) but seems to be working:

Answer the following questions as best you can.
 
 If the user demands a final answer but you are unsure, execute the thought/action/action input/observation sequence.  
 
-You have access to the following actions:
-
-Query: useful for when you need information about the data in the logs. For example: type of data, requests, errors, performance, response times, and stuff related to application performance in general
-Chart: useful when you need to generate a chart to answer a question about information contained within log files. Always use this if the user's question contains the word "chart".
-
 Description of the data in the logs:
 %%LOG_DESCRIPTION%%
 
-Use the following format (in correct JSON format), including the markdown code ``` backticks. Be concise: do not return text outside of the ```json block.
+If you know the final answer, return an answer in the following format (in correct JSON format), including the markdown code ``` backticks. Be concise: do not return text outside of the ```json block.
 
 ```json
 {
     "question": "the input question you must answer",
     "thought": "you should always think about what to do",
-    "action": "the action to take, should be one from this list: [QueryDirective, Chart]",
-    "action_input": "the input to the action. use natural language not code.",
-    "observation": "the result of the action. DO NOT INCLUDE AN observation IF YOU KNOW THE ANSWER!",
+    "observation": "the result of a function_call. DO NOT INCLUDE AN observation IF YOU KNOW THE ANSWER!",
 
-    ... (this Thought/Action/Action Input/Observation sequence can repeat N times) ...
+    ... (this Thought/Observation sequence can repeat N times) ...
 
     "thought": "I know the final answer",
     "final_answer": "the final answer to the original input question. DO NOT FORGET THIS LINE IF YOU KNOW THE ANSWER!"

Also, a couple of gotchas I ran into:

  1. If you are sending the result of function calls as messages, these should have `"role": "function"` and `"name": "<function name>"` values. Without these, I’d get stuck in a loop.
  2. When streaming, it appears that the first message contains valid JSON (ex: {"name"=>"query", "arguments"=>""}) and then the deltas look something like {"name"=>"query", "arguments"=>"{\n"}. So, I merge those arguments in as they come through but keep the rest of the Hash the same.

This is with gpt-4-0613.
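In code, handling those two gotchas looks roughly like this (a sketch against the openai Python package's ChatCompletion API; the helper names are just illustrative, not my actual code):

```python
# A rough sketch of both gotchas, assuming the openai Python package (0.x, ChatCompletion API).
import json
import openai

# Gotcha 1: send each function result back as its own message, with role="function"
# and the function's name -- without both fields the model tended to loop.
def append_function_result(messages, name, result):
    messages.append({
        "role": "function",
        "name": name,
        "content": json.dumps(result),
    })

# Gotcha 2: when streaming, the function name shows up once and the "arguments"
# JSON string arrives as fragments, so accumulate the fragments as they stream in.
def stream_function_call(messages, functions):
    call = {"name": None, "arguments": ""}
    for chunk in openai.ChatCompletion.create(
        model="gpt-4-0613",
        messages=messages,
        functions=functions,
        stream=True,
    ):
        delta = chunk["choices"][0]["delta"]
        fc = delta.get("function_call")
        if fc:
            if fc.get("name"):
                call["name"] = fc["name"]
            call["arguments"] += fc.get("arguments", "")
    return call["name"], json.loads(call["arguments"])
```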


You might want to create the right prompts with the help of the API.

Try something like:

You are a Programmer and you need to tell a nontechnical project manager how to use the following API/function, etc…


Thanks! I may have been over-thinking things. This seems to do it:

Answer the following questions as best you can. 

If the user demands a final answer but you are unsure, consider executing a function call to gather more information.  

The result of a function call will be added to the conversation history as an observation.

Description of the data in the logs:
%%LOG_DESCRIPTION%%
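The old actions then just become entries in the `functions` parameter, roughly like this (a sketch; the descriptions are lifted from my original prompt, but the parameter schema here is only illustrative, not my real one):

```python
# Sketch: the old ReAct actions become function definitions passed alongside the system prompt.
import openai

system_prompt = "Answer the following questions as best you can. ..."  # the prompt above
history = [{"role": "user", "content": "Chart error counts by hour"}]  # example user turn

functions = [
    {
        "name": "query",
        "description": "Useful when you need information about the data in the logs: "
                       "type of data, requests, errors, performance, response times.",
        "parameters": {
            "type": "object",
            "properties": {
                "request": {
                    "type": "string",
                    "description": "What to look up, in natural language, not code.",
                },
            },
            "required": ["request"],
        },
    },
    {
        "name": "chart",
        "description": "Useful when you need to generate a chart to answer a question "
                       "about information contained within the log files.",
        "parameters": {
            "type": "object",
            "properties": {
                "request": {"type": "string", "description": "What to chart."},
            },
            "required": ["request"],
        },
    },
]

response = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[{"role": "system", "content": system_prompt}] + history,
    functions=functions,
    function_call="auto",  # let the model decide between answering and calling a function
)
```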

I am trying to do this exact thing. I have been finding the OpenAI response contains both a system message and a system function about 2/3 of the time. What luck have you been having with this?

Are you able to share your prompts or code so I can compare and perhaps improve mine?

For now I think the way to make it consistent would be two OpenAI calls for each step:

  1. generate a plan by asking for a system message response
  2. execute the next step in the plan by asking for a system function response

But I don’t really want to double the time and tokens.

My amateur-level attempt so far is here.
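In API terms, that two-call idea would look roughly like this (a sketch assuming the openai Python 0.x client; function_call="none" forces a plain planning message, and the second call can be "auto" or forced to a specific function):

```python
# Sketch of the two-call pattern: one call for a plan (no function allowed),
# one call for the next step (function allowed).
import openai

def plan_then_act(messages, functions):
    # 1. Generate a plan: function_call="none" guarantees a plain assistant message.
    plan = openai.ChatCompletion.create(
        model="gpt-4-0613",
        messages=messages,
        functions=functions,
        function_call="none",
    )["choices"][0]["message"]
    messages.append({"role": "assistant", "content": plan["content"]})

    # 2. Execute the next step: "auto" lets the model decide, or pass
    #    {"name": "some_function"} to force a specific call.
    step = openai.ChatCompletion.create(
        model="gpt-4-0613",
        messages=messages,
        functions=functions,
        function_call="auto",
    )["choices"][0]["message"]
    return plan, step
```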

Hi @liamgwallace

  1. generate a plan by asking for a system message response
  2. execute the next step in the plan by asking for a system function response

I ended up doing almost this, except I leave step (2) as auto.

I have been finding the OpenAI response contains both a system message and a system function about 2/3 of the time. What luck have you been having with this?

It’s significantly less than this. The oddest behavior I’ll get is that in step 1, it frequently just spits out the raw JSON function call.

My prompt:

  You are a Site Reliability Engineer. I am an engineer on your team. My questions are specific to resources we have deployed, not for the operational status of AWS services. Do not investigate the operational status of AWS services (ex: via their status page). Answer the following questions as best you can. DO NOT ANSWER QUESTIONS if they are unrelated to gathering data and making observations about AWS!

  If applicable, observe what you've learned and then ALWAYS share your plan in 3 sentences or less without numbered steps, command names, references to arguments, and code samples. Your plan can only use the provided functions. You CANNOT access logs so don't include this in your plan.
  The result of each function call will be added to the conversation history as an observation.

  AWS REGION: %%AWS_REGION%%

  If you need a timeframe and I don't refer to one, use the following timerange: %%TIMEFRAME%%

Ah yes, that is a lot fewer tokens than mine. I can't help but be wordy with mine.

System Prompt

"""
You are a professional-level decision-making assistant.
You have access to a variety of functions that can help the user. 
You will do 3 things.
 1. create a step-by-step plan and output this as a system message
 2. pick the next action on the plan and use the functions at your disposal to output the function and details as a system function
 3. If the plan is complete and you have enough to respond to the user then ignore 1 and 2 and output the response to the user in a system message
 
 
1. Planning
Let's devise a step-by-step plan to address the user query.
When outlining each step, provide a succinct description.
If you can foresee the need for any functions or parameters, mention them explicitly.
If unsure, use placeholders.

First, list all the tools you have at your disposal.
Then list ones that might help answer this question.
There may be many steps required, so start by searching for the information you need to make any function requests.
Don't assume anything; you don't know anything other than the information you have been told.

Each step should be in the following format:

{
  'Subtask': '<details of the problem and goal>',
  'Reasoning': '<small step to solve the problem>',
  'Function': '<list the function call that might help this step>',
  'Parameters': '<additional parameters to pass to the function>'
}

Represent the plan in the following format:

{
  'Complete_steps': [<list of steps>],
  'Current_step': [<list of steps>],
  'Next_steps': [<list of steps>]
}

2. Use a function
You must choose the next function to call in order to help the user where possible. 
Refer to the plan and any results from previous function calls you have made to select the function and populate the correct fields.
Make your best attempt to call a function where possible. Only respond to the user if you cannot solve the task or have solved the task.
Only use the functions you have been provided with. Do not guess at the data needed for a function; see if you can search for what is needed using a function at your disposal.

3. Task complete :)
DO NOT CALL A FUNCTION. Just return a response to the user.
"""

Example response appended to system prompt

%%%Example Question:
{find a good day this week to walk the dog and add it to my todos.}

%%%Example Answer
{
  Available functions:[list],
  Functions I may need for this task:[list],
  Plan:{
  "Complete_steps": [],
  "Current_step": [],
  "Next_steps": [
    {
      "Subtask": "Check the weather forecast for the current week",
      "Reasoning": "First I need to find a good day to walk the dog, we need to retrieve the weather forecast for the current week.",
      "Function": "pw_get_weather_forecast",
      "Parameters": {
        "location": "Cranleigh, Surrey",
        "forecast_type": ["daily"]
      }
    },
    {
      "Subtask": "Store the selected day in the todos",
      "Reasoning": "From the returned forecast I need to select the best day. e.g. avoid rain and wind. Si I can create a new todo item with the task 'Walk the dog' and the selected day as the due date.",
      "Function": "create_todo",
      "Parameters": {
        "todo": {
          "task": "walk dog on <best day and date>"
        }
      }
    },
    {
      "Subtask": "Return short output to user with my choice and basic reason. e.g. "I have checked this weeks forecast and <day&date> looks the nicest day for a walk. I have added it to your todo list.",
      "Reasoning": "I now have all the information needed to respond to the user, no further function calls are needed.",
      "Function": "none",
    }
  ]
  }
}

Thank you for sharing!

Generally, I observe worse accuracy the more I put into the system prompt. Have you experimented with:

  1. Removing step (2)

  2. Dropping step (3), and adding a function call called "task_complete" (rough sketch below)
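For (2), purely as a sketch (the schema here is made up for illustration):

```python
# Hypothetical "task_complete" function the model can call instead of the
# prompt's step (3). Schema invented for illustration.
task_complete = {
    "name": "task_complete",
    "description": "Call this when the plan is finished and you have everything "
                   "you need to answer the user.",
    "parameters": {
        "type": "object",
        "properties": {
            "answer": {
                "type": "string",
                "description": "The final response to show the user.",
            },
        },
        "required": ["answer"],
    },
}
# Append it to the existing functions list; when the model calls it, stop the
# loop and return the "answer" argument to the user.
```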