Calling a function without parameters

I have the following function description for a function called get_solution(), which has no input parameters. The function should be called when the prompt involves a question about detecting anomalies. It returns a string that should be fed back to the model as part of the prompt.

{
    "function": {
        "description": "Call this when you are asked this question: Given the following dataset, where is the anomaly?",
        "name": "get_solution"
    },
    "type": "function"
}
import pandas as pd

def get_solution():
    df = pd.read_csv("solution_data.csv")
    # Compute solution from the dataset (details omitted)
    solution = ...
    return solution

I see in the API response that the function has been called, but I don't think the output of the function is being used in the prompt. Is my described use case possible?

Thanks!

Not sure if you are using any library shortcuts; you don't even state whether you are using raw Completions or the Assistants API … but either way:

I assume you have code to:

  • interpret the function call and action it locally
  • return the answer from the function to the LLM in the correct format, along with all the necessary history?

Thanks for the quick reply. I am using the completions API:

api_response = openai.chat.completions.create(
    model=model,
    messages=message,
    temperature=temperature,
    max_tokens=max_gen,
    seed=seed,
    tools=tools,
    tool_choice="required",
)

Yes, I have code to interpret the function call and run it locally. Could you please elaborate on what you mean by returning the answer from the function to the LLM?

I am following this tutorial: https://platform.openai.com/docs/guides/function-calling#integration-guide

Sure … and sometimes an example is worth a thousand words:

[
  {
    "role": "assistant",
    "content": "",
    "tool_calls": [
      {
        "id": "call_pJEjP3m4NSIa73gQFLB0kj6K",
        "type": "function",
        "function": {
          "name": "calculate",
          "arguments": "{\"input\":\"Math.cbrt(4) + Math.sqrt(7)\"}"
        }
      }
    ]
  },
  {
    "role": "tool",
    "tool_call_id": "call_pJEjP3m4NSIa73gQFLB0kj6K",
    "content": "4.23315236303279"
  }
]

This is what is fed back to the LLM so it can continue on its way …

The important part for you to notice is:

  {
    "role": "tool",
    "tool_call_id": "call_pJEjP3m4NSIa73gQFLB0kj6K",
    "content": "4.23315236303279"
  }

which is your response to the function call.

Note that whether or not your function takes parameters is irrelevant (with no parameters, the tool call simply arrives with empty arguments). You still need to send the answer back to the LLM.

Note that with Completions you send the whole conversation, including the function calls and local answers, on every request, until you reach your preferred context “window” size. (Though once you have processed a QnA call and response, you could keep just the natural-language question and answer in the history and drop the function call and response, if you prefer.)
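
Putting it all together in Python, the round trip might look something like this (a sketch only, assuming a messages list like the message variable in your snippet, plus your model, tools and get_solution function):

import openai

# First call: the model decides to call the tool
api_response = openai.chat.completions.create(
    model=model,
    messages=messages,
    tools=tools,
    tool_choice="required",
)
assistant_message = api_response.choices[0].message

# Keep the assistant's tool call in the history ...
messages.append(assistant_message)

# ... interpret it and action it locally ...
for tool_call in assistant_message.tool_calls:
    if tool_call.function.name == "get_solution":
        result = get_solution()  # no parameters: arguments is just "{}"
        # ... and return the answer in the correct format
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": result,
        })

# Second call: the model answers using the tool output
# (drop tool_choice="required" here, or the model is forced to call a tool again)
final_response = openai.chat.completions.create(
    model=model,
    messages=messages,
)
print(final_response.choices[0].message.content)

That second call is where the function output actually reaches the model, which sounds like the step you are missing.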


@merefield Thanks for the explanation, everything works now!
