I have the following function description for a function called get_solution(), which takes no input parameters. The function should be called whenever the prompt involves a question about detecting anomalies. The function returns a string that the model should use in the prompt.
{
    "function": {
        "description": "Call this when you are asked this question: Given the following dataset, where is the anomaly?",
        "name": "get_solution"
    },
    "type": "function"
}
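For context, this is roughly how I pass that definition to the API with the Python SDK (just a sketch; the model name and the user message are placeholders):

```python
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_solution",
            "description": "Call this when you are asked this question: Given the following dataset, where is the anomaly?",
            # no input parameters
            "parameters": {"type": "object", "properties": {}},
        },
    }
]

messages = [
    {"role": "user", "content": "Given the following dataset, where is the anomaly? ..."}
]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=messages,
    tools=tools,
)

# The model responds with a tool call instead of a normal reply
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name)  # "get_solution"
```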
I can see in the API response that the function has been called, but I don't think the output of the function is being used in the prompt. Is my described use case possible?
Yes, I have code to interpret the function call and run it locally. Could you please elaborate on what you mean by returning the answer from the function to the LLM?
Note that whether or not you've used parameters is irrelevant: you still need to send the answer back to the LLM.
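Continuing the sketch from your question above (so `client`, `messages`, `tools`, `response` and your local get_solution() are assumed to exist already), sending the answer back looks roughly like this:

```python
# Append the assistant message that contains the tool call,
# then a "tool" message carrying the string your local function produced.
messages.append(response.choices[0].message)
messages.append(
    {
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": get_solution(),  # run locally, returns the answer string
    }
)

# Second request: the model now sees the function's answer and can use it.
final = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=messages,
    tools=tools,
)
print(final.choices[0].message.content)
```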
Note that with Completions you send the whole conversation, including the function calls and local answers, on every request, up to your preferred "window" size (though once you have processed a Q&A call and response, you could keep just the natural-language question and answer in the history and drop the function call and response if you prefer).
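For example (again only a sketch, reusing the names from the snippets above), a later turn just re-sends whatever history you decide to keep; once a Q&A round-trip is finished you can collapse it to plain question/answer messages:

```python
# Completions is stateless: every request carries whatever history you choose to send.
# Once the tool round-trip above is done, you can drop the tool-call and tool messages
# and keep only the natural-language question and answer.
history = [
    {"role": "user", "content": "Given the following dataset, where is the anomaly? ..."},
    {"role": "assistant", "content": final.choices[0].message.content},
]

# Next turn: append the new user message and send the trimmed history again.
history.append({"role": "user", "content": "And what about this second dataset? ..."})
followup = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=history,
    tools=tools,
)
print(followup.choices[0].message.content)
```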