{
  tools: [
    {
      type: "function",
      function: {
        name: "output_json_ts_summary_keys",
        description:
          "Identify the distinct moments in the output and return them as a valid array of JSON objects",
        parameters: {
          type: "object",
          properties: {
            ts: {
              type: "number",
              description: "starting timestamp for the moment, in decimal format",
            },
            summary: {
              type: "string",
              description:
                "short summary of the topic during this timeframe. Limit to no more than 12 words.",
            },
          },
        },
      },
    },
  ],
  tool_choice: {
    type: "function",
    function: { name: "output_json_ts_summary_keys" },
  },
}
You can process the response into a list of items (which may be a list of one), handle each item by iterating over the list (or in parallel and async, if you want the spirit of parallel tool calls), and then return a list of results in your own format when writing the tool responses.
The tool_call IDs must be placed correctly in the tool response messages, as you can imagine.
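A minimal sketch of that loop, assuming a dispatcher you supply yourself (the name `handleToolCall` is hypothetical, not part of the OpenAI SDK), mapping each tool call to a "tool" message in parallel while keeping the IDs paired up:

```javascript
// Sketch: turn each tool call into a "tool" message, in parallel.
// `handleToolCall(name, args)` is a hypothetical dispatcher you provide;
// it must resolve to a string, since tool message content must be a string.
async function answerToolCalls(responseMessage, handleToolCall) {
  return Promise.all(
    responseMessage.tool_calls.map(async (toolCall) => ({
      role: "tool",
      tool_call_id: toolCall.id, // the ID must match the call it answers
      name: toolCall.function.name,
      content: await handleToolCall(
        toolCall.function.name,
        JSON.parse(toolCall.function.arguments)
      ),
    }))
  );
}
```

Push the assistant message first, then all of these tool messages, before making the follow-up completion request.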
messages.push({ role: "user", content: message });
const tools = [
  {
    type: "function",
    function: {
      name: "get_products_with_price",
      description:
        "Retrieves the price list of all products in inventory.",
      parameters: {
        type: "object",
        properties: {},
      },
    },
  },
  {
    type: "function",
    function: {
      name: "get_Business_Hours",
      description: "Returns the customer service business hours",
      parameters: {
        type: "object",
        properties: {},
      },
    },
  },
];
const response = await openai.chat.completions.create({
  model: "gpt-4-1106-preview", // or "gpt-3.5-turbo-1106"
  messages: messages,
  tools: tools,
  tool_choice: "auto", // auto is the default, but we'll be explicit
});
const responseMessage = response.choices[0].message;

// Step 2: check if the model wanted to call a function
const toolCalls = responseMessage.tool_calls;
let flagPrecios = false;
if (responseMessage.tool_calls) {
  // Step 3: call the function
  // Note: the JSON response may not always be valid; be sure to handle errors
  const availableFunctions = {
    get_products_with_price: getPriceList,
    get_Business_Hours: getBusinessHours,
  }; // multiple functions
  messages.push(responseMessage); // extend conversation with the assistant's reply
  for (const toolCall of toolCalls) {
    const functionName = toolCall.function.name;
    if (functionName === "get_products_with_price") {
      flagPrecios = true;
    }
    const functionToCall = availableFunctions[functionName];
    const functionArgs = JSON.parse(toolCall.function.arguments);
    const functionResponse = await functionToCall();
    messages.push({
      tool_call_id: toolCall.id,
      role: "tool",
      name: functionName,
      content: functionResponse, // must be a string; JSON.stringify it if it isn't
    }); // extend conversation with the function response
  }
  const secondResponse = await openai.chat.completions.create({
    model: "gpt-3.5-turbo-1106",
    messages: messages,
  }); // get a new response from the model where it can see the function responses
  console.log(secondResponse.choices[0].message.content);
}
Thanks, everyone. I am sorry if I am not following, but I believe my issue is a little different.
I need "tool_calls" to have more than one item in the response.
This does happen when I set "tool_choice" to "auto";
but not when I am explicit and set the function name, in which case I always get only one entry.
I can't choose "auto" because I can't rely on the model to decide whether it will output JSON or not; the content must always be parsable JSON.
I get it now. When you force the use of a specific function, the model calls the function only once.
I've never faced that situation, but I would try changing the model to gpt-4 (the current model is gpt-3.5).
If the behavior persists, I would try changing the parameter type from "object" to "array". Then your function will expect an array of {ts, summary} objects.
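A sketch of that change to the schema from the top of the thread: wrap the {ts, summary} pair inside an array property (the property name `moments` is my own invention, pick whatever fits), so a single forced call can carry every moment at once:

```javascript
// Hypothetical revised schema: one forced tool call returns ALL moments
// inside a single array argument, instead of one call per moment.
const tools = [
  {
    type: "function",
    function: {
      name: "output_json_ts_summary_keys",
      description:
        "Identify the distinct moments and return them all as one array",
      parameters: {
        type: "object",
        properties: {
          moments: {
            type: "array",
            items: {
              type: "object",
              properties: {
                ts: {
                  type: "number",
                  description: "starting timestamp in decimal format",
                },
                summary: {
                  type: "string",
                  description: "short summary, at most 12 words",
                },
              },
              required: ["ts", "summary"],
            },
          },
        },
        required: ["moments"],
      },
    },
  },
];
```

With tool_choice forced to this function, the single tool call's arguments should parse to { moments: [...] }, so the content is always one parsable JSON payload.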