Hello,
I am working on a chatbot that looks through construction documents. The new function calling has been great, but I have been running into an issue with the prompting of the function itself. GPT likes to call my function even when it is not necessary, and then, when the function fails, it responds that the function failed rather than using the other information provided to answer the user’s question. For example, here is my problematic function:
{
  name: "feature_counter",
  description: "Counts the number of features, i.e. doors, toilets, parking spots. " +
    "If an item is not listed, there are none. " +
    "Specify both the type of each item (Drawing, Set, or Project) and its name. " +
    "Returns an array of feature counts for each item specified. " +
    "If the user specifies a specific name, be sure that it is spelled properly via the list of names.",
  parameters: {
    type: "object",
    properties: {
      list: {
        type: "array",
        items: {
          type: "object",
          properties: {
            name_type: {
              type: "string",
              enum: ["Drawing", "Set", "Project"],
              description: "The type of this item."
            },
            name: {
              type: "string",
              description: "The name of this item."
            }
          },
          description: "An item to count the features of."
        },
        description: "Array of items to count features of."
      }
    }
  }
}
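For context, here is roughly the shape of my request (a sketch only: the model name and message contents are placeholders, not my exact code). I've included the function_call field since it seems like the main lever for controlling whether the function fires at all:

```javascript
// Sketch of the chat completion request payload (placeholder values).
// function_call: "auto" lets GPT decide; "none" forbids a call;
// { name: "feature_counter" } would force one.
const featureCounterFunction = { name: "feature_counter" /* definition above */ };

const request = {
  model: "gpt-3.5-turbo-0613", // placeholder model
  messages: [
    { role: "system", content: "Relevant excerpts from the construction documents: ..." },
    { role: "user", content: "Tell me about the walls in this set." }
  ],
  functions: [featureCounterFunction],
  function_call: "auto" // could be "none" when retrieval already answers the question
};
```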
And when my user prompts something like “Tell me about the walls in this set.”, it loves to call the function, even though I have already provided the information about the walls through some text in a system message. Specifically, through some embedding searches, I provide the text from the construction documents that is most relevant to the user’s prompt.
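That retrieval step looks roughly like this (a sketch with a hypothetical helper, not my exact code):

```javascript
// Sketch: build the system message from embedding-search results.
// retrievedChunks is assumed to hold the document excerpts most
// similar to the user's prompt (the retrieval itself is elsewhere).
function buildSystemMessage(retrievedChunks) {
  return {
    role: "system",
    content:
      "Answer from the excerpts below when possible. " +
      "Only call feature_counter for countable features.\n\n" +
      retrievedChunks.join("\n---\n")
  };
}
```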
For my purposes, this function can never count walls, so I have tried to explain that in the description, but it has not worked for me yet. Have any of you had success with excluding types or names from the possible arguments of a function?
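One thing I have been considering is restricting the schema itself rather than relying on the description, e.g. adding a parameter whose enum lists only the feature types the function can actually count (hypothetical parameter, with walls deliberately absent):

```javascript
// Sketch: a "feature" parameter whose enum lists only countable
// feature types, so "walls" is invalid at the schema level rather
// than merely discouraged in the description.
const featureProperty = {
  feature: {
    type: "string",
    enum: ["doors", "toilets", "parking spots"], // walls intentionally excluded
    description: "The feature to count. Only these feature types exist."
  }
};
```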
Have any of you had similar issues with GPT calling functions when it does not have to? If so, any recommendations on how I could fix this?
Thanks