I’ve encountered an issue with custom assistant functions that I’ve noticed others struggling with as well, so I wanted to share some insights.
Issue: If you modify a custom function by adding new parameters after it’s already been called in a thread, subsequent function calls within the same thread might not pass the newly required parameters.
Example Scenario: Consider a function defined as follows:
{
  "name": "get_weather",
  "description": "Determine weather in my location",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "The city and state, e.g. San Francisco, CA"
      },
      "unit": {
        "type": "string",
        "enum": ["c", "f"]
      }
    },
    "required": ["location"]
  }
}
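For reference, here is the same definition as it might appear in Python code. This is a minimal sketch: the actual attachment to an assistant is shown only as a comment (it assumes an OpenAI client and API key), so the point here is just the schema itself, where only "location" is required:

```python
# Tool schema from the definition above, as a Python dict.
# In this original version, only "location" is required; "unit" is optional.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Determine weather in my location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["c", "f"]},
            },
            "required": ["location"],
        },
    },
}

# Hypothetical attachment (requires the openai package and an API key):
# client.beta.assistants.create(model="gpt-4o", tools=[weather_tool])

required = weather_tool["function"]["parameters"]["required"]
print(required)
```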
Let’s say you run this assistant in a thread with the prompt “Get the weather in San Francisco”, and the model calls the function without passing the “unit” argument. If you later modify the function to make “unit” a required parameter, the model may still omit that parameter when it calls the function again in the same thread.
Hypothesis: This could be because the Large Language Model (LLM) sees the earlier function call in the thread history — made under the old definition, where “unit” wasn’t required — and imitates it, so it doesn’t include the new parameter in subsequent calls.
Solution: To ensure your modified functions work as intended, start a new thread after any modification. This avoids cases where the LLM appears to ignore your updated parameter requirements.
I hope this tip helps others who might be facing similar challenges!