I noticed some inconsistency in tool calls recently. This is my tool definition:
{
  "type": "function",
  "function": {
    "name": "verify_user",
    "description": "Requires the user to input their account number",
    "parameters": {
      "type": "object",
      "properties": {
        "account_number": {
          "description": "The user's account number",
          "type": "string"
        }
      },
      "required": [
        "account_number"
      ]
    }
  }
}
Until yesterday, the model always asked the user for their account number. Now it assumes it already knows the account number and fills in a default, either:
function args {'account_number': 'xxxxxx'}
or
function args {'account_number': '123456'}
I tried gpt-4-turbo as a test and it seems to work fine.
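As a stopgap I'm guarding my tool handler against these obvious placeholders before executing anything. This is just a sketch of my own workaround, not anything official: the helper names are mine, and the placeholder patterns are only the ones I've seen the model invent so far:

```python
def looks_like_placeholder(value: str) -> bool:
    """Heuristic: True if the argument looks like a model-invented default
    rather than a value the user actually supplied."""
    if not value:
        return True
    if set(value) <= {"x", "X", "*", "#"}:
        return True  # masked values like "xxxxxx"
    if value.isdigit() and "1234567890".startswith(value):
        return True  # ascending runs like "123456"
    if value.isdigit() and len(set(value)) == 1:
        return True  # repeated digits like "000000"
    return False


def handle_verify_user(args: dict) -> str:
    """Run verify_user only when the account number looks real; otherwise
    return a tool result telling the model to ask the user for it."""
    account = args.get("account_number", "")
    if looks_like_placeholder(account):
        return "ERROR: account_number missing or invalid; ask the user for it."
    return f"verifying account {account}"
```

Feeding the error string back as the tool result has so far pushed the model into actually asking the user, but I can't promise that holds across model versions.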
I have what I assume is the same issue. When I feed data to the chat model, it understands it, but it runs the tool several times and then stops because it hits the maximum number of agent executions; fetching from the chat itself works fine. I am using LangChain. When I changed the model from 4o to turbo, everything worked. 4o still handles prompts fine and produces more or less the desired output, but it fails with functions. I think it is a bug on OpenAI's side: they claim 4o is the latest and more stable, but in reality turbo works better and is far more stable. I have seen this issue from time to time before, but since 30.08 it has been consistent, and it even forced me to rewrite my code today, two hours before a presentation. Is there any intentional degradation of the model?
I was looking at this too. I saw the structure changed and updated to the new structure, but I still have the same issue unless I switch to turbo. And turbo has its own problem: the function to hand over to a human suddenly doesn't get called.