A 'Run' is in requires_action for a function call, but the arguments provided for the function call are invalid

I have defined a function for the chat agent to call when necessary. The function has mandatory fields, which are expected to be extracted from the user’s messages added to the thread. While testing, I noticed that the arguments extracted from the user’s input for the function call were wrong. At this stage the run is in the ‘requires_action’ state and is asking me to submit the output of the function call, but the LLM extracted the wrong arguments, so I cannot make the function call. From the documentation, I cannot cancel the run in this status either. How can I proceed if the arguments extracted and returned to my application for the function call are invalid? What can I do next?
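
For context, this is roughly the point in the flow where I get stuck (a minimal sketch using the Python SDK; thread_id and run_id stand in for the real IDs in my app):

import json
from openai import OpenAI

client = OpenAI()

run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)

if run.status == "requires_action":
    for tool_call in run.required_action.submit_tool_outputs.tool_calls:
        # The arguments the model extracted arrive as a JSON string
        args = json.loads(tool_call.function.arguments)
        # ...and here I can see the values don't match what the user actually wrote,
        # so I can't make the real function call with them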


Send an error message back. Something like:

{"status": "error", "message": "invalid parameters"}

and let the API sort it out by itself, usually by running another call, etc.

You may also add some more info in the message about the nature of the error so the API knows what to do, like:

{"status": "error", "message": "email is invalid. please ask user to provide valid email."}

Oftentimes the API will be able to sort it out.
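
If it helps, here is a minimal sketch of what that looks like with the Python SDK when the run is waiting in requires_action (run, thread_id and run_id come from your existing polling code; is_valid() and call_my_function() are placeholders for your own validation and your real function):

import json
from openai import OpenAI

client = OpenAI()

tool_outputs = []
for tool_call in run.required_action.submit_tool_outputs.tool_calls:
    args = json.loads(tool_call.function.arguments)
    if is_valid(args):
        # Arguments look good - make the real call and return its result
        output = call_my_function(**args)
    else:
        # Arguments are bad - report the problem instead of failing the run
        output = {"status": "error",
                  "message": "email is invalid. please ask user to provide valid email."}
    tool_outputs.append({"tool_call_id": tool_call.id, "output": json.dumps(output)})

# Submitting the outputs moves the run out of requires_action and gives the model
# another chance to respond, typically by asking the user for the missing value
client.beta.threads.runs.submit_tool_outputs(
    thread_id=thread_id, run_id=run_id, tool_outputs=tool_outputs
)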


Thank you very much for the suggestion, supershaneski. :+1:

I had the same idea while reading through the suggestion from OpenAI support to request the details from the user, but thought I’d send it as the output instead. Then, funny enough, my coding assistant completed it for me before I even started typing :smiley:

I wasn’t thinking of that as a means to let the LLM agent know it made a mistake or forgot something, but it worked.

Also, feel free to share your function template - I have made quite a few errors with them in the past. It’s very easy to make tiny mistakes.

The function template is not really the problem here, as I clearly and strictly state the requirements: which values are needed and should be taken from the user’s input, and that the model should ask the user to provide the input if they did not, or if it is unclear which value to extract.

I was testing this exact feature by leaving the information out on purpose, or obfuscating it a bit. Instead of asking, the LLM would just put a 0 or some random number without asking the user for it.
I fixed that part by instructing the LLM to cite which part of the user’s input it extracted each argument from.
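
Roughly, the change was to pair each extracted argument with a required “source” field in the function definition, something like this (the names here are just illustrative, not from my real template):

tools = [{
    "type": "function",
    "function": {
        "name": "register_user",  # illustrative name
        "description": "Register the user with details taken from their message. "
                       "If a value is missing or unclear, ask the user instead of guessing.",
        "parameters": {
            "type": "object",
            "properties": {
                "email": {"type": "string", "description": "The user's email address"},
                "email_source": {
                    "type": "string",
                    "description": "Exact quote from the user's message that the email was taken from",
                },
            },
            "required": ["email", "email_source"],
        },
    },
}]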

Share it or don’t - what I’m trying to say is that there are many ways a template can contain ‘errors’ or cause misunderstandings: things like ‘required’ in the wrong place in the JSON, required names that don’t match the actual property names, ambiguous language, and the interplay between the prompt and the functions. I have done it all.

Hi again. Thanks for offering your support, and I don’t mean to be secretive about the function template or anything. I’m just away from my setup right now, and I also didn’t think it was necessary since the issue was resolved.

But I’ll get in touch if I have any issues again.
Thank you, and have a nice day.
