Ask for function arguments as JSON

When GPT wants to call a function and is missing some required arguments, it asks the user for them. I would like to collect the arguments from the user one by one in a controlled way, so I can do some validation and avoid confusion. Is there a good way to prompt GPT to ask for missing arguments in a structured way, say as JSON, with the missing argument names as an easily extractable list?

Hey there and welcome to the community!

This sounds like you’re using the API, am I correct?

So, my first question is simple: are you asking this for yourself, or for your users? Essentially, are you trying to ease the burden on users when the model asks for more arguments, or are you trying to find a better prompting method for yourself so that whatever you're working on is easier to handle? There are prompt engineering techniques you could pull off if this is for you, but a system prompt for a custom GPT, for example, may require different techniques, because it would need to work "out of the box".

This sounds like a classic case of "Chain of Thought" prompting, whereby the user makes requests in smaller chunks in a step-by-step manner to help the LLM achieve their goal. It's a great method that works well, and many of us power users were using it before researchers even put a name to it. The problem, though, is that CoT prompting is a user-level technique: the user needs to know what it is, and then needs to apply it for it to work. Correct me if I'm wrong, but it almost sounds like you want the LLM, not the user, to initiate such a chain, right? Instead of the user walking the AI through the problem, the AI walks the user through what it needs, step by step, to achieve its goal.
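To make that model-led, one-argument-at-a-time flow concrete, here's a minimal sketch driven from your own code rather than the prompt. All names and the schema here are hypothetical, and the validation is deliberately simple: the client inspects a JSON-schema-like description of the function, and for each missing required argument it asks the user, validating the answer before moving on.

```python
# Hypothetical sketch: collect missing required arguments one at a time,
# validating each answer before the function would actually be called.

def collect_missing_args(schema, provided, ask):
    """schema: JSON-schema-like dict with 'required' and 'properties'.
    provided: arguments the model already extracted from the conversation.
    ask: callable that prompts the user and returns their raw answer."""
    args = dict(provided)
    for name in schema.get("required", []):
        while name not in args:
            raw = ask(f"Please provide a value for '{name}': ")
            expected = schema["properties"][name].get("type")
            if expected == "integer":
                try:
                    args[name] = int(raw)
                except ValueError:
                    continue  # invalid input: re-ask for the same argument
            else:
                args[name] = raw
    return args

# Example schema for a hypothetical hotel-booking function
schema = {
    "required": ["city", "nights"],
    "properties": {"city": {"type": "string"}, "nights": {"type": "integer"}},
}

# Simulated user answers, including one invalid value that gets re-asked
answers = iter(["Paris", "not-a-number", "3"])
result = collect_missing_args(schema, {}, lambda _prompt: next(answers))
print(result)  # {'city': 'Paris', 'nights': 3}
```

The point of the sketch is that the "chain" lives in your application loop, not in the model's goodwill: the LLM only ever sees one well-scoped question at a time.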

Keep in mind as well that the above technique is itself a kind of "structure". I'm unsure, though, whether this is the kind of structure you meant, or whether you mean something more like a strict format that GPT abides by whenever it needs to ask these kinds of questions.

Either way, this is going to be tougher, because there will need to be a lot of trial and error in the system prompt to ensure the model adheres to the desired structure at will, and that gets increasingly difficult as more context accumulates. It may forget and need to be reminded to use the format if the interaction isn't solely for this purpose. That, again, is difficult if the user isn't aware of any of this, and may defeat the purpose entirely.
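One way to sidestep that prompt fragility entirely is to compute the missing-argument list deterministically on the client side: compare what the model's partial tool call provided against the function schema's required list, and build the JSON yourself instead of asking the model to emit it. A minimal sketch, with hypothetical schema and argument names:

```python
import json

def missing_args(schema, provided):
    """Return the required argument names absent from 'provided'."""
    return [n for n in schema.get("required", []) if n not in provided]

# Hypothetical function schema and a partial set of extracted arguments
schema = {"required": ["origin", "destination", "date"]}
partial_call = {"origin": "SFO"}

missing = missing_args(schema, partial_call)

# The easily extractable list the question asked for, as JSON:
print(json.dumps({"missing_arguments": missing}))
# {"missing_arguments": ["destination", "date"]}
```

Because this list comes from your own code, not the model's output, it's reliable regardless of how well the prompt holds up; the model only has to phrase the follow-up questions.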

I’ll use this as a starting point, and we can see how useful this information is for you already and work toward helping you achieve what you want.