Functions not always returning a valid JSON object

I have been testing functions within an assistant/thread architecture for a few days, and I have run into an issue with the formatting of the JSON output from function calling.

For example, when I provide the following parameters to my function, the output is sometimes valid and sometimes not. (After the function call, a library user, for example in Node.js, should be able to JSON.parse the output.)

Function params

parameters: {
    type: 'object',
    properties: {
        command: {
            type: 'string',
            description: 'blablabla'
        }
    },
    required: ['command']
}

Output OK

{ "command": "content of the command returned by gpt" }

Output KO

{ command: "content of the command returned by gpt" }

Is there a way to avoid that?
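To illustrate the failure mode, here is a minimal sketch using the two outputs above: JSON.parse accepts the quoted-key version but throws on the unquoted one, because bare keys are legal JavaScript object-literal syntax but not legal JSON.

```javascript
// The valid output: keys are double-quoted, as JSON requires.
const ok = '{ "command": "content of the command returned by gpt" }';
console.log(JSON.parse(ok).command); // parses fine

// The invalid output: an unquoted key, which JSON.parse rejects.
const ko = '{ command: "content of the command returned by gpt" }';
try {
  JSON.parse(ko);
} catch (err) {
  console.log('parse failed:', err instanceof SyntaxError); // parse failed: true
}
```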


The AI has to generate language in the right way when it produces the text that will call a function.

It is subject to the same token sampling methods, where the default temperature and default top-p allow alternate, random tokens to be generated.

For example, for the first character inside your JSON, there might be 80% certainty that it should be ", but that also leaves 20% certainty of command or other tokens.
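A toy sketch of what nucleus (top-p) sampling does with such a distribution — the numbers are the hypothetical 80/20 split above, not real model logits:

```javascript
// Hypothetical next-token probabilities for the first character of the JSON.
const probs = [
  { token: '"', p: 0.8 },
  { token: 'command', p: 0.15 },
  { token: ' ', p: 0.05 },
];

// Nucleus sampling: keep the smallest set of tokens whose cumulative
// probability reaches top_p, and discard the rest.
function nucleus(tokens, topP) {
  const sorted = [...tokens].sort((a, b) => b.p - a.p);
  const kept = [];
  let cum = 0;
  for (const t of sorted) {
    kept.push(t);
    cum += t.p;
    if (cum >= topP) break;
  }
  return kept;
}

// With top_p = 0.8, only the correct '"' token survives.
console.log(nucleus(probs, 0.8).map((t) => t.token)); // [ '"' ]
```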

You can do two things:

  • Increase the probability: use clear property names and descriptions that are not confusing, plus system messages such as "the tools namespace API must use valid JSON format".
  • Decrease the alternates: an API parameter such as top_p = 0.8 discards the remaining 20% tail of token logits, and it can be set even lower while still keeping some language creativity.
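As a concrete sketch, lowering top_p is just an extra field on the request body. The model name, messages, and function name here are placeholders, not part of the original posts:

```javascript
// Request body for a chat completion with function calling; top_p is
// trimmed to reduce the chance of off-spec tokens in the arguments string.
const requestBody = {
  model: 'gpt-4', // placeholder model name
  messages: [
    { role: 'system', content: 'Function arguments must be strictly valid JSON.' },
    { role: 'user', content: 'Run the backup command.' }, // placeholder prompt
  ],
  functions: [
    {
      name: 'run_command', // placeholder function name
      parameters: {
        type: 'object',
        properties: {
          command: { type: 'string', description: 'blablabla' },
        },
        required: ['command'],
      },
    },
  ],
  top_p: 0.8, // discard the low-probability tail of token logits
};

console.log(requestBody.top_p); // 0.8
```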

Agree. In your instructions, be explicit that the assistant “must strictly adhere” to the JSON format.

I have been using function calling to generate a JSON string with multiple properties as input for an SQL query. It is tricky, but after multiple tests I have found that being as detailed and specific as possible in the instructions can make all the difference. Also, if certain properties require specific values, one approach is to list those values in the function's properties, for example:

"property name": {
    "type": "string",
    "description": "blablabla (choose from: value A, value B, value C, value D)"
}
If you have a longer list of values, it can also work to upload the list of acceptable values in a file and then explicitly state in your instructions that the assistant must choose from the values specified in that file for a certain property in the JSON string.
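Even with the allowed values spelled out, it is worth validating the parsed arguments on the client side. A minimal sketch, reusing the placeholder values from the example above:

```javascript
// Allowed values for the property, mirroring the description above.
const allowed = ['value A', 'value B', 'value C', 'value D'];

// Validate a parsed function-call argument against the list.
function isAllowed(value) {
  return allowed.includes(value);
}

console.log(isAllowed('value B')); // true
console.log(isAllowed('value Z')); // false
```

Note that JSON Schema also supports an `enum` keyword for constraining a property to a fixed set of values, which can be used in the function parameters themselves.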

I tried a few optimizations, mainly asking the model to adhere to strict JSON formatting and also adjusting the probabilities a bit, but I want to keep the language as creative as possible in my test case.

I ended up adding a library such as jsonrepair between the function call output and its JSON.parse, to sanitize the output before any use. Works like a charm.

IMHO, since function outputs are supposed to be formatted, such sanitization should be done on the SDK side beforehand.
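For illustration, here is a simplified stand-in for what such a repair step does in this particular case. This naive regex assumes the only defect is unquoted keys; the real jsonrepair library handles many more defects (single quotes, trailing commas, and so on), so treat this as a sketch, not a replacement:

```javascript
// Quote bare object keys so JSON.parse accepts them.
// Naive sketch only: the regex can misfire on colons inside string
// values, which a real repair library guards against.
function quoteBareKeys(text) {
  return text.replace(/([{,]\s*)([A-Za-z_][A-Za-z0-9_]*)(\s*:)/g, '$1"$2"$3');
}

const ko = '{ command: "content of the command returned by gpt" }';
const repaired = quoteBareKeys(ko);
console.log(JSON.parse(repaired).command); // content of the command returned by gpt
```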


I think jsonrepair is a good solution. I will bear that in mind myself.
