How to Make Assistant Return JSON?

Hi everyone.

Is there a way to use the `json_object` response format with the Assistants API?

I’ve tried the following, but it raises an “extra fields not permitted” error.


const myAssistant = await openai.beta.assistants.create({
  instructions: "Cool instruction",
  name: "Cool Assistant",
  tools: [{ type: "retrieval" }],
  model: "gpt-4-1106-preview",
  response_format: { type: "json_object" },
  file_ids: [],
});

error:

BadRequestError: 400 1 validation error for Request
body → response_format
extra fields not permitted (type=value_error.extra)

2 Likes

Afraid JSON mode is not yet supported in the Assistants API, as of Nov 2023.

5 Likes

I am taking the assistant output and passing it through Chat Completions with JSON mode enabled. It’s a little clunky, but it works until it’s supported directly.
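A minimal sketch of that two-step approach, assuming the `openai` npm package and an `OPENAI_API_KEY`; the model name and system prompt here are illustrative, not prescribed by the post:

```javascript
// Build a Chat Completions request that re-runs the assistant's free-text
// reply with JSON mode enabled (Chat Completions does accept response_format).
function buildJsonModeRequest(assistantText) {
  return {
    model: "gpt-4-1106-preview",
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content: "Restate the user's message as a JSON object.",
      },
      { role: "user", content: assistantText },
    ],
  };
}

// Usage (requires the openai package and OPENAI_API_KEY):
//   const openai = new OpenAI();
//   const completion = await openai.chat.completions.create(
//     buildJsonModeRequest(assistantText)
//   );
//   const data = JSON.parse(completion.choices[0].message.content);
```

The extra round trip costs tokens and latency, which is why it's only a stopgap.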

2 Likes

+1 for this feature. Was so excited about json mode and also Assistants. Bummer that it doesn’t work yet for assistants.

1 Like

I’m trying to instruct the model to return just JSON, but it keeps alternating between returning just JSON and JSON plus extra text. Do you know any prompt that will return just JSON in a consistent way?

1 Like

Honestly, I’ve had great success by adding, “Please don’t write anything in your reply outside of this JSON.” or “DO NOT output any additional text outside of the JSON.” after describing the structure of the JSON it needs to populate.

It seems ridiculous, but in a few hundred runs I haven’t had any mistakes so far.
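As a sketch, the technique above amounts to describing the schema first and appending the hard "JSON only" instruction last; the field names here are made up for illustration:

```javascript
// Assistant instructions: schema description followed by a strict
// "no extra text" suffix, as suggested in the post above.
const instructions = [
  "Reply with a JSON object of the form:",
  '{"title": string, "tags": string[]}',
  "DO NOT output any additional text outside of the JSON.",
].join("\n");
```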

3 Likes

JSON mode is not supported in the Assistants API, but it’s possible to work around it:

use Function Calling, define the function’s arguments to have the schema you want in your response, ask the model to always respond using only that function, then just parse the function arguments when the model responds. This way:

  • you always get a structured response with the right schema
  • even if the model doesn’t listen and returns extra text, JSON parsing won’t fail (because the JSON payload is a separate field in the API response)
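The parsing step of this workaround can be sketched as follows. The run-object shape matches what the Assistants API returns at `requires_action`; the `answer` function name and its schema are illustrative assumptions, not part of the API:

```javascript
// Pull the structured payload out of a run stuck at requires_action,
// instead of parsing the assistant's message text.
function extractStructuredResponse(run) {
  const call = run.required_action.submit_tool_outputs.tool_calls[0];
  // Function arguments arrive as a JSON string; parsing this field is what
  // guarantees a well-formed object even if the model also emitted prose.
  return JSON.parse(call.function.arguments);
}

// Example run fragment, as returned when the model "calls" the function:
const run = {
  required_action: {
    submit_tool_outputs: {
      tool_calls: [
        {
          function: {
            name: "answer",
            arguments: '{"sentiment":"positive","score":0.9}',
          },
        },
      ],
    },
  },
};

console.log(extractStructuredResponse(run));
// { sentiment: 'positive', score: 0.9 }
```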

I wrote a blog post with the complete example: Ensuring JSON Response Format in OpenAI Assistant API · Scorpil

7 Likes

I’m assuming this “workaround” then bricks the ability for the assistant to call any other defined function, as it will always call this function?

Not at all. I tell the assistant to always use the function in order to illustrate the method, but it’s not a must. You could, for example, tell it to use the response tool only once all the necessary information is present, or instruct it in any other way.

1 Like

A simple, quick workaround without using a function and an additional call is just to tell it to return JSON with specific keys. You can provide an example of the format if the requested structure isn’t too complex. If you’re using Python etc., then parse the formatted string.

With the workaround proposed by @scorpil, the run will never reach the completed status, correct?

As far as I can tell, yes. The approach itself works, and under this approach you don’t need a final message or a “completed” status; the run will eventually get the status “expired”. I used https://platform.openai.com/docs/api-reference/runs/cancelRun to cancel the run at the “requires_action” stage. Maybe someone will come up with a better solution.
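Putting the two posts together, a rough sketch of read-then-cancel might look like this. It assumes the `openai` npm package (client passed in as `openai`) and that the run is already at `requires_action`; the thread/run IDs come from your own application:

```javascript
// Read the structured payload from the pending tool call, then cancel the
// run so it doesn't sit around until it expires.
async function readAndCancel(openai, threadId, runId) {
  const run = await openai.beta.threads.runs.retrieve(threadId, runId);
  const call = run.required_action.submit_tool_outputs.tool_calls[0];
  const payload = JSON.parse(call.function.arguments);
  // Cancel instead of submitting tool outputs, per the cancelRun endpoint
  // linked above.
  await openai.beta.threads.runs.cancel(threadId, runId);
  return payload;
}
```

Cancelling promptly also frees you from polling a run you never intend to complete.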

The new version of Assistants broke the workaround: I can’t seem to get the requires_action status anymore. There is an option in the configuration to return a JSON result, but it only works if the tools are Functions, and I need to convert a DOCX to JSON, which requires an attachment.