Assistant API function: responses in markdown and not JSON

I thought the whole point of “functions” was for the AI to give you responses as an easy-to-parse JSON object.

I wrote a function. GPT-4-turbo is giving me answers in markdown. They are organized properly and correct, but I was expecting JSON.

I can parse the markdown (not as easy for DB work) or, how do I force it to reply in JSON?


Are you giving it a schema?
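If you define the function as a tool and force it with `tool_choice`, the model's reply arrives as a tool call with JSON arguments instead of free-form markdown. A minimal sketch of the request body (the `save_record` function and its fields are hypothetical placeholders):

```python
import json

# Sketch of a Chat Completions request body that defines a function
# schema and forces the model to call it. Function name and fields
# are hypothetical examples, not from the original post.
def build_function_request(user_message: str) -> dict:
    return {
        "model": "gpt-4-turbo",
        "messages": [{"role": "user", "content": user_message}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "save_record",  # hypothetical function name
                "description": "Store the extracted record in the DB.",
                "parameters": {  # JSON Schema for the arguments
                    "type": "object",
                    "properties": {
                        "title": {"type": "string"},
                        "score": {"type": "number"},
                    },
                    "required": ["title", "score"],
                },
            },
        }],
        # Forcing this specific tool means the answer comes back as
        # tool_calls[0].function.arguments (a JSON string), not prose.
        "tool_choice": {"type": "function",
                        "function": {"name": "save_record"}},
    }

payload = build_function_request("Summarize this article ...")
print(json.dumps(payload["tool_choice"]))
```

You would pass this dict to the API via your client library; the point is that `tool_choice` removes the model's option to answer in markdown at all.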

Making sure the last thing the model sees is "Start your response with `{`" also helps.

Finally, you can try to suppress ``` with a negative logit bias and encourage { with a positive one. https://platform.openai.com/docs/api-reference/chat/create#chat-create-logit_bias
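A sketch of what that logit-bias request could look like. Note that `logit_bias` is keyed by token ID, not by string, and IDs vary by tokenizer; look them up with tiktoken for your model (e.g. `tiktoken.encoding_for_model("gpt-4-turbo").encode("```")`). The IDs below are placeholders, not real values:

```python
# Sketch: suppress the ``` fence token and encourage "{" via logit_bias.
# The token IDs here are placeholders; compute real ones with tiktoken
# for the model you are actually calling.
FENCE_TOKEN_ID = 11111  # placeholder: id of "```"
BRACE_TOKEN_ID = 22222  # placeholder: id of "{"

def with_logit_bias(payload: dict) -> dict:
    biased = dict(payload)
    biased["logit_bias"] = {
        str(FENCE_TOKEN_ID): -100,  # -100 effectively bans the token
        str(BRACE_TOKEN_ID): 5,     # mild nudge toward opening a JSON object
    }
    return biased

request = with_logit_bias({"model": "gpt-4-turbo", "messages": []})
print(request["logit_bias"])
```

Keep positive biases small; a large positive bias can make the model emit `{` where it doesn't belong.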


Edit: but if you're defining functions and the model isn't calling them, the real problem may be that it doesn't understand when or how to use your function. :thinking: Iterating on your prompt and reminding the model to use the function might help.

Literally just tell it to respond with JSON and be specific about which attributes it should use. Also make sure to tell it to return ONLY JSON.
Just FYI, the 3.5 models will not be able to do this consistently well.
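Even with a "return ONLY JSON" instruction, the model occasionally wraps its answer in a ```json fence, so it's worth stripping that defensively before parsing. A small sketch:

```python
import json
import re

# Workaround sketch: strip a markdown code fence from the reply (if
# present) before handing it to json.loads, so an occasional
# ```json ... ``` wrapper doesn't break DB ingestion.
def parse_json_reply(text: str) -> dict:
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)

print(parse_json_reply('```json\n{"title": "ok", "score": 3}\n```'))
```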

Thanks guys, yes, "tell it to" was the obvious answer, but that still isn't 100% reliable.

good news though:
the new Assistants V2 API has a checkbox for “Reply in JSON”
(I assume that will be 100% reliable.)
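In the API, that checkbox corresponds to the `response_format` parameter ("JSON mode"). A sketch of the request body; note the docs require the word "JSON" to appear somewhere in your messages when this mode is on:

```python
import json

# Sketch: JSON mode via response_format. With this set, the model is
# constrained to emit a syntactically valid JSON object (it does not
# enforce any particular schema, just valid JSON).
payload = {
    "model": "gpt-4-turbo",
    "response_format": {"type": "json_object"},
    "messages": [
        {"role": "system", "content": "Reply only with a JSON object."},
        {"role": "user", "content": "List three colors as JSON."},
    ],
}
print(json.dumps(payload["response_format"]))
```

JSON mode guarantees well-formed JSON, but it won't pick your field names for you; you still need to spell out the attributes in the prompt (or use a function schema).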