gpt-4-vision-preview with response_format

Does anyone know if there’s a plan to include the response_format parameter on the gpt-4-vision-preview model? I’m trying to extract structured data from images.

Hi @batten.tyler

response_format only guarantees that the generation will be JSON-parsable; it doesn't, by itself, make the model produce the JSON you want. You still have to ask for it in the prompt.

You can write a system message that describes the structure of the JSON object you wish to receive, and the model will respond with that structure (most of the time).
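A minimal sketch of that approach, assuming the standard chat-completions message format (the field names `caption`/`objects`, the image URL, and the commented-out client call are illustrative, not from the thread). The system message spells out the exact JSON shape, and the reply is parsed with `json.loads`:

```python
import json

# Describe the exact JSON shape you want in the system message.
system_prompt = (
    "You are an image-analysis backend. Respond ONLY with a JSON object "
    'of the form {"caption": string, "objects": [string, ...]}. '
    "No prose, no markdown."
)

messages = [
    {"role": "system", "content": system_prompt},
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Extract structured data from this image."},
            # Hypothetical image URL for illustration:
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    },
]

# With the openai Python client you would then call something like:
# response = client.chat.completions.create(
#     model="gpt-4-vision-preview", messages=messages, max_tokens=500
# )
# reply = response.choices[0].message.content

# Pretend reply, to show the parsing step:
reply = '{"caption": "a cat on a sofa", "objects": ["cat", "sofa"]}'
data = json.loads(reply)
```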


gpt-4-turbo models have an added bonus: they think they are in ChatGPT and love to put stuff in code blocks, even when completely inappropriate for the role and task given to the AI.

So for reliable JSON output, you also have to say “markdown output is prohibited”, “AI is a backend processor without markdown render environment”, “you are communicating with an API, not a user”, “Begin all AI responses with the character ‘{’ to produce valid JSON”.

And then you can hit token 63, 14196, 74694 with a negative logit_bias for good measure to stop the backticks.
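Putting those two tricks together might look like the sketch below (the token IDs are the ones quoted above, which correspond to backtick sequences in the cl100k_base tokenizer; a bias of -100 effectively bans a token):

```python
# Request parameters combining an anti-markdown system prompt with a
# logit_bias that suppresses backtick tokens (IDs quoted in the post above).
request_params = {
    "model": "gpt-4-vision-preview",
    "logit_bias": {"63": -100, "14196": -100, "74694": -100},
    "max_tokens": 500,
    "messages": [
        {
            "role": "system",
            "content": (
                "You are a backend processor without a markdown render "
                "environment, communicating with an API, not a user. "
                "Markdown output is prohibited. Begin all responses with "
                "the character '{' to produce valid JSON."
            ),
        },
        {"role": "user", "content": "Describe the attached image as JSON."},
    ],
}
```

These parameters would then be passed straight to the chat-completions endpoint.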

response_format just ensures that the AI goes into a repeating loop of linefeeds and tabs if JSON output isn't also properly specified in the prompt.


YOU ARE A GODSEND. Thanks for the help. I'd been stuck trying to get valid JSON, my json.loads wasn't working, and my noob ass couldn't understand why.
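Since the failure mode described here is usually a markdown-fenced reply breaking `json.loads`, a small helper (illustrative, not from the thread) can strip an optional fence before parsing:

```python
import json
import re


def parse_model_json(text: str):
    """Strip an optional ```json ... ``` fence, then parse with json.loads."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    if match:
        text = match.group(1)
    return json.loads(text)


# Works whether or not the model wrapped its answer in a code block:
wrapped = '```json\n{"name": "example", "count": 3}\n```'
plain = '{"name": "example", "count": 3}'
parse_model_json(wrapped)  # -> {"name": "example", "count": 3}
parse_model_json(plain)    # -> same result
```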
