I have observed cases where the generated JSON response is missing properties I expected to receive in the JSON structure.
All the documented steps to enable JSON mode were followed:
"model": "gpt-4o",
"response_format": {
"type": "json_object"
},
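For context, here is a minimal sketch of the full request body I am describing, with JSON mode enabled. The message contents are just illustrative placeholders (the docs also require the word "JSON" to appear somewhere in the prompt when using `json_object` mode):

```python
import json

# Sketch of a Chat Completions request body with JSON mode enabled.
# Message contents are placeholders, not the actual prompt used.
payload = {
    "model": "gpt-4o",
    "response_format": {"type": "json_object"},
    "messages": [
        {"role": "system", "content": "The response must be in JSON format."},
        {"role": "user", "content": "Return the proof-of-gain fields as JSON."},
    ],
}

body = json.dumps(payload)  # this is what gets sent over the wire
```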
In the schema, the required properties were configured as in this snippet:
"required": [
"proofOfgain",
"proofOfGainType"
]
And in the user and system messages:
The response must be in JSON format.
Even so, the generated response had missing properties. This made me suspect that the gpt-4o model is still not compatible with JSON mode, and I couldn't find anything in the documentation that explicitly states it is.
Could someone help me clarify this?
According to the official documentation it is compatible with JSON mode:
To prevent these errors and improve model performance, when using gpt-4o, gpt-4-turbo, or gpt-3.5-turbo, you can set response_format to { "type": "json_object" } to enable JSON mode. When JSON mode is enabled, the model is constrained to only generate strings that parse into valid JSON object.
Source: https://platform.openai.com/docs/guides/text-generation/json-mode
Perhaps you need to make some adjustments to your prompt with regards to the expected properties?
Thank you very much for the feedback @jr.2509. Apparently JSON mode only guarantees that the output is syntactically valid JSON; it does not guarantee that the schema structure is respected. Even when properties are marked as required in the schema, they may still be missing from the final response.
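Since JSON mode only guarantees syntactic validity, one workaround is to check the required keys yourself after parsing, and re-prompt when any are missing. A minimal sketch (the key names come from the `required` snippet above; the re-prompt step is just an illustration, not a library feature):

```python
import json

# Keys the schema marks as required (from the snippet above).
REQUIRED_KEYS = {"proofOfgain", "proofOfGainType"}

def missing_required(raw: str) -> set:
    """Parse a JSON-mode response and return any required keys it lacks."""
    data = json.loads(raw)  # JSON mode guarantees this parse succeeds
    return REQUIRED_KEYS - data.keys()

# Example: a syntactically valid response that still omits a required key.
response = '{"proofOfgain": "receipt.pdf"}'
missing = missing_required(response)
if missing:
    # e.g. retry the request, naming the missing properties in the prompt
    print(f"Missing required properties: {sorted(missing)}")
```

This kind of post-hoc validation (optionally with a retry loop) is a common way to compensate for JSON mode not enforcing the schema itself.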
Yes, that’s my understanding and experience, too. You can always consider adding an example to your prompt.