Summary created by AI.
The discussion revolves around the usage of the new JSON mode for chat completions on the OpenAI platform. chickenlord888 initiates the conversation, asking for guidance on enabling JSON mode from Python, since the guide in the OpenAI documentation is ambiguous. PaulBellow shares example Python code based on his reading of the guide, explaining each part of it to help chickenlord888.
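PaulBellow's actual snippet is not reproduced in this summary; the following is a minimal sketch of how JSON mode is typically enabled with the openai v1 Python SDK, assuming the gpt-3.5-turbo-1106 model mentioned later in the thread (note that the word "json" must appear somewhere in the messages, a requirement also raised later in the thread):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    # Enables JSON mode: the model is constrained to emit a valid JSON object.
    response_format={"type": "json_object"},
    messages=[
        # The word "json" has to appear in the messages, or the API rejects the request.
        {"role": "system", "content": "You are a helpful assistant. Reply with a JSON object."},
        {"role": "user", "content": "List three primary colors as JSON."},
    ],
)

print(response.choices[0].message.content)  # a JSON string you can pass to json.loads
```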
However, krthr runs into problems with the new JSON mode while using it with gpt-4-vision-preview, resulting in a 400 Bad Request error. chickenlord888 says that the code works with the gpt-3.5-turbo-1106 model and relays a need for it to work with the Vision model. maybeno suggests a code sample based on the documentation for gpt-3.5-turbo.
Nevertheless, bleugreen and ryandetzel report issues using the API with JSON mode, citing unreliable results and the necessity to use the word "json" in the prompt. On the other hand, preston.mccauley provides a Python function to classify an image using the new JSON mode in the OpenAI API with the gpt-4-vision-preview model. tiffiana points out that preston.mccauley's code example lacks the response_format parameter in the payload.
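preston.mccauley's function is not quoted in this summary; the sketch below only illustrates what a gpt-4-vision-preview classification call looks like with the v1 Python SDK. The helper name classify_image and the file name are hypothetical, and as discussed further down the thread, adding the response_format parameter that tiffiana flagged as missing is rejected by this model anyway.

```python
import base64
from openai import OpenAI

client = OpenAI()

def classify_image(path: str) -> str:
    """Hypothetical helper: sends a local image to gpt-4-vision-preview and returns its label."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4-vision-preview",
        max_tokens=300,
        # Adding response_format={"type": "json_object"} here is what tiffiana noted was missing,
        # but per later posts in the thread the vision preview model rejects it with a 400 error.
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Classify this image into a single category."},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                    },
                ],
            }
        ],
    )
    return response.choices[0].message.content

# Example usage (hypothetical file name):
# print(classify_image("photo.jpg"))
```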
Following up on his earlier post, preston.mccauley suggests that a bug in the OpenAI rate-limiting process may be causing his code to fail and provides a modified version. wenquai shares comprehensive guidance on achieving the desired functionality in Node.js, emphasizing the impact of shaping the prompt correctly. However, tiffiana notes that OpenAI has updated its API documentation and that the gpt-4-vision-preview model no longer supports JSON mode. Lastly, rokbenko shares a YouTube tutorial and a GitHub repository with full code examples for getting responses in JSON format using Python and Node.js.
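Because gpt-4-vision-preview does not accept response_format, the workaround discussed in the thread is to shape the prompt instead. wenquai's guidance was for Node.js; the sketch below shows the same idea in Python under assumed details (the image URL and the {"label", "confidence"} schema are illustrative only): the JSON requirement goes in the prompt and the reply is parsed manually.

```python
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=300,
    # No response_format here: the vision preview model does not support JSON mode,
    # so the JSON requirement is expressed entirely in the prompt.
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "Describe this image. Respond ONLY with a JSON object of the form "
                        '{"label": string, "confidence": number} and no other text.'
                    ),
                },
                # Hypothetical image URL for illustration.
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

raw = response.choices[0].message.content

# Models sometimes wrap the object in a Markdown code fence, so strip stray backticks first.
cleaned = raw.strip().strip("`").removeprefix("json").strip()
data = json.loads(cleaned)
print(data["label"], data["confidence"])
```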
Summarized with AI on Dec 24 2023
AI used: gpt-4-32k