How do I use the new JSON mode?

gpt-3.5-turbo-1106 does indeed work – thanks!

But what I really need is for it to work with Vision (like @krthr). I guess I’ll wait a bit.

Based on the documentation for gpt-3.5-turbo:

curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
  "model": "gpt-3.5-turbo-1106",
  "response_format": {"type": "json_object"},
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant that extracts data and returns it in JSON format."
    },
    {
      "role": "user",
      "content": "What is the weather like in Boston?"
    }
  ],
  "functions": [
    {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"]
          }
        },
        "required": ["location"]
      }
    }
  ],
  "function_call": "auto"
}'

So there is no support for gpt-4-1106?

{
  "model": "gpt-3.5-turbo-1106",
  "messages": [
    {
      "role": "user",
      "content": "What is the weather like in Boston?"
    }
  ],
  "response_format": {"type": "json_object"}
}

The API works (model="gpt-4-1106-preview", response_format={ "type": "json_object" }), but I’m not getting reliable results. There’s this error:

BadRequestError: {'error': {'message': "'messages' must contain the word 'json' in some form, to use 'response_format' of type 'json_object'."

which is interesting, but if I’m not direct about “as json” and just include the word somewhere random, it spins for 1m+ and then returns a couple hundred ‘\n’ characters (I’m guessing this is some sort of regex state-machine loop it gets stuck in when it wants to break JSON formatting).
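One way to avoid that 400 before sending: a pre-flight check that mirrors the validation described in the error message (the case-insensitive substring match is an assumption about how the API checks, based only on the error text):

```python
def messages_mention_json(messages):
    """True if any message content contains 'json' (case-insensitive);
    the 400 error above suggests the endpoint requires this before it
    accepts response_format={'type': 'json_object'}."""
    return any("json" in str(m.get("content", "")).lower() for m in messages)

# This message set would be rejected with the 400 above:
bad = [{"role": "user", "content": "What is the weather like in Boston?"}]
# Adding a system message that mentions JSON passes the check:
good = [{"role": "system", "content": "Reply in JSON."}] + bad
```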

from openai import OpenAI
from google.colab import userdata

messages = [
    {"role": "system", "content": "List of months that have 30 days in json"},
]

client = OpenAI(api_key=userdata.get('OPENAI_API_KEY'))

completion = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=messages,
    response_format={"type": "json_object"}
)

print(completion.choices[0].message.content)

You have to set the response format AND include the word "json" somewhere in your prompt.
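The two requirements above can be wrapped in one helper (a sketch; json_mode_kwargs and the fallback system-message wording are my own, not from the SDK):

```python
def json_mode_kwargs(model, messages):
    """Build chat.completions.create kwargs with JSON mode enabled,
    appending a JSON instruction when no message already mentions it."""
    if not any("json" in str(m.get("content", "")).lower() for m in messages):
        messages = messages + [
            {"role": "system", "content": "Respond only with valid JSON."}
        ]
    return {
        "model": model,
        "messages": messages,
        "response_format": {"type": "json_object"},
    }
```

Then client.chat.completions.create(**json_mode_kwargs(...)) should not trip the "must contain the word 'json'" check.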

+1: it looks like function calling and the response_format="json_object" feature specifically don’t work when you’re using the vision model right now:

from openai import OpenAI
client = OpenAI()

functions = [
    {
        "name": "classify_animal",
        "description": "Classify the animal in a given image",
        "parameters": {
            "type": "object",
            "properties": {
                "type": {
                    "type": "string",
                    "description": "The type of animal",
                },
                "unit": {"type": "string", "enum": ["dog", "cat", "fish"]},
            },
            "required": ["type"],
        },
    }
]
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What animal is in the picture?"
            },
            {
                "type": "image_url",
                "image_url": base64_urlencode_image(dog)
            }
        ]
    }
]
completion = client.chat.completions.create(
    model="gpt-4-vision-preview",
    response_format={"type": "json_object"},
    messages=messages,
    functions=functions,
    function_call="auto"
)

print(completion)
BadRequestError: Error code: 400 - {'error': {'message': '3 validation errors for Request\nbody -> function_call\n  extra fields not permitted (type=value_error.extra)\nbody -> functions\n  extra fields not permitted (type=value_error.extra)\nbody -> response_format\n  extra fields not permitted (type=value_error.extra)', 'type': 'invalid_request_error', 'param': None, 'code': None}}

Noticed the same issue. Only the gpt-4 turbo model can accept the response_format field.

I can’t get JSON mode to work with gpt-4-vision-preview, but gpt-3.5-turbo-1106 and gpt-4-1106-preview work.

Maybe I need gpt-4-vision-1106-preview which doesn’t exist in my model list. :frowning:

One thing I noticed: the message itself needs to contain the word “JSON.” Try appending that to the beginning/end of the message payload, or even embed it in the image itself!

If this hasn’t been answered yet: I got it working like this tonight, with the new vision GPT-4.

Function to classify an image using the OpenAI API:

import logging
import requests

def classify_image_with_openai(api_key, image_path):
    base64_image = encode_image_for_api(image_path)

    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}"
    }

    payload = {
        "model": "gpt-4-vision-preview",
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "Provide a brief summary of this image, highlighting the main objects, the setting, any apparent activities, the mood, and notable colors or styles present. Return a detailed description of the image, including at least 10 objects present in the image as a list with the type of object and confidence level. In this format [object: percentage confidence, object2: percentage confidence]"
                    },
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{base64_image}"
                        }
                    }
                ]
            }
        ],
        "max_tokens": 300
    }

    response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
    if response.status_code == 200:
        logging.info("Image classification results received.")
        return response.json()
    else:
        logging.error(f"OpenAI API error: {response.text}")
        return None
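Worth noting: that prompt asks for a bracketed "object: confidence" format rather than actual JSON, so the reply needs its own ad-hoc parsing (parse_confidence_list is a hypothetical sketch for that exact format, not part of any SDK):

```python
def parse_confidence_list(text):
    """Parse a reply like '[dog: 97% confidence, grass: 80% confidence]'
    into a dict of object name -> confidence percentage. This is the
    ad-hoc bracket format the prompt asked for, not JSON."""
    result = {}
    for part in text.strip().strip("[]").split(","):
        name, _, rest = part.partition(":")
        digits = "".join(ch for ch in rest if ch.isdigit() or ch == ".")
        if name.strip() and digits:
            result[name.strip()] = float(digits)
    return result

scores = parse_confidence_list("[dog: 97% confidence, ball: 80% confidence]")
# -> {'dog': 97.0, 'ball': 80.0}
```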

Do you mind posting the entire request, including the prompt? I still get the same error even if I mention JSON in the system message as well as the user message. response_format={"type": "json_object"} throws an error. I updated to the latest openai (1.1.1).

Thanks for updating the code with the prompt.
However, the example you provided doesn’t have "response_format": {"type": "json_object"} in the payload.
It will still output JSON, but the output may contain other text before/after the JSON string.
Whereas JSON mode ensures that the output contains only a valid JSON string.
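Until JSON mode is supported on the vision model, a best-effort workaround is to cut the JSON object out of the surrounding text (extract_json_object is a hypothetical helper; it assumes the reply contains exactly one top-level object):

```python
import json

def extract_json_object(text):
    """Best effort: parse the substring between the first '{' and the
    last '}' of a model reply that wraps JSON in extra prose."""
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no JSON object found in reply")
    return json.loads(text[start:end + 1])

reply = 'Sure! Here is the summary:\n{"animal": "dog", "confidence": 0.97}\nHope that helps.'
data = extract_json_object(reply)  # -> {'animal': 'dog', 'confidence': 0.97}
```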

This worked with gpt-4-1106-preview; you must have "JSON" somewhere in the message sent.

{
    "model": "gpt-4-1106-preview",
    "response_format": {"type": "json_object"},
    "messages": [
        {"role": "system", "content": "Answer in json format"},
        {"role": "user", "content": "Hello, what's up?"}
    ],
    "temperature": 0.0
}

Note: I kept getting “Unrecognized request argument supplied: response-format” until I realized the key is response_format (underscore), not response-format (hyphen).

When we do get JSON output, I’m seeing that the message response is still just a text string, and the “json” shows \n (newlines). Is this expected output?
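For what it’s worth, JSON mode delivers the JSON as one plain string in message.content, so embedded \n characters are expected; they are ordinary JSON whitespace and json.loads handles them (illustrative content string below):

```python
import json

# JSON mode returns the whole object as a single string; the literal
# newlines inside it are legal JSON whitespace, not a formatting bug.
content = '{\n  "status": "ok",\n  "items": [1, 2, 3]\n}'
parsed = json.loads(content)  # -> {'status': 'ok', 'items': [1, 2, 3]}
```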

I’m also trying gpt-4-vision and JSON output, without success.

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}"
}

payload = {
    "model": "gpt-4-vision-preview",
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": prompt
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{base64_image}",
                        "detail": "low",
                    }
                }
            ]
        }
    ],
    "max_tokens": max_tokens,
    "response_format": {"type": "json_object"},  # This is not currently working on gpt-4-vision-preview
}
response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)

I get this response:

{'error': {'message': '1 validation error for Request\nbody -> response_format\n  extra fields not permitted (type=value_error.extra)',
  'type': 'invalid_request_error',
  'param': None,
  'code': None}}

Yeah, I can get it to work with gpt-3.5-turbo-1106 and gpt-4-1106-preview, but not gpt-4-vision-preview.

I wonder if on the 1106 preview it hasn’t rolled out to everyone yet. I posted my response object above. Once again, this is what I was using last night for the vision model, not gpt-4-1106.

You mean the code where you neither specify the response format nor use “json” in your API language, and it only produces JSON-like output because you explicitly prompted for a “list” that is not a list?

Yes, that will work on the vision model.

(to get an error)