Really want to use the new JSON mode for Chat Completions. The docs (OpenAI Platform) just say to “set response_format to { type: "json_object" } to enable JSON mode”, but I’m not sure what this means in terms of Python code. Does anyone know how to do it?
Haven’t tested, but give this a try…
Welcome to the community, chickenlord888!
To use the new JSON mode in the OpenAI API with Python, you would modify your API call to specify the response_format parameter with the value { type: "json_object" }. This is how you tell the API that you want the response in JSON mode.
Below is an example of how you might set up your API call in Python:
import openai
openai.api_key = 'your-api-key'
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Translate the following English text to French: 'Hello, how are you?'",
    response_format={ "type": "json_object" }
)
print(response)
In this snippet:
- Replace 'your-api-key' with your actual OpenAI API key.
- The response_format parameter is being set to a Python dictionary that represents the JSON object { type: "json_object" }.
- model should be set to whichever AI model you’re intending to use (as of my last update, “text-davinci-003” was a current model, but you should check for the latest available).
- prompt is whatever you want to send to the model.
When you make this API call, the response you get will be formatted as a JSON object, which is often more structured and easier to parse than plain text, especially if you’re dealing with complex data or you want to integrate the response into other systems that work with JSON.
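For instance, once you have the model’s reply text, you can load it straight into a Python object (a generic sketch; reply_text is just a placeholder for whatever string your call returns):
import json

reply_text = '{"greeting_fr": "Bonjour, comment allez-vous ?"}'  # placeholder for the model's JSON reply
data = json.loads(reply_text)  # with JSON mode the reply should be valid JSON, so this parses into a dict
print(data["greeting_fr"])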
Be sure to check the latest API documentation or any updates to the API, as parameters and settings can change.
Does not work for that model, or for GPT-3.5 or GPT-4; it gives an InvalidRequestError.
That was from GPT-4, so it might be wrong, but it sounded right.
Your first message mentioned it, so maybe it’s just not hitting the right model yet? Might give it a bit of time (few hours?) and try again. Are the other features working for you?
GPT-4 Vision is working, which is nice. I’ll try the JSON mode again in a couple hours.
According to the documentation, make sure you change the model name to gpt-3.5-turbo-1106 and state somewhere in your prompt that you want it to return JSON output. The API will throw an error if it doesn’t see the string “JSON” in the prompt.
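For example, a minimal call that satisfies both requirements might look like this (untested sketch with the v1 Python client; the prompt wording is just an illustration):
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    response_format={"type": "json_object"},
    messages=[
        # The word "JSON" has to appear somewhere in the messages,
        # otherwise the API rejects the request.
        {"role": "system", "content": "You are a helpful assistant. Reply in JSON."},
        {"role": "user", "content": "List the months that have 30 days."},
    ],
)
print(response.choices[0].message.content)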
Great first post! Hope you stick around our growing community.
Over 400,000 members so far!
I’m trying to use JSON mode with the new gpt-4-vision-preview, but it doesn’t seem to work.
I’m getting 400 Bad Request:
{
  "error": {
    "message": "1 validation error for Request\nbody -> response_format\n  extra fields not permitted (type=value_error.extra)",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}
gpt-3.5-turbo-1106 does indeed work – thanks!
But what I really need is for it to work with Vision (like @krthr). I guess I’ll wait a bit.
Based on the documentation for gpt-3.5-turbo:
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo-1106",
    "response_format": {"type": "json_object"},
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant that extracts data and returns it in JSON format."
      },
      {
        "role": "user",
        "content": "What is the weather like in Boston?"
      }
    ],
    "functions": [
      {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The city and state, e.g. San Francisco, CA"
            },
            "unit": {
              "type": "string",
              "enum": ["celsius", "fahrenheit"]
            }
          },
          "required": ["location"]
        }
      }
    ],
    "function_call": "auto"
  }'
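The same request in Python with the v1 openai client would look roughly like this (untested sketch, same parameters as the curl call above):
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "You are a helpful assistant that extracts data and returns it in JSON format."},
        {"role": "user", "content": "What is the weather like in Boston?"},
    ],
    functions=[
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    ],
    function_call="auto",
)
print(completion.choices[0].message)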
So there is no support for gpt-4-1106?
{
  "model": "gpt-3.5-turbo-1106",
  "messages": [
    {
      "role": "user",
      "content": "What is the weather like in Boston?"
    }
  ],
  "response_format": "json_object"
}
The API works (model="gpt-4-1106-preview", response_format={ "type": "json_object" }), but I’m not getting reliable results. There’s this error:
BadRequestError: {'error': {'message': "'messages' must contain the word 'json' in some form, to use 'response_format' of type 'json_object'."
which is interesting, but if I’m not direct about ‘as json’ and just include the word somewhere random, it spins for 1m+ and then returns a couple hundred ‘\n’ characters (guessing this is some sort of regex/state-machine loop it gets stuck in when it wants to break JSON formatting).
from openai import OpenAI
from google.colab import userdata

messages = [
    {"role": "system", "content": "List of months that have 30 days in json"},
]

client = OpenAI(api_key=userdata.get('OPENAI_API_KEY'))
completion = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=messages,
    response_format={"type": "json_object"}
)
print(completion.choices[0].message.content)
You have to set the response format AND you have to put the word “json” somewhere in your prompt.
+1, it looks like function calling and using the response_format="json_object" feature don’t work specifically when you’re using the vision model right now:
from openai import OpenAI

client = OpenAI()

functions = [
    {
        "name": "classify_animal",
        "description": "Classify the animal in a given image",
        "parameters": {
            "type": "object",
            "properties": {
                "type": {
                    "type": "string",
                    "description": "The type of animal",
                },
                "unit": {"type": "string", "enum": ["dog", "cat", "fish"]},
            },
            "required": ["type"],
        },
    }
]

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What animal is in the picture?"
            },
            {
                "type": "image_url",
                "image_url": base64_urlencode_image(dog)
            }
        ]
    }
]

completion = client.chat.completions.create(
    model="gpt-4-vision-preview",
    response_format="json",
    messages=messages,
    functions=functions,
    function_call="auto"
)
print(completion)
BadRequestError: Error code: 400 - {'error': {'message': '3 validation errors for Request\nbody -> function_call\n extra fields not permitted (type=value_error.extra)\nbody -> functions\n extra fields not permitted (type=value_error.extra)\nbody -> response_format\n extra fields not permitted (type=value_error.extra)', 'type': 'invalid_request_error', 'param': None, 'code': None}}
Noticed the same issue. Only the gpt-4 turbo model can accept the response_format field.
I can’t get JSON mode to work with gpt-4-vision-preview, but gpt-3.5-turbo-1106 and gpt-4-1106-preview work.
Maybe I need gpt-4-vision-1106-preview which doesn’t exist in my model list.
One thing I noticed: the message itself needs to contain the word “JSON.” Try appending that to the beginning/end of the message payload, or even embed it in the image itself!
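Something like this, for example (just a sketch of the prompt tweak; the helper name is made up):
# Hypothetical helper: make sure the word "JSON" appears in the text part of the message
def ensure_json_hint(text):
    if "json" not in text.lower():
        text = text + " Respond in JSON."
    return text

user_text = ensure_json_hint("What animal is in the picture?")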
If this hasn’t been answered with the latest version yet: I got it working like this tonight, with the new vision GPT-4.
# Function to classify an image using the OpenAI API
import logging
import requests

def classify_image_with_openai(api_key, image_path):
    # encode_image_for_api is defined elsewhere; it returns the image as a base64 string
    base64_image = encode_image_for_api(image_path)
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}"
    }
    payload = {
        "model": "gpt-4-vision-preview",
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "Provide a brief summary of this image, highlighting the main objects, the setting, any apparent activities, the mood, and notable colors or styles present. Return a detailed description of the image, including at least 10 objects present in the image as a list with the type of object and confidence level. In this format [object: percentage confidence, object2: percentage confidence]"
                    },
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{base64_image}"
                        }
                    }
                ]
            }
        ],
        "max_tokens": 300
    }
    response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
    if response.status_code == 200:
        logging.info("Image classification results received.")
        return response.json()
    else:
        logging.error(f"OpenAI API error: {response.text}")
        return None
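And a quick way to call it and pull the text out of the response (assuming the usual Chat Completions response shape; the API key variable and file path are just examples):
import os

result = classify_image_with_openai(os.environ["OPENAI_API_KEY"], "dog.jpg")
if result is not None:
    print(result["choices"][0]["message"]["content"])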
Do you mind posting the entire request, including the prompt? I still get the same error even if I mention JSON in the system message as well as the user message. response_format={"type": "json_object"} throws an error. I updated to the latest openai 1.1.1.