Ah, now they changed the API docs:
From: “when calling gpt-4-vision-preview or gpt-3.5-turbo,”
to: “when calling gpt-4-1106-preview or gpt-3.5-turbo-1106, you can set response_format to { "type": "json_object" } to enable JSON mode.”
So, gpt-4-vision-preview isn’t supposed to support JSON mode.
It would be cool if, instead of json_object, we could pass the same JSON Schema that functions use. The presence of response_format would then trigger JSON mode, and the schema could act as an automatic system prompt. That way Assistant system prompts would stay cleaner, and we wouldn’t have to use awkward function calls just to get reliable output formats.
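For comparison, the function-calling workaround the post calls awkward looks roughly like this (the model name, schema, and `store_translations` function name are illustrative, and the API call is skipped when no key is configured):

```python
import json
import os

# Illustrative JSON Schema for the output we want, mirroring what a
# schema-aware response_format could accept.
translation_schema = {
    "type": "object",
    "properties": {
        "vi": {"type": "string"},
        "th": {"type": "string"},
    },
    "required": ["vi", "th"],
}

if os.getenv("OPENAI_API_KEY"):  # only hit the API when a key is configured
    from openai import OpenAI

    client = OpenAI()
    completion = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[{"role": "user", "content": "Translate 'Hello' to Vietnamese and Thai."}],
        tools=[{
            "type": "function",
            "function": {
                "name": "store_translations",  # hypothetical function name
                "description": "Record the translations keyed by language code.",
                "parameters": translation_schema,
            },
        }],
        # Force the model to "call" the function so its arguments follow the schema.
        tool_choice={"type": "function", "function": {"name": "store_translations"}},
    )
    args = json.loads(completion.choices[0].message.tool_calls[0].function.arguments)
    print(args)
```

The schema constrains the function arguments, but the result arrives in `tool_calls` rather than in `message.content`, which is the awkwardness being complained about.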
def chat():
    completion = openai.chat.completions.create(
        model="gpt-4-1106-preview",
        response_format={"type": "json_object"},
        messages=[
            {"role": "user", "content": "Translate this message to Vietnamese, Thai, Lao, and Khmer: 'Hello, nice to meet you'. Reply as a JSON object keyed by language code."}
        ]
    )
    print(completion.choices[0].message.content)
This function works well for me.
response:
{
  "vi": "Xin chào, rất vui được gặp bạn",
  "th": "สวัสดีครับ/ค่ะ ยินดีที่ได้พบคุณ",
  "lo": "ສະບາຍດີ, ດີໃຈທີ່ໄດ້ຮູ້ຈັກ",
  "km": "ជំរាបសួរ, រីករាយដែលបានជួបអ្នក"
}
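To consume that reply in code, the message content can be parsed with json.loads. A minimal sketch using the sample payload above:

```python
import json

# Sample payload copied from the response above.
raw = '''{
  "vi": "Xin chào, rất vui được gặp bạn",
  "th": "สวัสดีครับ/ค่ะ ยินดีที่ได้พบคุณ",
  "lo": "ສະບາຍດີ, ດີໃຈທີ່ໄດ້ຮູ້ຈັກ",
  "km": "ជំរាបសួរ, រីករាយដែលបានជួបអ្នក"
}'''

translations = json.loads(raw)
print(translations["vi"])  # Xin chào, rất vui được gặp bạn
```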
My current workaround is chaining two API calls together. First I fire off a request to gpt-4-vision-preview with clear instructions on which details to extract from the image, but without specifying a JSON structure. Using that response, I make a second request to gpt-4-1106-preview with "response_format": { "type": "json_object" } and a clearly defined JSON structure in the "role": "system" message. It isn’t bulletproof; some details might be lost between interpretations, but it hasn’t failed me yet. I get parsable JSON with the expected content.
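The two-step chain described above can be sketched like this (the prompts and the receipt example are illustrative, and the calls only run when an API key is present):

```python
import os

def describe_image(client, image_url):
    """Step 1: ask gpt-4-vision-preview for the details as free text."""
    r = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "List the items and prices shown in this image."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return r.choices[0].message.content

def to_json(client, description):
    """Step 2: have gpt-4-1106-preview restructure the text as JSON."""
    r = client.chat.completions.create(
        model="gpt-4-1106-preview",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": 'Return JSON shaped like {"items": [{"name": "...", "price": 0}]}.'},
            {"role": "user", "content": description},
        ],
    )
    return r.choices[0].message.content

if os.getenv("OPENAI_API_KEY"):  # only call the API when a key is configured
    from openai import OpenAI

    client = OpenAI()
    text = describe_image(client, "https://example.com/receipt.png")  # hypothetical URL
    print(to_json(client, text))
```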
This thread really helped me get my Ruby code working, so I thought I would contribute my code in case it helps others.
require 'net/http'
require 'uri'
require 'json'

url = URI('https://api.openai.com/v1/chat/completions')
api_key = ENV['OPEN_AI_KEY']

headers = {
  'Content-Type' => 'application/json',
  'Authorization' => "Bearer #{api_key}",
}

# Define the message parameters
messages = [
  {
    "role": "system",
    "content": "Return the output as JSON for a workout plan for a user based on the details provided by user......"
  },
  {
    "role": "user",
    "content": "Please create a 30 day workout designed for the user details below..."
  },
]

# Build the request body
request = Net::HTTP::Post.new(url)
request.body = {
  model: 'gpt-4-1106-preview',
  messages: messages,
  temperature: 1,
  max_tokens: 3000,
  top_p: 1,
  frequency_penalty: 0,
  presence_penalty: 0,
  response_format: { "type": "json_object" }
}.to_json
request.initialize_http_header(headers)

# Send the request
begin
  response = Net::HTTP.start(url.host, url.port, use_ssl: true) do |http|
    http.request(request)
  end
rescue StandardError => e
  puts "Error during HTTP request: #{e.message}"
end

# Print the response
if response
  if response.is_a?(Net::HTTPSuccess)
    puts response.body
  else
    puts "Error: #{response.code} - #{response.message}"
  end
else
  puts "No response received"
end
Can I fine-tune a model with the “assistant” messages as JSON objects? If yes, how?
You just need to mention JSON in some form in your prompt, e.g.: “format your output as JSON.”
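To sketch the fine-tuning question above: the training data is ordinary chat-format JSONL, and nothing stops the assistant turn from being a JSON string itself. A minimal illustrative example (the keys and content are made up):

```python
import json

# One illustrative fine-tuning example: the assistant turn is itself a JSON string.
example = {
    "messages": [
        {"role": "system", "content": "Reply in JSON with keys 'city' and 'weather'."},
        {"role": "user", "content": "It is sunny in Hanoi."},
        {"role": "assistant", "content": json.dumps({"city": "Hanoi", "weather": "sunny"})},
    ]
}

# Fine-tuning files are JSONL: one such object per line.
line = json.dumps(example)
print(line)
```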
Yes, this seems like the most reliable option until they add json_object support to the vision API. Thanks for posting your thoughts here, which confirm mine!
I just released a new YouTube tutorial on how to get a response in JSON format.
Also, see my GitHub repository with full code for the tutorial.
Python:

import os
from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment by default;
# passing it explicitly also works.
client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))

completion = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Your response should be in JSON format."},
        {"role": "user", "content": "Hello!"}
    ],
    response_format={"type": "json_object"}
)

print(completion.choices[0].message.content)
Node:

const OpenAI = require("openai");

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

async function main() {
  const completion = await openai.chat.completions.create({
    model: "gpt-4-1106-preview",
    messages: [
      {
        role: "system",
        content: "You are a helpful assistant. Your response should be in JSON format.",
      },
      { role: "user", content: "Hello!" },
    ],
    response_format: { type: "json_object" },
  });

  console.log(completion.choices[0].message.content);
}

main();
Hey, I’m wondering whether I can use JSON mode with a fine-tuned model?
I’m trying to use the response_format in C# (Semantic Kernel library). This doesn’t work for me:
var requestSettings = new AIRequestSettings
{
    ExtensionData = {
        { "max_tokens", completionMaxTokenCount },
        { "temperature", (double)temperature },
        { "top_p", topP },
        {
            "response_format", new Dictionary<string, object>
            {
                { "type", "json_object" }
            }
        }
    }
};

var textClassificationFunction = kernel.CreateSemanticFunction(prompt);
var result = await textClassificationFunction.InvokeAsync(text, kernel, requestSettings: requestSettings);
I’m using GPT-4-Turbo (1106 preview)
Any idea why this doesn’t work?
@splattne Did you get any error? Also, make sure to have the word ‘JSON’ in your system prompt.
@kjordan No error; the response is JSON, but wrapped in ```json … ``` code fences.
I have the word JSON in the prompt, even with an example at the end of the prompt.
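Until the stray fences stop appearing, one workaround is to strip them before parsing. A small helper (plain Python rather than Semantic Kernel, as a sketch of the idea):

```python
import json
import re

def parse_fenced_json(text):
    """Strip a surrounding ```json ... ``` fence if present, then parse."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)

print(parse_fenced_json('```json\n{"label": "positive"}\n```'))  # {'label': 'positive'}
```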
Is there a way to have an assistant return JSON using the response_format parameter? I understand how to do it with chat completions (a parameter of the client.chat.completions.create method), but the same parameter is not available in client.beta.assistants.create.
How can I do that?
Hi and welcome to the Developer Forum!
I do not think this is a feature of the assistants system at the moment.
Could you take a look at the following issue about the same problem with the beta Assistants?
new response_format field not permitted with gpt-4-vision-preview · Issue #469 · openai/openai-node (https://github.com/openai/openai-node/issues/469)
Guys, may I ask what the impact of JSON mode is on techniques such as chain-of-thought (CoT) and similar? Is it added as another layer on top of already generated, context-rich prompts, or does it actually reduce the model’s “working memory”?
Thanks, this worked after looking for a resolution for hours!
How can I add more parameters, e.g. nested lists and arrays, if I am using just the OpenAI JSON-mode API?
Will it avoid hallucinating and always return JSON, as instructed?
I found a good way of doing that:
getting the response in a structured format with the new Assistants API (not chat completions).
First you need to create the assistant as usual; any model should work fine. I am using gpt-3.5-turbo-1106.
You can also add custom instructions when starting a new run.
Here’s a code example:
assistant = client.beta.assistants.create(
    name="weather assistant",
    instructions=instruction,
    model="gpt-3.5-turbo-1106",
)

thread = client.beta.threads.create()

message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="hey"
)

run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
    instructions=instruction2
)

run = client.beta.threads.runs.retrieve(
    thread_id=thread.id,
    run_id=run.id
)

# run this after ~5 seconds, once the run has completed
messages = client.beta.threads.messages.list(
    thread_id=thread.id
)

print(messages)
This is the basic code provided by OpenAI.
In instruction you can add any instructions for how you want the assistant to behave.
In instruction2 you can add a prompt like this:

instruction2 = """You are a function that only responds as JSON and only returns 3 variables in the JSON. You cannot say anything else, you cannot chat with the user directly, and you must not say anything other than the requested JSON text.
Make that text structured JSON, exactly like this:
{"weather": "weather in the text", "city": "city in the text", "suggestion": "any suggestion given by the assistant"}
"""

Please give feedback if it’s working in your case as well.
It’s working in every case I have tested.
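One caveat about the “run it after 5 seconds” step in the code above: runs are asynchronous, so a fixed sleep is fragile. A polling sketch (the interval and timeout values are arbitrary choices):

```python
import time

def wait_for_run(client, thread_id, run_id, poll_interval=1.0, timeout=60.0):
    """Poll the run until it leaves the queued/in_progress states."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
        if run.status not in ("queued", "in_progress"):
            return run  # completed, failed, cancelled, expired, or requires_action
        time.sleep(poll_interval)
    raise TimeoutError(f"run {run_id} did not finish within {timeout}s")
```

Call it in place of the single retrieve, then list the thread’s messages once run.status is "completed".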