With gpt-3.5-turbo-1106, when tools are passed to the Chat Completions API with JSON mode enabled, the model outputs two JSON objects, like the following:
{
"message": "Hello! How can I assist you today?"
}
{
"message": "Hello! How can I assist you today?"
}
or
{
"current_temperature": "67°F",
"average_temperature": "On average, the warmest months in Los Angeles are July through September, with average temperatures ranging from the mid-70s to mid-80s (°F). January through March are the coolest months, with average temperatures ranging from the low 50s to mid-60s (°F)."
}
{
"city": "Los Angeles",
"current_temperature": "67°F"
}
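To be clear, this doubled output is not a single valid JSON document, so it breaks any straightforward parse. A quick check (the `doubled` string below is just my assumption of the shape of the first example above):

```python
import json

# Two concatenated JSON objects, as in the first example above
doubled = (
    '{"message": "Hello! How can I assist you today?"}\n'
    '{"message": "Hello! How can I assist you today?"}'
)

try:
    json.loads(doubled)
except json.JSONDecodeError as exc:
    # json.loads parses the first object, then fails on the second
    print(exc.msg)  # Extra data
```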
If the tools are not passed to the API, the model outputs a single JSON object, which is what I want.
gpt-4-1106-preview doesn't have this issue and works as expected.
Could it be an issue with the prompt? If so, how can I fix it?
I've tried giving it an example, which worked but was inconsistent, and the model would leak the example:
{"message": "Hello! How can I assist you today?"}
{"message": "Your response goes here"}
When used in a larger system prompt with six examples, the model would always return two JSON objects, often different from each other.
This could well be a prompt issue, which I've tried to fix, but gpt-3.5-turbo-1106 is really stubborn and doesn't want to cooperate. Are there any guidelines for prompting gpt-3.5-turbo-1106 in JSON mode to achieve consistent results?
Here is the example code I'm using:
from openai import OpenAI
from openai.types.chat import ChatCompletionToolParam
from openai.types.shared_params import FunctionDefinition

client = OpenAI()

# Tool definition for a DuckDuckGo search function
function = FunctionDefinition(
    name="duckduckgo_search",
    description=(
        "A wrapper around DuckDuckGo Search. Useful for when you need to "
        "answer questions about current events. Input should be a search query."
    ),
    parameters={
        "type": "object",
        "properties": {
            "query": {"description": "search query to look up", "type": "string"}
        },
        "required": ["query"],
    },
)
toolc = ChatCompletionToolParam(type="function", function=function)
print(toolc)

completion = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant that returns json of your response in a message key",
        },
        {
            "role": "user",
            "content": "hello",
        },
    ],
    model="gpt-3.5-turbo-1106",
    # model="gpt-4-1106-preview",  # works as expected
    response_format={"type": "json_object"},
    tools=[toolc],
)
print(completion.choices[0].message.content)
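In the meantime, a client-side workaround I'm considering (not a fix, and the helper below is my own hypothetical code, not from the API) is to parse only the first JSON object out of the response with json.JSONDecoder.raw_decode, which stops at the end of the first complete value:

```python
import json


def first_json_object(text: str) -> dict:
    """Parse only the first JSON object from a string that may contain several.

    raw_decode does not skip leading whitespace, hence the lstrip().
    """
    decoder = json.JSONDecoder()
    obj, _end = decoder.raw_decode(text.lstrip())
    return obj


# Example with the doubled output shape from above
doubled = '{"message": "Hello!"}\n{"message": "Hello!"}'
print(first_json_object(doubled))  # {'message': 'Hello!'}
```

This silently discards the second object, so it only papers over the problem, but it keeps the rest of my pipeline working while the duplication persists.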