I just tried slotting the o1-preview and o1-mini models into my chat completion code and am getting this error on both:
Error occurred (getChatCompletionOpenAI): Unsupported value: 'messages[0].role' does not support 'system' with this model.
For all the other OpenAI models, this is how I initialize system and user roles:
// Initialize the $systemRole array with the system message
$systemRole = array(
    array("role" => "system", "content" => $systemMessage)
);

// Define the new user message (question + context docs)
$newUserMessage = array("role" => "user", "content" => $prompt);
Is this no longer the case?
Here is the only documentation given:
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d "{
        \"model\": \"o1-preview\",
        \"messages\": [
          {
            \"role\": \"user\",
            \"content\": \"Write a bash script that takes a matrix represented as a string with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format.\"
          }
        ]
      }"
Error occurred (getChatCompletionOpenAI): Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.
Error occurred (getChatCompletionOpenAI): Unsupported value: 'temperature' does not support 0 with this model. Only the default (1) value is supported.
Not sure if this is mentioned anywhere.
Oops. Guess I should read the documentation.
Other: temperature, top_p and n are fixed at 1, while presence_penalty and frequency_penalty are fixed at 0.
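Putting those restrictions together, a request the o1 models will accept looks roughly like this (a minimal sketch using the openai Python SDK; the prompt and token limit are illustrative):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        # user role only: "system" is rejected by these models
        {"role": "user", "content": "Summarize the matrix transpose algorithm."}
    ],
    max_completion_tokens=4096,  # replaces max_tokens for o1 models
    # no temperature / top_p / n / penalties: they are fixed at defaults
)
print(response.choices[0].message.content)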
Yep, here are some examples of short-circuits I've added, because there's no clean, general way to transition between models mid-chat.
if self.aimodel.currentText().startswith("o1") and role == "system":
    role = "user"
    continue
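For context, that check runs inside the loop that rebuilds the outgoing message list. Stripped of my UI plumbing, the idea is roughly this (a sketch; build_messages and history are stand-ins for my app's own structures):

def build_messages(model_name, history):
    # history is a list of (role, content) pairs accumulated in the chat
    full_messages = []
    for role, content in history:
        if model_name.startswith("o1") and role == "system":
            role = "user"  # o1 models reject the system role outright
        full_messages.append({"role": role, "content": content})
    return full_messages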
# Get the selected model name
model_name = self.aimodel.currentText()

# Initialize the parameter dictionary
params = {
    'model': model_name,
    'messages': full_messages
}

# If the model does not start with "o1", add additional parameters
if not model_name.startswith("o1"):
    params.update({
        'temperature': softmax_temperature,
        'max_tokens': response_tokens,
        'top_p': top_p_set
    })
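With params assembled, the call itself is just a splat of that dictionary (sketch; client is an openai.OpenAI() instance created elsewhere in my code):

response = client.chat.completions.create(**params)
reply = response.choices[0].message.content

An else branch adding 'max_completion_tokens' would be the o1-side counterpart of the max_tokens line above.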
That is, once you've also put back some non-streaming calling and display methods, stopped offering functions (stripping them from chat history and substituting facsimiles for context), stripped images out of messages, upped timeouts, greyed out retry on particular message criteria, etc.
Correct. Replacing "system" with "user" should resolve that error for the o1 models. Note that many other parameters are not supported either, which may require changes/refactoring in your code (e.g., streaming is not supported).
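Something like the following sanitizer captures both fixes in one place (a sketch, not a definitive list of restrictions; extend the dropped-key set as you hit further errors):

# Normalize a chat-completions parameter dict for o1 models:
# remap the system role and drop/rename unsupported parameters.
UNSUPPORTED_O1_PARAMS = {"temperature", "top_p", "n", "presence_penalty",
                         "frequency_penalty", "max_tokens", "stream"}

def sanitize_for_o1(params):
    if not params["model"].startswith("o1"):
        return params  # non-o1 models: leave untouched
    clean = {k: v for k, v in params.items() if k not in UNSUPPORTED_O1_PARAMS}
    if "max_tokens" in params:
        # carry the token limit over under its new name
        clean["max_completion_tokens"] = params["max_tokens"]
    clean["messages"] = [
        {**m, "role": "user"} if m.get("role") == "system" else m
        for m in params["messages"]
    ]
    return clean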
On the API, you can be assured that your entire input context is doubted and examined against rules and policy at every turn. Telling the AI it has a new "You are..." identity, or having the assistant introduce and talk about a new name it doesn't believe, just invokes more reasoning tokens, specifically for categorizing or countering "persona" or "role-play", billed at output pricing.