O1 models do not support 'system' role in chat completion?

I just tried slotting the o1-preview and o1-mini models into my chat completion code and am getting this error on both:

Error occurred (getChatCompletionOpenAI): Unsupported value: ‘messages[0].role’ does not support ‘system’ with this model.

For all the other OpenAI models, this is how I initialize system and user roles:

// Initialize the $systemRole array with the system message
$systemRole = array(
    array("role" => "system", "content" => $systemMessage)
);

// Define the new user message (question + context docs)
$newUserMessage = array("role" => "user", "content" => $prompt);

This is no longer the case?

Here is the only documentation given:

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d "{
    \"model\": \"o1-preview\",
    \"messages\": [
      {
        \"role\": \"user\",
        \"content\": \"Write a bash script that takes a matrix represented as a string with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format.\"
      }
    ]
  }"

https://platform.openai.com/docs/guides/reasoning/quickstart?lang=curl

What am I missing here?

3 Likes

System messages are currently not supported under the o1 models.

Beta limitations

  • Message types: user and assistant messages only, system messages are not supported.

It seems like this will change in the future though.
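
For anyone who wants a quick workaround in the meantime, here is a minimal sketch of folding the system message into the first user message. This assumes the official openai Python package; the model name, system text, and prompt below are placeholders, not anything from this thread:

# Sketch: fold the system prompt into the first user turn for o1 models
from openai import OpenAI

client = OpenAI()

model = "o1-preview"
system_message = "You answer questions using the provided context documents."
prompt = "Write a bash script that transposes a matrix given as '[1,2],[3,4],[5,6]'."

if model.startswith("o1"):
    # o1 (beta) rejects the "system" role, so prepend its content to the user message
    messages = [{"role": "user", "content": system_message + "\n\n" + prompt}]
else:
    messages = [
        {"role": "system", "content": system_message},
        {"role": "user", "content": prompt},
    ]

response = client.chat.completions.create(model=model, messages=messages)
print(response.choices[0].message.content)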

4 Likes

OK, thanks.

FYI for anybody else looking at this:

  • Error occurred (getChatCompletionOpenAI): Unsupported parameter: ‘max_tokens’ is not supported with this model. Use ‘max_completion_tokens’ instead.
  • Error occurred (getChatCompletionOpenAI): Unsupported value: ‘temperature’ does not support 0 with this model. Only the default (1) value is supported.

Not sure if this is mentioned anywhere.

Oops. Guess I should read the documentation.

  • Other: temperature, top_p and n are fixed at 1, while presence_penalty and frequency_penalty are fixed at 0.
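
In code that translates to something like this (again just a sketch, assuming the official openai Python package; the message and token count are placeholders):

# Sketch: o1 request parameters -- max_completion_tokens instead of max_tokens,
# and no temperature/top_p/n overrides (they are fixed at 1 in the beta)
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "Summarize the beta limitations."}],
    max_completion_tokens=2000,  # max_tokens is rejected for o1 models
)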
2 Likes

Yep. Here are some of the short-circuits I’ve added, because there’s no clean, general way to transition between models mid-chat.

            # Inside the loop over the chat history: o1 models reject the
            # "system" role, so remap it to "user" here
            if self.aimodel.currentText().startswith("o1") and role == "system":
                role = "user"
                continue

            # Get the selected model name
            model_name = self.aimodel.currentText()

            # Initialize the parameter dictionary
            params = {
                'model': model_name,
                'messages': full_messages
            }

            # If the model does not start with "o1", add additional parameters
            if not model_name.startswith("o1"):
                params.update({
                    'temperature': softmax_temperature,
                    'max_tokens': response_tokens,
                    'top_p': top_p_set
                })

That is, on top of also putting back some non-streaming calling and display methods, not offering functions (and stripping them from chat history, with facsimiles left for context), stripping images out of messages, upping timeouts, greying out retry on particular message criteria, etc.

1 Like

Did you consider that maybe it is not supported because the model already has a system prompt applied on the backend? I think that is the reason.

1 Like

Correct. Replacing “system” with “user” should resolve that error for the o1 models. Note that many other parameters are not supported either, which may require changes/refactoring in your code (e.g. streaming is not supported).
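
If your messages are already built, a small sketch of that remap (assuming messages is a list of {"role": ..., "content": ...} dicts and model_name/stream are your own variables):

# Sketch: convert any "system" message to "user" and disable streaming for o1
if model_name.startswith("o1"):
    messages = [
        {**m, "role": "user"} if m.get("role") == "system" else m
        for m in messages
    ]
    stream = False  # streaming is not supported for o1 in the beta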

1 Like

On the API, you can be assured that your entire input context is scrutinized against rules and policy at every turn. Telling the AI it is a new “You are …”, or having the assistant introduce and talk about a new name it does not believe in, just invokes more reasoning tokens, specifically for categorizing or countering the “persona” or “role-play”, billed at output pricing.

I’m just realizing now that o1-preview is fundamentally broken as an API, and missing a ton of stuff:

  • no system prompt
  • no temperature setting
  • no response format (json-object)

makes this model completely unusable for me.

At least some of these features are planned to become available once the models are out of the beta phase. See here for reference.

1 Like

You can try using “assistant” as a role.

1 Like

In the docs it says: