`developer` role not accepted for o1/o1-mini/o3-mini

Why am I getting the following error when trying to get a response from:

  • o1
  • o1-mini
  • o3-mini
Sending the following:

{"model":"o1-mini","messages":[{"role":"developer","content":"# Instructions\nYour name....

…and getting back:

{
  "error": {
    "message": "Unsupported value: 'messages[0].role' does not support 'developer' with this model.",
    "type": "invalid_request_error",
    "param": "messages[0].role",
    "code": "unsupported_value"
  }
}

The documentation says that developer is a supported role.

  • What gives?

Answer

  • I was accidentally sending the stream: true parameter in my tests (o1 does not support streaming, so those requests failed regardless of the role). With it removed:

o1

  • developer works :heavy_check_mark:

o1-mini

  • developer DOES NOT work :x:

o3-mini

  • developer works :heavy_check_mark:
  • streaming works :heavy_check_mark:
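
In SDK terms, this now works (a minimal sketch with the openai Python SDK; the message contents are placeholders):

from openai import OpenAI

client = OpenAI()

# developer is accepted by o1 and o3-mini (not o1-mini), and for o1
# the request must not set stream: true.
response = client.chat.completions.create(
    model="o1",
    messages=[
        {"role": "developer", "content": "# Instructions\nAnswer concisely."},
        {"role": "user", "content": "Hello!"},
    ],
)
print(response.choices[0].message.content)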
1 Like

A similar issue is being brought up in this thread: How is Developer Message Better than System Prompt - #8 by eeseljose

You’re right - and it looks like it stems from the documentation not being correct.

For anyone looking for the answer:

  • You can’t use developer yet

@multitechvisions o1-mini does not support developer messages, only o1 and o3-mini. Can you try with "model":"o1"?

2 Likes

o1 & o3-mini worked

Cheers mate

  • On previous attempts, I had accidentally left the stream: true parameter in the request.

Is there a place where we can cleanly see what restrictions each model has?

For example: where in the documentation does it say that these reasoning models don’t use the temperature setting?

It would be very helpful if the model descriptions had a clear bulleted list of the restrictions and API parameters/features that don’t work.

3 Likes

You know what would be really helpful for these undocumented API parameters that are not supported:

  • STOP throwing an error
  • instead, let the request through
    • just ignore the unsupported parameters

This way things will at least go through. :pray:

  • Would make all this fiddly stuff WAY easier.

Like… it’s kinda ridiculous to have to resort to stuff like this:
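
For instance, a client-side guard that drops whatever parameter the API rejects and retries - a sketch with the openai Python SDK, assuming the SDK exposes the error body's param and code fields (current versions do):

from openai import OpenAI, BadRequestError

client = OpenAI()

def create_ignoring_unsupported(**kwargs):
    """Call chat.completions.create, dropping any top-level parameter
    the API rejects as unsupported, then retrying."""
    for _ in range(len(kwargs)):  # at most one retry per parameter
        try:
            return client.chat.completions.create(**kwargs)
        except BadRequestError as e:
            # e.g. param="temperature", code="unsupported_parameter"
            if e.code in ("unsupported_parameter", "unsupported_value") and e.param in kwargs:
                kwargs.pop(e.param)  # drop the offender and try again
            else:
                raise  # not something we can fix by dropping a parameter
    raise RuntimeError("ran out of parameters to drop")

# Usage: temperature is dropped for o3-mini instead of failing the call.
response = create_ignoring_unsupported(
    model="o3-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.7,
)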

I worked through all the model classes, and here’s an overall look at what is supported and what will be rejected by the API:

Parameter                 o3-mini  o1   o1-preview  o1-mini  gpt-4o/mini  gpt-4-turbo  gpt-4o-audio  chatgpt-4o
messages/system *         Yes      Yes  No          No       Yes          Yes          Yes           Yes
messages/developer *      Yes      Yes  No          No       Yes          Yes          Yes           Yes
messages/user-images      No       Yes  No          No       Yes          Yes          No            Yes
tools (as functions)      Yes      Yes  No          No       Yes          Yes          Yes           No
functions (legacy)        Yes      Yes  No          No       Yes          Yes          Yes           No
response_format-object    Yes      Yes  No          No       Yes          Yes          No            Yes
response_format-schema    Yes      Yes  No          No       Yes          No           No            No
reasoning_effort          Yes      Yes  No          No       No           No           No            No
max_tokens                No       No   No          No       Yes          Yes          Yes           Yes
max_completion_tokens *   Yes      Yes  Yes         Yes      Yes          Yes          Yes           Yes
temperature & top_p       No       No   No          No       Yes          Yes          Yes           Yes
logprobs                  No       No   No          No       Yes          Yes          No            Yes
xxx_penalty               No       No   No          No       Yes          Yes          Yes           Yes
logit_bias (broken!)      No       No   No          No       Yes          Yes          ?             Yes
prediction                No       No   No          No       Yes          No           No            No
streaming: True           Yes      No   Yes         Yes      Yes          Yes          Yes           Yes
Cache discount            Yes      Yes  Yes         Yes      Yes          No           No            No
  • * These parameters have compatibility translations to legacy parameters. Currently, “system” and “developer” both serve as the single “authority message” that is appropriate for the model (and thus, a new role name was not needed).
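
The post above doesn't show the test harness, but a matrix like this can be produced mechanically: send a minimal request per model with each parameter set to some probe value, and record which combinations are rejected. A rough sketch (the model list and probe values are illustrative; every "Yes" costs a tiny completion):

from openai import OpenAI, BadRequestError

client = OpenAI()

MODELS = ["o3-mini", "o1", "o1-preview", "o1-mini", "gpt-4o-mini"]
PROBES = {  # arbitrary minimal values, only to test acceptance
    "temperature": 0.5,
    "top_p": 0.5,
    "logprobs": True,
    "presence_penalty": 0.1,
    "logit_bias": {"100": 1},
    "reasoning_effort": "low",
    "max_tokens": 16,
    "max_completion_tokens": 16,
}

for model in MODELS:
    for param, value in PROBES.items():
        try:
            client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": "Hi"}],
                **{param: value},
            )
            print(f"{model:12} {param:22} Yes")
        except BadRequestError as e:
            print(f"{model:12} {param:22} No ({e.code})")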

I’d make a “master post” maintaining this info, but new forum topics with “howto” documentation simply scroll out of view after 100 more complaints of something going down or models not working.


Sending a system message, image, or schema and having it silently dropped instead of producing an error would be a Bad Thing™

Temperature and other sampling parameters are likely tuned for the reasoning application - whether that’s writing a fresh creative poem on each call or writing error-free code. If they silently had no effect when you sent them anyway, the result would be frustration.

The logit_bias parameter was only recently blocked on o1-preview. And in fact, logit_bias is currently doing ABSOLUTELY NOTHING on ANY model. So you get your wish.

4 Likes

Wow what an epic summary! :muscle: Thanks

2 Likes