Why am I getting the following error when trying to get a response from:

- o1
- o1-mini
- o3-mini
Sending the following request:

```json
{"model":"o1-mini","messages":[{"role":"developer","content":"# Instructions\nYour name....
```

returns this error:

```json
{
  "error": {
    "message": "Unsupported value: 'messages[0].role' does not support 'developer' with this model.",
    "type": "invalid_request_error",
    "param": "messages[0].role",
    "code": "unsupported_value"
  }
}
```
The documentation says that `developer` is a supported role.
What gives?
Answer
It turned out I was accidentally sending the `stream: true` option in the tests.
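For anyone hitting the same wall, here is a minimal sketch of the failure mode, assuming the official Python `openai` SDK; the message content and models are illustrative, and the exact error text may vary:

```python
# Minimal sketch: per the table below, o1 rejects streaming, while the
# identical non-streaming request goes through.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "developer", "content": "# Instructions\nYou are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

try:
    # Failing variant: streaming enabled on a model that does not allow it.
    stream = client.chat.completions.create(
        model="o1", messages=messages, stream=True
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")
except Exception as e:
    print(f"streaming request rejected: {e}")

# Non-streaming variant: succeeds.
response = client.chat.completions.create(model="o1", messages=messages)
print(response.choices[0].message.content)
```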
I worked through all the model classes, and here’s an overall look at what is supported and what will be rejected by the API:
| Parameter | o3-mini | o1 | o1-preview | o1-mini | gpt-4o/mini | gpt-4-turbo | gpt-4o-audio | chatgpt-4o |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| messages/system * | Yes | Yes | No | No | Yes | Yes | Yes | Yes |
| messages/developer * | Yes | Yes | No | No | Yes | Yes | Yes | Yes |
| messages/user-images | No | Yes | No | No | Yes | Yes | No | Yes |
| tools (as functions) | Yes | Yes | No | No | Yes | Yes | Yes | No |
| functions (legacy) | Yes | Yes | No | No | Yes | Yes | Yes | No |
| response_format-object | Yes | Yes | No | No | Yes | Yes | No | Yes |
| response_format-schema | Yes | Yes | No | No | Yes | No | No | No |
| reasoning_effort | Yes | Yes | No | No | No | No | No | No |
| max_tokens | No | No | No | No | Yes | Yes | Yes | Yes |
| max_completion_tokens * | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| temperature & top_p | No | No | No | No | Yes | Yes | Yes | Yes |
| logprobs | No | No | No | No | Yes | Yes | No | Yes |
| xxx_penalty | No | No | No | No | Yes | Yes | Yes | Yes |
| logit_bias (broken!) | No | No | No | No | Yes | Yes | ? | Yes |
| prediction | No | No | No | No | Yes | No | No | No |
| streaming: true | Yes | No | Yes | Yes | Yes | Yes | Yes | Yes |
| Cache discount | Yes | Yes | Yes | Yes | Yes | No | No | No |
* These parameters have compatibility translations to legacy parameters. Currently, “system” and “developer” both serve as the single “authority message” that is appropriate for the model (and thus, a new role name was not needed).
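To make the matrix concrete, here is a sketch of building request arguments that respect it. The helper function and the model groupings are my own illustration of the table above, not anything from the SDK:

```python
# Minimal sketch: build kwargs for client.chat.completions.create() that
# follow the support matrix. The groupings mirror the table and are
# assumptions about current API behavior, not SDK features.

def build_request(model: str, instructions: str, user_text: str) -> dict:
    # o1-preview and o1-mini accept neither "system" nor "developer",
    # so instructions have to travel inside a user message there.
    if model in ("o1-preview", "o1-mini"):
        messages = [{"role": "user", "content": f"{instructions}\n\n{user_text}"}]
    else:
        # "system" and "developer" are translated to whichever the model
        # expects, so either role name works on the remaining models.
        messages = [
            {"role": "developer", "content": instructions},
            {"role": "user", "content": user_text},
        ]

    kwargs = {"model": model, "messages": messages}

    # max_completion_tokens is accepted everywhere (legacy max_tokens is
    # not, on reasoning models), so prefer it unconditionally.
    kwargs["max_completion_tokens"] = 2048

    # Sampling parameters are rejected by the reasoning models.
    if not model.startswith(("o1", "o3")):
        kwargs["temperature"] = 0.7

    return kwargs

# Usage:
#   kwargs = build_request("o1-mini", "You are terse.", "Hello")
#   client.chat.completions.create(**kwargs)
```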
I’d make a “master post” maintaining this info, but new forum topics with “howto” documentation simply scroll out of view after 100 more complaints about something going down or models not working.
Sending a system message, an image, or a schema and having it silently dropped instead of producing an error would be a Bad Thing™.
Temperature and other sampling parameters are likely tuned for the reasoning application - whether that is writing a new creative poem each call or writing error-free code. Accepting them with no effect would only result in frustration.
The logit_bias parameter was only recently blocked on o1-preview. In fact, logit_bias is currently doing ABSOLUTELY NOTHING on ANY model. So you get your wish.
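If you want to verify that yourself, here is a minimal sketch, assuming the Python `openai` SDK plus `tiktoken` for token IDs (the model and the biased token are arbitrary choices). If logit_bias were working, a +100 bias would force the biased token into nearly every sampled position; a normal-looking reply means the parameter is being ignored:

```python
# Minimal sketch: a +100 logit_bias should make the biased token dominate
# the output. If the completion reads normally, logit_bias had no effect.
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.encoding_for_model("gpt-4o")  # assumes recent tiktoken with gpt-4o support
token_id = enc.encode(" banana")[0]          # arbitrary token to bias

response = client.chat.completions.create(
    model="gpt-4o",  # a model where logit_bias is accepted per the table
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    logit_bias={str(token_id): 100},  # +100 = effectively force this token
    max_completion_tokens=30,
)
print(response.choices[0].message.content)
```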