I see that the gpt-5.1 and gpt-5.1-mini models have been released for the API. Any guess on when to expect a new gpt-5.1-nano model?
Hi!
We are still waiting for the 5.1-mini release as well. Right now we only have 5.1-mini-codex plus gpt-5-mini and 5-nano as the available options.
I’ll share any updates as soon as I hear more, but I expect it’s not far away.
Much appreciated. Note that the npm library (v6.9.0) already lists gpt-5.1-mini as an available option for model selection. If you’re going to stay light on formal online documentation, I’d suggest your team put a little extra energy into keeping the typedefs in your libraries consistent with the actual implementation. I should note that Google’s documentation and libraries are horribly complex compared with OpenAI’s, and that is a strong differentiator for you among developers like me.
One more point on these typedefs…
I’ve noticed that several options (such as reasoning, summaries, temperature, etc.) are model-dependent. That means that when switching between models, keeping some of the other arguments will result in error responses. It becomes a game of trial-and-error to figure out what is and is not supported on any given model. You could GREATLY improve this experience if you baked these inter-dependencies into the typedefs in your library – so that it became a compile-time check to see that all choices are consistent. Just a suggestion.
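A rough sketch of what that could look like in TypeScript, using hypothetical model groupings and a simplified request shape (not the actual SDK types): parameters a model family doesn’t support are typed as `never`, so a mismatched combination fails at compile time instead of coming back as an API error.

```ts
// Hypothetical model groupings and request shape, for illustration only.
type ReasoningModel = "gpt-5.1" | "gpt-5.1-mini" | "o4-mini";
type ClassicModel = "gpt-4.1" | "gpt-4o-mini";

// Reasoning models: `reasoning` is allowed, `temperature` is not.
interface ReasoningRequest {
  model: ReasoningModel;
  input: string;
  reasoning?: { effort: "minimal" | "low" | "medium" | "high" };
  temperature?: never; // setting this is a compile-time error
}

// Classic models: `temperature` is allowed, `reasoning` is not.
interface ClassicRequest {
  model: ClassicModel;
  input: string;
  temperature?: number;
  reasoning?: never;
}

type CreateRequest = ReasoningRequest | ClassicRequest;

// Compiles: reasoning is valid on a reasoning-model family.
const ok: CreateRequest = {
  model: "gpt-5.1-mini",
  input: "hello",
  reasoning: { effort: "low" },
};

// Does not compile: temperature is not supported on this model family.
// const bad: CreateRequest = { model: "gpt-5.1-mini", input: "hello", temperature: 0.7 };
```

The same pattern could gate reasoning summaries, streaming, or effort levels per family, so switching models surfaces invalid combinations in the editor rather than as 400 responses at runtime.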
Actually: yes.
That’s something that would improve the developer experience a lot.
I’ll make sure to pass this on.
You can also throw in “is ID verified”, “is tier xx”, “has a secret ‘model trust tier’”, etc. into the list of things that cannot be answered by API code attempting to validate a request and block you from even making the call. For example:
- o4-mini + stream: yes for me, maybe no for you
OpenAI could provide an endpoint similar to the models endpoint, exposing the scripted data they use to run the dashboard chat, with a prescription of supported parameters for each model. You’ll see the “chat” playground even knows whether “none” or “minimal” effort is supported per model. But so far it’s “NO APIs nor data for you!”, despite several forum topics asking for exactly that.
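For reference, here’s roughly what the existing models endpoint gives you today through the Node SDK: just identifiers and ownership, nothing about parameter support. A minimal sketch, assuming `OPENAI_API_KEY` is set in the environment.

```ts
import OpenAI from "openai";

async function main() {
  const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

  // The models endpoint returns only id / object / created / owned_by,
  // so there is nothing here that tells you which parameters a model accepts.
  for await (const model of client.models.list()) {
    console.log(model.id, model.owned_by);
  }
}

main();
```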
You can group behaviors and parameter gates by (see the sketch after this list):
- is it a reasoning model
- is it a gpt-5 model
- is it a gpt-5.1 model
- is it a computer use or code model (different reasoning summary, etc)
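A sketch of how such a grouping could be kept client-side, since none of this is exposed by the API today. The family prefixes and gate values below are illustrative placeholders you’d have to fill in and keep current yourself, which is exactly the maintenance burden the missing endpoint pushes onto developers.

```ts
// Placeholder capability gates keyed by model-family prefix.
// All values here are illustrative assumptions, not published data.
interface ModelGates {
  reasoning: boolean;        // accepts a `reasoning` block
  reasoningSummary: boolean; // supports reasoning summaries
  temperature: boolean;      // accepts `temperature`
  minimalEffort: boolean;    // supports "none" / "minimal" effort
}

// Ordered from most to least specific; matched by prefix.
const FAMILY_GATES: Array<[prefix: string, gates: ModelGates]> = [
  ["gpt-5.1-codex", { reasoning: true,  reasoningSummary: false, temperature: false, minimalEffort: true  }],
  ["gpt-5.1",       { reasoning: true,  reasoningSummary: true,  temperature: false, minimalEffort: true  }],
  ["gpt-5",         { reasoning: true,  reasoningSummary: true,  temperature: false, minimalEffort: false }],
  ["gpt-4",         { reasoning: false, reasoningSummary: false, temperature: true,  minimalEffort: false }],
];

function gatesFor(model: string): ModelGates | undefined {
  return FAMILY_GATES.find(([prefix]) => model.startsWith(prefix))?.[1];
}

// Strip parameters the family doesn't support, instead of learning about
// them from an error response after the call has already been made.
function sanitize(model: string, params: Record<string, unknown>): Record<string, unknown> {
  const gates = gatesFor(model);
  if (!gates) return params; // unknown family: send as-is and let the API decide
  const out = { ...params };
  if (!gates.temperature) delete out.temperature;
  if (!gates.reasoning) delete out.reasoning;
  return out;
}
```

Even with a map like this, account-level gates (ID verification, usage tier, any hidden trust tier) still can’t be captured client-side, which is the earlier point about validation the API alone can’t answer.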
So can you actually use gpt-5.1-mini in API calls? Did you notice any difference in latency or answer quality specific to this model? What do you think about the difference between gpt-5-mini and gpt-5.1-mini?
It would be good if they at least announced that a new nano model will be part of a release; maybe we’re waiting for something that won’t happen. Usually new models show up in the Playground, but GPT-5.1-Pro isn’t shown in the Playground yet even though it has been released.