It supports a new minimal reasoning effort for faster answers, a verbosity parameter to control the length of answers, and a context window of 400K tokens.
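For the API side, here is a minimal sketch of those knobs on the Responses API, going by the announced `reasoning.effort` and `text.verbosity` parameters (the prompt and client setup are purely illustrative):

~~~python
# Minimal sketch: trying the new gpt-5 controls via the Responses API.
# Assumes the announced reasoning.effort / text.verbosity parameters.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5",
    input="Name three uses for a paperclip.",  # illustrative prompt
    reasoning={"effort": "minimal"},  # new minimal effort for faster answers
    text={"verbosity": "low"},        # new verbosity control for answer length
)
print(response.output_text)
~~~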
Using gpt-5-chat-latest, I didn’t have the greatest start in “production” (don’t worry, nothing critical):
~~~
openai.NotFoundError: Error code: 404 - {'error': {'message': 'functions is not supported in this model. For a list of supported models, refer to https://platform.openai.com/docs/guides/function-calling#models-supporting-function-calling.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
~~~
then with gpt-5:
~~~
openai.BadRequestError: Error code: 400 - {'error': {'message': "Unsupported value: 'temperature' does not support 0.2 with this model. Only the default (1) value is supported.", 'type': 'invalid_request_error', 'param': 'temperature', 'code': 'unsupported_value'}}
~~~
Guess I need to review the docs! A bit of a migration to do …
(“functions” is now fully deprecated, you must use “tools”, etc. … one of the risks of vibe coding)
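For anyone hitting the same two errors, here is a hedged sketch of the migration: move the deprecated `functions` argument to `tools` and drop the non-default `temperature`. The weather tool here is purely hypothetical, just to show the shape of a `tools` entry:

~~~python
from openai import OpenAI

client = OpenAI()

# Old style, now rejected: functions=[...], temperature=0.2
# New style: wrap each function definition in a "tools" entry.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "What's the weather in Lisbon?"}],
    tools=tools,
    # no temperature argument: gpt-5 only accepts the default value of 1
)
print(response.choices[0].message)
~~~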
And what about “The requested model ‘gpt-5’ cannot be used with the Assistants API.”? The model description states that it supports the Assistants API (v1/assistants).
Here is an alternate CommonMark code fence you might not be aware of, so the forum can display my prompt suggestion intact:
Use triple tildes as the code fence, with the code type on the first line before the code.
Otherwise the markdown renderer corrupts it.
Example:
~~~python
print("Hello GPT")
~~~
Additionally, the “settings” panel in the chat playground still allows setting temperature, and then the call is refused, as this is no longer under your control. For the existing reasoning models, the parameter is dropped automatically on model selection.
It is a “reasoning” model, on Responses, always demanding “store” to scarf up and persist your data, with no corresponding “delete” in logging. The playground doesn’t play the “previous_response_id” game either; it should be reusing “encrypted reasoning”, or dropping it if not requested with an “include”.
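If you want to avoid stored state entirely, here is a sketch of the stateless pattern, assuming the documented `store=False` plus `include=["reasoning.encrypted_content"]` combination works as advertised on gpt-5 (the prompts are illustrative):

~~~python
from openai import OpenAI

client = OpenAI()

# First turn: opt out of server-side persistence and ask for the
# encrypted reasoning items back in the output.
first = client.responses.create(
    model="gpt-5",
    input="Compare B-trees and LSM-trees in two sentences.",
    store=False,
    include=["reasoning.encrypted_content"],
)

# Second turn: feed the prior output items back in ourselves,
# instead of relying on previous_response_id and stored state.
followup = client.responses.create(
    model="gpt-5",
    input=first.output + [{"role": "user", "content": "Which for write-heavy workloads?"}],
    store=False,
    include=["reasoning.encrypted_content"],
)
print(followup.output_text)
~~~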
This thing does critical analysis like nothing I’ve ever seen. I asked it to perform a detailed analysis of a PDF (a somewhat biased one). While gpt-4.1 just analyzed what the doc was about (including images), gpt-5-mini recognized the bias and actually critiqued it.
The improvement in coding ability from 4 to 5 is insane. Found an issue in the playground, though: when generating code, the UI is not recognising the entire output as code (and thus not making it easily copyable); it’s only wrapping a chunk of the code in the copyable section.
I would like to suggest a more user-friendly approach to model transitions in ChatGPT, especially now with the arrival of GPT-5.
My suggestion:
Free users: Please keep GPT-4.1 available temporarily, as it provides higher quality than the mini models. Even if it is more expensive for you, it would only be until the launch of GPT-5 Mini, avoiding a downgrade in experience for free users.
Plus users: Offer GPT-4.1, o3 (reasoning model), and GPT-4.5. This ensures real variety, covers all user profiles, and avoids redundancy between very similar models.
As soon as GPT-5 Mini becomes available for free users, you can replace 4.1 with 5 Mini, maintaining high quality for everyone.
The GPT-5 introduction on the Models page clearly states that it supports the Assistants API, but this is not actually the case. Could you please look into this?
I absolutely agree. Trying to keep this as positive as possible, but the excitement in my team for GPT-5 died as soon as the other models went away. I am optimistic that GPT-5 is going to be an improvement, many seem to say so already, but by hard-removing access to prior versions for paid users, 100% of the positivity of the update has been erased. We now feel like we lost something, regardless of how GPT-5 turns out. To wit: you do not take something away from paying customers and call it an upgrade. No matter how good your new product is, the lack of choice will make it a loss. If it costs a lot to offer both products, fine, that’s what credits are for.
In addition, some of us prefer choice. I now have a black box, and what I get out of it is either good or bad. I have no control over it, nor any way to improve it. Having an automatic mode (e.g. GPT-5 selecting the model for you) is a big plus, but sometimes the customer needs to directly (and firmly) control how the product responds.