GPT-5, gpt-5-mini, and gpt-5-nano now available in the API

I absolutely agree. I'm trying to keep this as positive as possible, but the excitement in my team for GPT-5 in ChatGPT died as soon as the other models went away. I am optimistic that GPT-5 will be an improvement (many already say so), but by hard-removing access to prior versions for paid users, 100% of the positivity of the update has been erased. We now feel like we lost something, regardless of how GPT-5 turns out. To wit: you do not take something away from paying customers and call it an upgrade. No matter how good your new product is, the lack of choice makes it a loss. If it costs a lot to offer both products, fine, that’s what credits are for.

In addition, some of us prefer choice. I now have a black box, and what I get out of it is either good or bad. I have no control over, nor any way to improve, the black box. Having an automatic mode (e.g., GPT-5 selecting your model) is a big plus, but sometimes the customer needs to directly (and firmly) control how the product responds.

2 Likes

Apologies, Paul, can you elaborate and clarify here? Surely one-shotting something is a good thing?

1 Like

You have to ask for it. If you were to follow one of the first links for “prompting”:

By default, GPT-5 in the API does not format its final answers in Markdown

adherence to Markdown instructions specified in the system prompt can degrade over the course of a long conversation. In the event that you experience this, we’ve seen consistent adherence from appending a Markdown instruction every 3-5 user messages.

E.g., as is typical, context confusion results in reversion to post-training behavior.
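The quoted tip above ("appending a Markdown instruction every 3-5 user messages") is easy to automate. Here is a minimal sketch of that idea as plain list bookkeeping, with no API calls; the reminder wording and the interval of 4 are my assumptions, not from the docs.

```python
# Hypothetical reminder message; the exact wording is an assumption.
MARKDOWN_REMINDER = {
    "role": "system",
    "content": "Reminder: format your final answer in Markdown.",
}

def append_user_turn(messages, text, interval=4):
    """Append a user message to the conversation history.

    Every `interval` user turns, re-inject the Markdown reminder first,
    per the docs' suggestion to re-append the instruction every 3-5
    user messages to counter long-conversation drift.
    """
    user_turns = sum(1 for m in messages if m["role"] == "user")
    if user_turns and user_turns % interval == 0:
        messages.append(MARKDOWN_REMINDER)
    messages.append({"role": "user", "content": text})
    return messages
```

You would call `append_user_turn` instead of appending user messages directly, then pass the resulting list as the `messages` argument to your chat call as usual.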

I went overboard with attention-grabbing input instead of their brief prompt.


Please clarify whether you are discussing ChatGPT, and whether this is a concern relevant to the developer forum, as feedback given here only reaches other AI product users.

If you like chatting with previous models, check your ChatGPT settings:

He means: not giving user/assistant turn examples for in-context learning:

2 Likes

Agree. I spun up codex-cli to make some functional improvements to an existing project and it one-shotted every change. Small project, mind you, but still impressed so far.

Thanks Jay. Oh man, the lingo. Lol. Surely the minimum number of goes is one? :sweat_smile:

2 Likes

Dozens with GPT-2! :wink:

Seriously, though, yeah, "one-shot example" (maybe a better way to term it?) has been working better in GPT-5…

I just posted a video where someone “one-shotted” meaning they got their code in one go… as a “writer” I should be more better with the words and stuff! :wink:

Need to upgrade all my code too, but alas…

1 Like

I prefer “one-shot example”. In any case, it’s good! :sweat_smile::partying_face:

1 Like