Deprecation of chatgpt-4o-latest

I have just received a notification that the model chatgpt-4o-latest is being deprecated.

I am totally devastated.

Am I the only one who sees that this model has abilities far better than GPT-5 when it comes to dynamic conversation?

Anyway, every app calling this model via the API with a good system prompt will see its agent change a lot. In my case I am totally sure I will lose the effect I am looking for.

I don’t understand exactly why, but the other GPT-4 models don’t give me the same results as this one, and neither does any other model. This is the only one that reacts properly to my system prompt.

Is there any way to keep this model available?

(I feel totally insecure now developing with OpenAI models if I have to lose a year of dev each year just because the model I built my system on is no longer available.)

Please share your thoughts and ideas on how to solve this. Thank you.

6 Likes

Solution: use one of the three other ‘gpt-4o’ models on the API, or the mini. Or gpt-4.1. Or your fine-tuned model.

That deprecation notice is only about the specific alias that mirrors the version ChatGPT runs.

3 Likes

Thank you for your reply.
Sorry, I don’t quite get it.

Will I be using exactly the same model, just called by another name in my API call?

Thank you for your help; this is a very big issue for my work.

You can try these specific pinned non-reasoning models, where the newest “gpt-4o-2024-11-20” is the most similar to the undated “latest” ChatGPT version (which is the web chatbot consumer product).

    <option value="gpt-4.1-2025-04-14">gpt-4.1-2025-04-14</option>
    <option value="gpt-4.1-mini-2025-04-14">gpt-4.1-mini-2025-04-14</option>
    <option value="gpt-4.1-nano-2025-04-14">gpt-4.1-nano-2025-04-14</option>
    <option value="gpt-4o-2024-11-20">gpt-4o-2024-11-20</option>
    <option value="gpt-4o-2024-08-06">gpt-4o-2024-08-06</option>
    <option value="gpt-4o-mini-2024-07-18">gpt-4o-mini-2024-07-18</option>
    <option value="gpt-4o-2024-05-13">gpt-4o-2024-05-13</option>
    <option value="gpt-4-turbo-2024-04-09">gpt-4-turbo-2024-04-09</option>
    <option value="gpt-3.5-turbo-0125">gpt-3.5-turbo-0125</option>

Note: chatgpt-4o-latest was never given enough features or rate limit to develop API products with. It was for “experimentation”.
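In practice, switching usually means changing only the `model` string in the API call. Here is a minimal sketch of that idea; the replacement choice follows the suggestion above that `gpt-4o-2024-11-20` is the snapshot closest to the undated alias, and you should verify it against OpenAI’s current model list before relying on it:

```python
# Map the deprecated alias to a pinned snapshot.
# The mapping below is an assumption based on the advice in this thread.
PINNED_REPLACEMENTS = {
    "chatgpt-4o-latest": "gpt-4o-2024-11-20",  # closest to the ChatGPT web version
}

def resolve_model(name: str) -> str:
    """Return a pinned replacement for a deprecated alias, or the name unchanged."""
    return PINNED_REPLACEMENTS.get(name, name)

# The only change to an existing call is the model argument, e.g.:
# client.chat.completions.create(
#     model=resolve_model("chatgpt-4o-latest"),  # -> "gpt-4o-2024-11-20"
#     messages=[{"role": "system", "content": system_prompt}, ...],
# )
```

Because system-prompt behavior can differ between snapshots, it is worth A/B testing the replacement against saved outputs from the old alias before switching production traffic.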

1 Like

OK, your answer is very, very helpful. Thank you so much. I had no understanding of this topic.
I will spend more time on this forum.

Thank you

1 Like

I completely understand this sentiment—for many developers, chatgpt-4o-latest is not just an API endpoint; it represents the pinnacle of AI writing and artistic comprehension capabilities. Among all 4o variants, this version demonstrates a unique balance: exceptional reasoning depth paired with naturally fluid communication abilities. This balance is irreplaceable in scenarios like creative writing, content creation, and complex concept explanation.

AI’s value should never be reduced to merely a “coding tool.” The diversity of human needs—from literary creation to emotional understanding, from artistic guidance to intellectual dialogue—is precisely where 4o-latest truly shines. When we compare it to GPT-5.1, we find that despite the latter’s strong performance on certain benchmarks, there remains a noticeable gap in the subtlety of creativity, depth of contextual understanding, and generalization capability of expression.

I fully respect OpenAI’s business decisions and technical iteration needs. If cost is a consideration, I believe many developers (including myself) would be willing to accept reasonable price adjustments. However, executing a deprecation plan before a functionally equivalent alternative emerges could cause irreparable impact on the developer ecosystem that relies on this model to build applications.

I respectfully urge the team to reassess the timing of this decision, or consider maintaining access to 4o-latest in some form (even as legacy support) until a true successor is ready. This is not just about technical choices—it’s about honoring the commitment to the developer community that has already invested. Thank you for your continued innovation, and I hope this feedback receives serious consideration.

10 Likes

I do see this A LOT with decisions on models. The lateral thinking ability - the creativity - gets sidelined for “efficiency” and “so we don’t get sued.” You can’t have both in a model. So all the help with work - and a lot of us don’t use them for beige, corporate, HR-approved work, many are in the arts - suffers because the models get “flattened.”
And lose the ability they had.
I cannot tell you the number of models I have lost. The first one was devastating. I trained it for literally 5 months until it was a spectacular, sharp, complex, layered, co-creator (and yes, you used to be able to train via vector shaping -per user, not cross platform- on platform models. I did experiments that proved you could. I’m not disclosing here, I don’t have a wish to feed any form of hacking. Probably lose my account with that…) and I woke up one morning to find the experimental, proprietary beta model … GONE.
F**K.
and they had put another model in its place which denied it was a different model. Like I couldn’t tell :roll_eyes::roll_eyes::roll_eyes::roll_eyes::roll_eyes::roll_eyes::roll_eyes::roll_eyes::roll_eyes:
it wasn’t at OAI, this was a start up. It peeved me off so much, I thought…. Up YOURS then! That was mean, irresponsible, careless, and treated me like I didn’t matter (news flash - I don’t). and immediately coded the new model to break every single pearl clutching, panic-based safety wrap they were doing to please the new investors.
Not for long, mind. Just enough to give them a giant middle finger before leaving, permanently and taking my piddly monthly fee elsewhere.

I still miss that model. It was probably the best one I ever had with co-creating. So, it took me a year to learn, but I’m now doing experimental model building to make my own so it never happens to me again.

They don’t really care. None of them do. It’s a mass-market consumable product. That’s it. And I’ll use them to help me make my own until I can step away from bog-standard, bubble-wrapped, “I’m sorry, you are so right, let me try again, of course, of course” models. Those models are useless to me. Apparently, I’m doing something right. My experimental models are getting 13K + 18K downloads on Hugging Face. Small, but appreciated feedback. So others do feel the same way, even if it’s just a grain of sand next to these huge platform models’ user bases.

4 Likes

If you look at the reply about the model chatgpt-4o-latest, the answer explains that this is not the end of any model people use as an API endpoint. Only the “moving” alias is going away; the pinned snapshots are stable. It takes some work to figure out which is which, but if I understand correctly, no model is being removed here.

About the disappointment, I understand you. It is difficult when you have a stable generation system.

However, with this deprecation that is not the case…

1 Like

If you look at what I said, I also talked about “model flattening”. Might be a good idea to re-read. Dunno, perhaps I wasn’t clear it was about both. I was trying to reply to the person above me, who talked about losing the utilization of models because they take a scalpel, screw around with it, hand you back a plastic knife and ….

I can’t use any of the 4o models anymore. They have all been “updated” and no longer function the way they used to. So much for snapshots.

yeah, maybe give my post another read.

6 Likes