Dated model GPT-4o-2024-05-13 vs updated model GPT-4o

Is it better to use a dated model like GPT-4o-2024-05-13 for production instead of the regularly updated GPT-4o?
I rely on very specific prompts tailored to my use case, so could frequent updates to the gpt-4o model adversely affect my outputs?
Additionally, I’ve read that dated models are recommended for production because their behavior is stable and the endpoints are optimized; can anyone confirm this?
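
For context, pinning a dated snapshot only means hard-coding the versioned model name instead of the alias. A minimal sketch with the official `openai` Python SDK (the prompt text is just a placeholder):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Dated snapshot: behavior only changes when you change this string.
PINNED_MODEL = "gpt-4o-2024-05-13"  # vs. "gpt-4o", an alias OpenAI re-points to newer snapshots

response = client.chat.completions.create(
    model=PINNED_MODEL,
    messages=[{"role": "user", "content": "Summarize this ticket in one sentence: ..."}],
)
print(response.choices[0].message.content)
```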

I am inclined toward the same conclusion. In any case, my experience is that gpt-4o periodically becomes very “stupid”: earlier it answered a similar question noticeably more “adequately”, with answers much better tied to the deeper context, but now the context the model actually works with is noticeably reduced. Even though the model remembers the earlier context, it does not use it and produces answers that ignore it. It is unclear whether this is related to updates or whether, under high load, requests are simply being routed to the gpt-4o mini model.
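
One way to at least check which model is actually serving your requests is to log the model name returned on each response and correlate any quality drop with a snapshot change; a minimal sketch, assuming the `openai` Python SDK:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)

# The response object reports the exact model that handled the request
# (e.g. a dated gpt-4o snapshot). Log it alongside your outputs so any
# regression can be matched against a change in the serving snapshot.
print(response.model)
```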

1 Like

I think it’s better to stick with the same dated model for production. This allows us to thoroughly test and understand its behavior, avoiding unexpected surprises. A model that’s constantly being tested and updated isn’t suitable for a production environment.

PS: If the updates were consistently improvements, it would be a different story. But that’s definitely not the case.
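
One way to make that thorough testing concrete is a small regression suite that always runs against the pinned snapshot; a rough pytest-style sketch (the prompt/expectation pair is a placeholder for your own cases):

```python
from openai import OpenAI

client = OpenAI()
PINNED_MODEL = "gpt-4o-2024-05-13"

# (prompt, substring the answer must contain) -- placeholders for a real suite
CASES = [
    ("Extract the invoice number from: 'Invoice INV-1042, due 2024-09-01'", "INV-1042"),
]

def test_prompt_regressions():
    for prompt, must_contain in CASES:
        reply = client.chat.completions.create(
            model=PINNED_MODEL,
            temperature=0,  # reduce run-to-run variance for the check
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        assert must_contain in reply, f"Regression on prompt: {prompt!r}"
```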

3 Likes

The newer models have better cost and inference rates, so essentially OpenAI wants you to move your workload and their compute onto the latest models, for obvious reasons, as part of their iterative deployment. You should have automated migration and testing in place within your application as they release slightly better models to remain number one against Google (Anthropic is friendly fire). OpenAI is a contender without Google’s assets, so they aim to create artificial investment in AI generally and add “pressure” to Google’s AI investment, because all AI investment by any party is AGI fodder.
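
A minimal sketch of what that automated migration testing could look like, comparing a pinned production snapshot against a candidate snapshot (the model names, prompt, and naive exact-match comparison are just placeholders for a real eval):

```python
from openai import OpenAI

client = OpenAI()

CURRENT = "gpt-4o-2024-05-13"    # what production is pinned to
CANDIDATE = "gpt-4o-2024-08-06"  # newer snapshot being evaluated

PROMPTS = [
    "Classify this support ticket as billing, technical, or other: ...",
]

def run(model: str, prompt: str) -> str:
    return client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

for prompt in PROMPTS:
    old, new = run(CURRENT, prompt), run(CANDIDATE, prompt)
    # Swap this naive comparison for whatever eval fits your use case
    # (exact match, rubric scoring, an LLM judge, etc.).
    print(f"match={old == new}\nprompt={prompt}\nold={old}\nnew={new}\n")
```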