Does choosing GPT-4o as a model for an OpenAI assistant always ensure you're using the latest version of GPT-4o?

The short name gpt-4o is an alias, a pointer. It is described as the “currently recommended” AI model.

Using gpt-4o currently runs your API request against the model gpt-4o-2024-08-06. Besides supporting structured outputs, it is trained with an instruction hierarchy: the lowest instruction-following priority goes to roles such as tool returns and prior assistant responses, below user inputs, which sit below system instructions, which in turn sit below OpenAI’s own training.
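
You can see which snapshot the alias actually resolved to by inspecting the `model` field of the response. A minimal sketch with the Python SDK (assumes OPENAI_API_KEY is set in your environment):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Call the short alias rather than a dated snapshot
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say hi"}],
)

# The response reports the snapshot the alias resolved to,
# e.g. "gpt-4o-2024-08-06"
print(resp.model)
```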

gpt-4o-2024-11-20 was released rather quietly, at the same pricing, and exposes no new features. The hints we have are that it is described as more “creative” or “conversational”, and that it does write more exhaustively.

gpt-4o-2024-05-13, the initial release of the series, does not accept or enforce a structured response_format as an API parameter. It costs a bit more to use, and can produce considerably different results, often different enough to justify reverting.
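
If you rely on structured outputs, pinning the dated snapshot makes that dependency explicit. A hedged sketch (the schema and its name are purely illustrative):

```python
from openai import OpenAI

client = OpenAI()

# Pin the dated snapshot; gpt-4o-2024-05-13 would not honor this response_format
resp = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "Extract the person: Alice is 30."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "person",  # illustrative schema name
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"},
                },
                "required": ["name", "age"],
                "additionalProperties": False,
            },
        },
    },
)

print(resp.choices[0].message.content)  # JSON conforming to the schema
```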

The transition of the alias pointer to -08-06 took over a month, but it seemed inevitable once it was clear there were no breaking bugs of the sort gpt-4-turbo-1106 had. I expect the newer model will likewise be judged advantageous and eventually become the pointer’s target.

OpenAI continues to alter past “snapshot” models, more so than Azure deployments do, so building your application on a specific version doesn’t make its operation absolutely stable, but it does let YOU decide when to make the larger version switch.
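
One hypothetical way to keep that decision in your hands is to pin the snapshot in your own configuration and only change it deliberately, for example via an environment variable (the variable name here is made up):

```python
import os
from openai import OpenAI

# Default to a pinned snapshot; override APP_GPT4O_MODEL only when
# YOU decide to move to a newer version.
MODEL = os.environ.get("APP_GPT4O_MODEL", "gpt-4o-2024-08-06")

client = OpenAI()
resp = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.model)
```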
