What are the Differences between gpt-3.5-turbo models

Hello,

I was looking at the models on the OpenAI documentation page, and it mentions several different models like gpt-3.5-turbo-16k, gpt-3.5-turbo-0301, etc.

I can’t really tell the main difference between the models. I noticed that 1106 can be used for retrieval of data from files, but in general, what is the main difference between each model, and when should each of them be used?

Thanks in advance!

The -0613/-1106 models are essentially snapshots of the gpt-3.5-turbo model at that date. They let users query against a fixed state of the model, so the returned output stays consistent over time.

The 16k models have a larger context window of allowed tokens (16k). The standard gpt-3.5-turbo models usually mentioned have a 4k context window.
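To make the alias vs. snapshot distinction concrete, here is a minimal sketch assuming the Python `openai` v1 client and a made-up prompt; the only difference between the two requests is the `model` string you pin to:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Floating alias: OpenAI may repoint this to a newer model over time.
resp_alias = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize this thread in one sentence."}],
)

# Dated snapshot: pins requests to one specific model version.
resp_pinned = client.chat.completions.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "Summarize this thread in one sentence."}],
)

# The response echoes back which model actually served the request.
print(resp_alias.model, resp_pinned.model)
```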


Thanks for your reply!

What do you mean by a fixed state of the model to query against?

OpenAI makes changes to the models from time to time and sometimes adds new capabilities. For example, the 1106 model, which is the most recent one, has improved features as mentioned in its description, while 0613’s performance in those areas might not be at the same level.


Okay, I got you.

But if so, why are there two models: gpt-3.5-turbo (which, as mentioned in the description, points to gpt-3.5-turbo-0613) and gpt-3.5-turbo-0613 [Legacy], which is basically the same model? Why are there two of the same model at the same fixed state?

And what’s the difference between them, assuming there is one?

No difference between these two. It’s basically a design decision to have dated variants of the models (like 0613 and 1106) plus one alias pointing to the most recent one (with some delay).


As this topic has a selected solution, closing topic.


Temporarily reopened at the request of a user.

Unfortunately, that’s not an accurate characterization, as OpenAI has treated the chat models differently over time.

Upon introduction of the chat endpoint, there was a gpt-3.5-turbo model and a gpt-3.5-turbo-0301. The latter, with the date, was to be a snapshot, while gpt-3.5-turbo indeed received constant changes, aggressively so through April and May.

In June a new model was announced, along with a new scheme where gpt-3.5-turbo would be an alias pointed at the currently recommended model. With the creation of gpt-3.5-turbo-0613, gpt-3.5-turbo was then (and still is) pointed to that model two weeks later, and the existing “real” gpt-3.5-turbo model, which had accumulated changes, was turned off and is no longer accessible.

gpt-3.5-turbo-0613, however, is NOT a snapshot. It has continued to accumulate undocumented changes to its training and performance, oftentimes breaking applications. There was no alternative for those seeking reliability along with the function-calling capability that was introduced with this model.

gpt-3.5-turbo-1106 is the “preview/beta” model introduced at DevDay, with a 16k input context length at no greater cost. It has also been trained to use parallel tool calls. However, due to issues, it seems it is no longer on the path to becoming the gpt-3.5-turbo replacement in its current form (which was slated for December 11); a new gpt-3.5-turbo version is now under development and due next year.
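For what parallel tool calling looks like in practice, here is a rough sketch assuming the Python v1 client; the `get_weather` function and its schema are hypothetical, invented for illustration:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical tool definition, for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[{"role": "user", "content": "Compare the weather in Paris and Tokyo."}],
    tools=tools,
)

# With parallel tool calling, the model may request both lookups in a single turn,
# so tool_calls can contain more than one entry.
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```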

So currently: gpt-3.5-turbo → gpt-3.5-turbo-0613 → continued changes since introduction.

GPT-4 has followed a similar path. gpt-3.5-turbo-0301 also proved not immune to OpenAI’s tampering.

The API now returns a system_fingerprint that is supposed to report these previously undocumented backend alterations that affect performance and determinism.
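As a sketch of how you might monitor that (assuming the Python v1 client; the seed value is arbitrary), you can log the fingerprint alongside a fixed seed and compare it across otherwise identical requests:

```python
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[{"role": "user", "content": "Say hello."}],
    seed=12345,  # best-effort determinism across identical requests
)

# If this value changes between runs, the backend configuration serving the model
# changed, and outputs may differ even with the same seed and parameters.
print(resp.system_fingerprint)
print(resp.choices[0].message.content)
```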


Oooh. Thanks for the update on this, @_j. Like GPT, I feel like even I have to update my knowledge base. Cheers!

@minaabdelmassih57

Can this now be closed?

@EricGT Yeah, I got my answer.