Regarding an issue with the gpt-3.5-turbo API

The questions I want to ask are:

  1. Why is it that when I submit the following parameters to the GPT-3.5 API: {"model": "gpt 3.5 turbo", "messages": [{"role": "user", "content": "hi"}]}
    the model in the returned data is gpt-3.5-turbo-0301?
    So which model am I actually using at this point, gpt-3.5-turbo or gpt-3.5-turbo-0301?
  2. Also, I would like to ask whether the model I am currently using will no longer be available after June 1st of this year. If it stops working, can I only switch back to the GPT-3 version?
    Does anyone know the answers to the above questions? Could you provide a detailed explanation? Thank you.
  1. gpt-3.5-turbo-0301 is like a snapshot of the model at a specific point in time. In essence, it will not be any different from the gpt-3.5-turbo model.

  2. You can switch anytime you want

Hello, can you explain in more detail? For example, if there is no difference between gpt-3.5-turbo and gpt-3.5-turbo-0301, why are they listed as two separate models? Also, why did I submit gpt-3.5-turbo as the model parameter and get gpt-3.5-turbo-0301 back?
Also, can the GPT-3.5 API still be used after June 1st?

I'm not too sure about the GPT-3.5 API after June 1st; you'll have to check the OpenAI press releases for that.

gpt-3.5-turbo is a model that OpenAI can make changes to from time to time, updating the parameters and so on. gpt-3.5-turbo-0301 is a fixed snapshot which was created at that point in time and has not been updated since.

If {"model": "gpt 3.5 turbo"} is your actual API call, then you have not specified the model correctly, which might be why the answer was returned from -0301. Try the same call but use model = "gpt-3.5-turbo" and see what model the response reports. Again, this is conjecture, but the performance of the two should not be very different, if different at all.
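A minimal sketch of that check. No network call is made here; the sample response body is made up for illustration and just mimics the shape of a chat completions reply, where the `model` field names the snapshot that actually served the request:

```python
import json

# Correctly specified model name: hyphens, no spaces.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "hi"}],
}

# Made-up example of the shape of the API's JSON reply; a real call
# would POST `payload` to the chat completions endpoint instead.
sample_response = json.loads(
    '{"model": "gpt-3.5-turbo-0301",'
    ' "choices": [{"message": {"role": "assistant", "content": "Hello!"}}]}'
)

# The "model" field in the response reports the snapshot that served the request.
print(sample_response["model"])  # gpt-3.5-turbo-0301
```

Comparing `payload["model"]` with the response's `model` field is the quickest way to see which snapshot your alias resolved to.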


The parameters I submitted are:
{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "hi"}]}

Have you noticed a difference in performance between gpt-3.5-turbo and gpt-3.5-turbo-0301?

I don't know whether I am currently using the gpt-3.5-turbo model or the gpt-3.5-turbo-0301 model, because the parameters I submitted specify gpt-3.5-turbo, but the model in the returned data is gpt-3.5-turbo-0301.

I was wondering the same thing as you. My guess would be that it's returning 0301 because that's the latest turbo snapshot right now, and that the same call would return gpt-3.5-turbo-0601 or something like that in the future, as long as you keep using gpt-3.5-turbo in the call. So it might just be information about which model is used "at this point in time" when calling gpt-3.5-turbo.

This is just me speculating though. And I agree that it should be documented more clearly somewhere (and maybe it is, but we just haven’t found it!)


I am actually getting different results from gpt-3.5-turbo and gpt-3.5-turbo-0301. Is there a way to enforce gpt-3.5-turbo (I get better results from that one)?

When you select gpt-3.5-turbo, that means you want the latest 3.5 model available, which is gpt-3.5-turbo-0301.

Otherwise you have to specify the exact snapshot you want, such as gpt-3.5-turbo-0301.

gpt-3.5-turbo = latest version of the model, so you get gpt-3.5-turbo-0301 in the response.

I don't think there is any 3.5 model other than gpt-3.5-turbo-0301 currently.
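To make the alias-versus-snapshot distinction concrete, here is a small sketch. The snapshot name comes from this thread; `build_payload` is just a hypothetical helper, not part of any OpenAI library:

```python
# Using the alias lets OpenAI move you to newer snapshots automatically;
# pinning the dated name keeps you on one fixed snapshot until it is deprecated.
ALIAS = "gpt-3.5-turbo"         # floating alias, resolves to the latest snapshot
PINNED = "gpt-3.5-turbo-0301"   # dated snapshot, fixed at its creation time

def build_payload(model: str, prompt: str) -> dict:
    """Build a chat completions request body for the given model name."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

# At the time of this thread the alias resolved to the 0301 snapshot,
# so both payloads would be served by the same underlying model.
print(build_payload(ALIAS, "hi")["model"])   # gpt-3.5-turbo
print(build_payload(PINNED, "hi")["model"])  # gpt-3.5-turbo-0301
```

Pinning the dated name is the way to "enforce" a particular behavior, with the trade-off that dated snapshots eventually get deprecated while the alias keeps working.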

Continuous model upgrades

With the release of gpt-3.5-turbo, some of our models are now being continually updated. We also offer static model versions that developers can continue using for at least three months after an updated model has been introduced. With the new cadence of model updates, we are also giving people the ability to contribute evals to help us improve the model for different use cases. If you are interested, check out the OpenAI Evals repository.

The following models are the temporary snapshots, we will announce their deprecation dates once updated versions are available. If you want to use the latest model version, use the standard model names like gpt-4 or gpt-3.5-turbo.

|Model name|Deprecation date|

from: OpenAI API