I got the GPT-4 API, but the model version is still the snapshot version of GPT-4

Hi,

I got access to the GPT-4 and ChatGPT APIs a few weeks ago.

However, I still get generation results from the snapshot versions (“gpt-3.5-turbo-0301”, “gpt-4-0314”) when I call the API.

"model": "gpt-4-0314",
"usage": {
	"prompt_tokens": 389,
        "completion_tokens": 61,
	"total_tokens": 450
},

How should I call “gpt-3.5-turbo” and “gpt-4”?

Thanks

What’s your code for calling the API?

You should specify model: "gpt-4"
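
For reference, a minimal call looks something like this (a sketch using the openai Python package's ChatCompletion interface; the API key is a placeholder):

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder key for illustration

response = openai.ChatCompletion.create(
    model="gpt-4",  # the alias, not a dated snapshot like "gpt-4-0314"
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response["choices"][0]["message"]["content"])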


Yep, I am calling the model “gpt-4” correctly. I pass the model name (“gpt-4”) as model_name in the code below.

import openai

response = openai.ChatCompletion.create(
    model=model_name,  # model_name is set to "gpt-4"
    messages=messages,
    max_tokens=max_len,
    temperature=temp,
    n=n,
    stop=stop,
    presence_penalty=pres_penalty,
    frequency_penalty=freq_penalty,
    top_p=top_p,
)

Same issue. The model name used in API calls is gpt-4, but the responses come back with gpt-4-0314.

I think if you just put ‘gpt-4’ it will use the latest model, and that model will be reflected in the response. You can specifically request older models if needed (for a period of time).
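
One way to see which snapshot the alias resolves to is to check the model field in the response (a sketch using the same openai ChatCompletion interface as above):

import openai

response = openai.ChatCompletion.create(
    model="gpt-4",  # alias; the server resolves it to the current snapshot
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=1,
)

# Prints the snapshot that actually served the request, e.g. "gpt-4-0314"
print(response["model"])

# Pinning a dated snapshot explicitly works the same way:
# openai.ChatCompletion.create(model="gpt-4-0314", messages=...)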

@xinort so you mean that even if we just put “gpt-4”, we get responses from “gpt-4-0314” because the latest version is “gpt-4-0314” right now. Am I understanding your reply correctly?


Yes, I’m no expert but that is what I’ve noticed.


I have this same issue. However, OpenAI has pushed updates to gpt-4 since 0314, at least according to their own service status updates, so the 0314 model is most likely not the latest version of gpt-4.
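
For what it's worth, one way to check which gpt-4 variants an API key can actually see is to list the available models (a sketch using the same openai package):

import openai

# List every model ID visible to this API key and keep the gpt-4 variants
models = openai.Model.list()
gpt4_ids = [m["id"] for m in models["data"] if m["id"].startswith("gpt-4")]
print(gpt4_ids)  # e.g. ["gpt-4", "gpt-4-0314"], depending on your access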

Is there anyone else who has had this issue and managed to resolve it? If so, how did you do it?