Confusing response regarding which model is being used

Hi,

Recently I have been trying to build an app using the GPT-4 model. To make sure it was set up correctly, I asked the model which model it was using, and it responded with GPT-3. When I asked the same question in the OpenAI ChatGPT portal, it knew it was using the GPT-4 model and responded as such.

{
  "id": "#####",
  "object": "chat.completion",
  "created": 1696970778,
  "model": "gpt-4-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "This is using OpenAI's GPT-3 model."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 23,
    "completion_tokens": 12,
    "total_tokens": 35
  }
}

Using the OpenAI web portal to interact with GPT-4:

Prompt: what model are you using?
Response: I am based on the GPT-4 architecture developed by OpenAI. How can I assist you further?

Does anyone know why this is? Is it definitely using GPT-4?

The GPT models are trained on data that ends before the model's creation, so the only way a model "knows" what version it is is through a system prompt that tells it. GPT models are not self-learning or auto-updating; they work from a fixed dataset that only occasionally gets updated, so asking a model what version it is will typically give an unreliable answer.

Welcome to the dev community forum!

If you search the forums, you’ll find a wealth of information regarding this…

Here’s one…


Your answer is above.

When using the API, you use the system message to tell the AI what it is and how you want it to respond, including the name of the model it reports.
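
For example, here is a minimal sketch using the v1.x openai Python SDK; the system message wording is just an illustration, adjust it to whatever identity you want the assistant to report:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-0613",
    messages=[
        # The system message is where you tell the model what it is.
        {"role": "system", "content": "You are a helpful assistant built on GPT-4."},
        {"role": "user", "content": "What model are you using?"},
    ],
)

print(response.choices[0].message.content)

Without a system message like this, the model falls back on whatever its training data suggests, which is why it may claim to be GPT-3.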


The data returned from the API call includes a field listing the model.

"model": "gpt-4-0613",

So yes, that is GPT-4.
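
If you want to confirm this programmatically, you can read it straight off the response object. A quick sketch, assuming the same v1.x openai Python SDK as above:

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)

# The "model" field reports which model actually served the request.
print(response.model)  # e.g. "gpt-4-0613"

That field, not the model's own answer, is the authoritative record of what served your request.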