Asking the GPT-4 API which model it is returns GPT-3

I am calling the GPT-4 API. However, when I ask which model the response is based on, it replies that it is GPT-3 based. Shouldn't the expected response be GPT-4? Does this mean my model is not GPT-4?

import openai  # legacy (pre-1.0) openai SDK

messages_history = [{"role": "user", "content": "Which model of openai are you? GPT4 or GPT3?"}]
response = openai.ChatCompletion.create(
    model='gpt-4',
    messages=messages_history,
)

print(response['choices'][0]['message']['content'])

‘I am an AI developed by OpenAI, based on the GPT-3 model.’


Your function call is correct, and you will be using the gpt-4 model regardless of what the model says about itself. If you are still unsure, you can go to your OpenAI account and, under Usage, see which model is being pinged along with the number of tokens spent.

This is a very common response; about half the time it will say GPT-4 and the other half GPT-3.
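Besides the Usage page, the API response itself is more trustworthy than the model's self-report: every ChatCompletion payload carries a top-level `model` field naming the model that actually served the request. A minimal sketch, using an illustrative response dict shaped like the real payload (the snapshot name `gpt-4-0613` and the `id` are example values, not from this thread):

```python
# Illustrative ChatCompletion response, shaped like the real API payload.
sample_response = {
    "id": "chatcmpl-example",   # hypothetical ID
    "model": "gpt-4-0613",      # the model that actually answered the request
    "choices": [
        {"message": {
            "role": "assistant",
            "content": "I am an AI developed by OpenAI, based on the GPT-3 model.",
        }}
    ],
}

# Trust the metadata, not the text of the reply:
print(sample_response['model'])
print(sample_response['choices'][0]['message']['content'])
```

So even when the reply text claims GPT-3, `response['model']` will show a `gpt-4` snapshot if that is what was billed.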


Ok, it's strange, but I'll take your word for it. I thought GPT-4 would give more consistent answers, though.
Thanks :slight_smile:

GPT-4 doesn't know GPT-4 exists… I had that conversation with it, lol.


Same with me :smiling_face_with_tear:

Don't worry. In an API Playground preset, I show how you can construct a system prompt so the AI can inform users correctly if they ask similar questions. https://platform.openai.com/playground/p/oq2b7S0qQIbeBaMiw3rlWFvj?model=gpt-4
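A minimal sketch of that approach: prepend a system message telling the assistant what it is, so its self-description no longer depends on outdated training data. The exact wording below is my own assumption, not the contents of the linked preset; the (commented-out) call matches the legacy SDK used earlier in the thread.

```python
# System message is an assumed wording, not the linked Playground preset.
messages_history = [
    {"role": "system",
     "content": "You are an assistant powered by OpenAI's GPT-4 model. "
                "If asked which model you are, answer that you are GPT-4."},
    {"role": "user",
     "content": "Which model of openai are you? GPT4 or GPT3?"},
]

# With the legacy (pre-1.0) SDK, you would then call:
# response = openai.ChatCompletion.create(model='gpt-4', messages=messages_history)
# print(response['choices'][0]['message']['content'])
```

With the system message in place, the reply is grounded in the instruction rather than in what the model remembers about itself.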