Inaccurate response from GPT-4 when asked which model is currently being used

Hi, when I send the prompt “What model are you on?”, I get the following response from the chat completions API: “I am an AI language model, specifically OpenAI’s GPT-3.”

model: ‘gpt-4’
temperature: 0.4
max_tokens: 2000

Below is the conversation:
[{"role":"user","content":"What model are you on?"}, {"role":"assistant","content":"I am an AI language model, specifically OpenAI's GPT-3."}]

Ideally it shouldn’t be necessary to state in the system message that the model currently being used is GPT-4, right?
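For anyone who wants to work around it in the meantime, this is a sketch of what adding that identity to the system message would look like (same SDK assumption as above; the exact system-message wording is just an example):

```python
# Workaround sketch: tell the model its own identity via the system message.
# (Assumes the openai Python SDK, v1.x style; the system prompt text is illustrative.)
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0.4,
    max_tokens=2000,
    messages=[
        {"role": "system", "content": "You are an assistant based on OpenAI's GPT-4 model."},
        {"role": "user", "content": "What model are you on?"},
    ],
)

print(response.choices[0].message.content)
```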


Same here. I use gpt-4 via the API, and users report responses claiming it is the GPT-3 model. Any solution?