Chat Completions API doesn't report its model correctly

It doesn't matter which model I use; it always answers gpt-3.5. However, in ChatGPT Plus I do get the correct answer.
Is there a setting or parameter I missed?

```
{
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown."},
        {"role": "user", "content": "what is the version of model you running?"}
    ]
}
```

```
{
    "id": "chatcmpl-7HtDo4GzaQQPnZYubYYP4jtF0v00h",
    "object": "chat.completion",
    "created": 1684498204,
    "model": "gpt-4-0314",
    "usage": {
        "prompt_tokens": 46,
        "completion_tokens": 33,
        "total_tokens": 79
    },
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "I am running on **OpenAI's ChatGPT**, which is based on the gpt-3.5-turbo version of the model."
            },
            "finish_reason": "stop",
            "index": 0
        }
    ]
}
```
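
For reference, here's roughly how I'm reproducing it from Python (a minimal sketch using the `requests` library; it assumes the API key is in the `OPENAI_API_KEY` environment variable). The response's `model` field says gpt-4-0314, but the message content claims gpt-3.5:

```python
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # assumes your key is in this env var

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4",
        "messages": [
            {"role": "user", "content": "what is the version of model you running?"}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
data = resp.json()

# The "model" field reports which model actually served the request;
# the assistant's text is just generated prose and may disagree.
print("Served by:", data["model"])                             # e.g. gpt-4-0314
print("Claims:   ", data["choices"][0]["message"]["content"])  # may say gpt-3.5
```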

If you’re wondering whether OpenAI models have knowledge of current events, the answer is that it depends on the specific model. The table below breaks down the different models and their respective training data ranges.

| Model name | Training data |
| --- | --- |
| text-davinci-003 | Up to Jun 2021 |
| text-davinci-002 | Up to Jun 2021 |
| text-curie-001 | Up to Oct 2019 |
| text-babbage-001 | Up to Oct 2019 |
| text-ada-001 | Up to Oct 2019 |
| code-davinci-002 | Up to Jun 2021 |
| Embeddings models (e.g. text-similarity-ada-001) | Up to Aug 2020 |

Thanks for the info.
But I'm still curious why ChatGPT Plus can answer this correctly. Do they build that knowledge into the system prompt?

Very likely they include some data in the system prompt (such as the current date, possibly the model name, and other things). There are several other threads discussing how GPT-4 doesn't consistently know it is GPT-4, even though the API response indicates it is. If that's an issue for you (not sure why it would be), put it in the system prompt yourself.
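
For example, something like this rough sketch (the exact wording and date format ChatGPT uses is not public, so treat the system message below as purely illustrative):

```python
# Pin the identity yourself in the system prompt. The wording and the date
# here are illustrative assumptions, not ChatGPT's actual system prompt.
messages = [
    {
        "role": "system",
        "content": "You are GPT-4, a large language model trained by OpenAI. "
                   "Current date: 2023-05-19.",
    },
    {"role": "user", "content": "what is the version of model you running?"},
]
# Send `messages` in the same request body as before. The model will now
# answer "GPT-4" because the identity is stated in its context, not because
# it "knows" which weights it is running on.
```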