GPT-4 API behaves like it's GPT-3

Calling the chat completions API endpoint with model = gpt-4 and the prompt “who are you and which version?”, it replies with “I am an AI Assistant powered by OpenAI, and I am based on the GPT-3 model.” Yet if I display the raw reply from the API server, I get “model”:“gpt-4-0314”.
Is that logical?


GPT-4 doesn’t have knowledge that it’s GPT-4. You can try giving it a task that earlier models couldn’t do and check if it’s able to answer, instead.

It might not have that knowledge, but when I specify either gpt-4 or gpt-4-0314 as the chosen model and ask it to list the models available, it clearly states that GPT-3 is the latest. This is an issue.


You’re quite right… querying the endpoint completely slipped my mind. For anyone else concerned, run this command in your terminal or command line:

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
     "model": "gpt-4",
     "messages": [{"role": "user", "content": "Say this is a test!"}],
     "temperature": 0.7
   }'

…then the response (as of the time of this post) should reflect the model shown below:

{"id":"chatcmpl-idnumber","object":"chat.completion","created":date,"model":"gpt-4-0314","usage":{"prompt_tokens":13,"completion_tokens":5,"total_tokens":18},"choices":[{"message":{"role":"assistant","content":"This is a test!"},"finish_reason":"stop","index":0}]}
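If you’d rather verify from a script, here is a minimal Python sketch along the same lines: it just pulls the "model" field out of a response body shaped like the one above (the id and created values here are placeholders, matching the redacted example):

```python
import json

# Sample response body; "chatcmpl-idnumber" and the created value are
# placeholders, as in the redacted example above.
raw = '''{"id":"chatcmpl-idnumber","object":"chat.completion","created":0,
"model":"gpt-4-0314",
"usage":{"prompt_tokens":13,"completion_tokens":5,"total_tokens":18},
"choices":[{"message":{"role":"assistant","content":"This is a test!"},
"finish_reason":"stop","index":0}]}'''

def served_model(response_body: str) -> str:
    """Return the model name the API actually answered with."""
    return json.loads(response_body)["model"]

print(served_model(raw))  # gpt-4-0314
```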

I joined the waitlist soon after GPT-4 was announced and just got off it, so roughly two weeks is an optimistic estimate.

9 days for me. Joined on the 18th; the email came through on the 27th. I also believed paying was the way in. It’s not: that only bought me access to the API. For those initial ~8 days, my API calls were being answered by gpt-3.5-turbo.

Once you’re off the waitlist, change your model name to gpt-4, and it too should return results using the 0314 snapshot.

One thing to call out if you’re unsure whether it’s actually GPT-4: check your API usage page, which will tell you which model you’re using.

Also worth noting that the GPT-4 chat API does not “remember” your previous messages the way the browser version does. To have a conversation with it, you need to keep providing the previous conversation to it for context.

You achieve this by saving the previous response and adding it to the messages array every time you make a new API request / chat message. You can find a JavaScript example request here.
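The thread links a JavaScript example; as an illustrative equivalent, here is a minimal Python sketch of that pattern. The `send_request` parameter is an assumption standing in for whatever HTTP client you use to call the chat completions endpoint:

```python
# Minimal sketch of manual conversation memory. send_request is assumed to
# wrap the actual chat completions HTTP call and return the assistant's text.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_text, send_request):
    # 1. Append the new user turn to the running history.
    messages.append({"role": "user", "content": user_text})
    # 2. Send the ENTIRE history, not just the latest message.
    assistant_text = send_request({"model": "gpt-4", "messages": messages})
    # 3. Save the assistant's reply so the next turn has context.
    messages.append({"role": "assistant", "content": assistant_text})
    return assistant_text
```

Each call grows the `messages` array by two entries (user turn plus assistant turn), which is also why long conversations consume more prompt tokens over time.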

Smart to go to the usage page, and yes indeed I’m using model gpt-4-0314. Thanks.

Indeed GPT-4 believes it is GPT-3 … but its answers are much more complete than gpt-3.5-turbo’s, so a priori it is GPT-4 :wink:

But if you send the same question in the browser, you’ll get the right answer.
For example, I asked “are you using gpt-4?”

On the web I got:

 "Yes, I am based on the GPT-4 architecture, which is a successor to the GPT-3 model developed by OpenAI. As a language model, I am designed to understand and generate human-like text based on the input I receive. My knowledge is up-to-date as of September 2021."

while through the API or in the Playground I got this, even though the model is already set to “gpt-4”:

 "As of now, I am based on OpenAI's GPT-3 model. GPT-4 has not been released yet. If and when GPT-4 becomes available, and if I am updated to use it, my capabilities will be enhanced accordingly."

This is what I’m seeing on the web… so… I’m paying for GPT-3??

The AI doesn’t know what it is, except for a system message given to it at the start of a chat session in ChatGPT (which is only the web interface) or the system message you provide via the API.

It will even contradict that message, either when it is not reliably passed along by the conversation management or when it can find no evidence of GPT-4 existing in its training data.
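For API callers, that system message is just the first entry in the messages array. A rough sketch follows; the identity text is whatever you choose to assert, not anything the model verifies:

```python
# The model only "knows" what it is if you tell it. This identity line is
# an arbitrary assertion by the caller; the model cannot verify it.
messages = [
    {"role": "system",
     "content": "You are ChatGPT, based on the GPT-4 architecture."},
    {"role": "user", "content": "Are you using gpt-4?"},
]
payload = {"model": "gpt-4", "messages": messages}
```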

Edit and resubmit the first message of this chat share: Repeat Requested Messages.

I suspect that backend conversation summarization in ChatGPT is even messing with what the chatbot knows about itself per turn when it is not topical.

You can ask a logic question that only GPT-4 can answer, but of course every instance of this exact question, posted daily in multiple forums, is simply user error in the asking. What version of human are you?

Well, the base training data seems to be the same, and the model doesn’t know current info (dates, etc.). You might want to give the model some contextual information before your actual prompt, like:

Today's date: ....
Current time: xx:xx
User name: xxxx

...prompt here...
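A minimal Python sketch of prepending that kind of header; the field names and the `with_context` helper are illustrative choices, not anything the API requires:

```python
from datetime import datetime

def with_context(prompt: str, user_name: str) -> str:
    """Prepend current date/time and user info to a prompt, since the
    model has no knowledge of 'now' on its own. Field names are arbitrary."""
    now = datetime.now()
    header = (
        f"Today's date: {now:%Y-%m-%d}\n"
        f"Current time: {now:%H:%M}\n"
        f"User name: {user_name}\n\n"
    )
    return header + prompt

print(with_context("What day is it?", "alice"))
```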