How do I check whether I am using GPT-4?

Hey guys,

I'm currently trying to use GPT-4 through LangChain. I define the model in the ChatOpenAI constructor like this:
llm = ChatOpenAI(temperature=0.5, model_name="gpt-4-0613", openai_api_key=OPENAI_API_KEY)

When I ask this llm which chat model / AI model it is running on, its answer is:
AIMessage(content='I am based on the GPT-3 language model developed by OpenAI.')

For context, I have added roughly $50 in credit to date, so I should technically have access to GPT-4.

I've also used the OpenAI library directly, and the API response correctly reports the requested model as GPT-4; however, the assistant's message still describes itself as GPT-3. Is this normal?


CODE:
import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")

completion = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant"},
        {"role": "user", "content": "What chat model are you running on?"}
    ]
)
print(completion)


AI RESPONSE:
{
  "role": "assistant",
  "content": "As an AI developed by OpenAI, I'm running on the GPT-3 model."
}
{
  "id": "chatcmpl-8EFc4LD0H6hHq7IMs2R7tCBy5xBHi",
  "object": "chat.completion",
  "created": 1698407300,
  "model": "gpt-4-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "As an AI developed by OpenAI, I'm running on the GPT-3 model."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 19,
    "total_tokens": 44
  }
}

You are asking a model with a knowledge cut-off date of around January 2022 about something that had not been created yet. There is no way for it to know that it is GPT-4 when its training data only contains information about GPT-3.

If you have specified GPT-4 as the model name and you get no errors, then you are using GPT-4.
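
If you want to confirm it programmatically rather than by asking the model, check the metadata that comes back with each request instead of the generated text. Here is a minimal sketch, assuming the pre-1.0 openai Python library (ChatCompletion.create, as in your snippet) and a 2023-era LangChain install; field names may differ in newer versions:

import os
import openai
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

openai.api_key = os.getenv("OPENAI_API_KEY")

# 1) Raw OpenAI client: the "model" field of the response names the model
#    that actually served the request, regardless of what the text claims.
completion = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[{"role": "user", "content": "Hello"}],
)
print(completion["model"])  # expected: gpt-4-0613

# 2) LangChain: generate() returns an LLMResult; in this era its llm_output
#    carries the configured model name and the token usage from the API.
llm = ChatOpenAI(temperature=0.5, model_name="gpt-4-0613",
                 openai_api_key=os.getenv("OPENAI_API_KEY"))
result = llm.generate([[HumanMessage(content="Hello")]])
print(result.llm_output.get("model_name"))  # expected: gpt-4-0613

Your own output above already shows this: the completion object reports "model": "gpt-4-0613" even though the message text talks about GPT-3.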

This makes so much sense, thank you Foxabilo!
