Received GPT-4 API access but model and max token limit still indicate it's GPT-3

Hello all,

I recently received an invite to use the GPT-4 model with 8K context. I can access the gpt-4 model in the Playground. However, when I ask it what model it is, it says it's GPT-3, and when I ask what its max token limit is, it says 4096. Please see the attached Playground screenshot.

I also tried creating an API key and tested it with the chat completions API, with the same result:

import openai

completion = openai.ChatCompletion.create(
  model="gpt-4",
  messages=[
    {"role": "user", "content": "Hello, what is your max token limit?"}
  ]
)
print(completion.choices[0].message)
# {
#   "content": "My maximum token limit is 4096 tokens.",
#   "role": "assistant"
# }

I only have one organization under my account and it matches the one in the invite email.

I also made a curl request to retrieve the gpt-4 model with my org ID (the same one as in the invite email) and API key, and judging by the permission flags in the response below it looks like I still don't have access to it. But it seems like I am billed at GPT-4 rates.

      "id": "gpt-4",
      "object": "model",
      "created": 1678604602,
      "owned_by": "openai",
      "permission": [
        {
          "id": "modelperm-xxxxxxxxxxxxxxx",
          "object": "model_permission",
          "created": 1683299842,
          "allow_create_engine": false,
          "allow_sampling": false,
          "allow_logprobs": false,
          "allow_search_indices": false,
          "allow_view": false,
          "allow_fine_tuning": false,
          "organization": "*",
          "group": null,
          "is_blocking": false
        }
      ],
      "root": "gpt-4",
      "parent": null

Please also note that I have a ChatGPT Plus subscription, where the model correctly states that it is based on the GPT-4 architecture.

Does anyone know why this could be happening? Any help is greatly appreciated.

All these generative models have a tendency to confabulate. It's not a good idea to ask the model its own context length.

In the Playground, the UI caps the maximum length at 2048; however, gpt-4 has a context length of 8192 tokens, which you can verify via the API (see the sketch below).
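
As a quick check (a sketch, not an official method), you can request more completion tokens than a 4096-token window could hold; if gpt-4 really had a 4096-token context, the API would reject the request with a context-length error, whereas with the 8K window it goes through.

import openai

# Ask for more output tokens than a 4096-token context could accommodate.
# The request only succeeds if the model's window is larger (e.g. 8192).
resp = openai.ChatCompletion.create(
  model="gpt-4",
  max_tokens=6000,
  messages=[{"role": "user", "content": "Reply with OK."}]
)
print(resp["usage"])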

You definitely have access to the gpt-4 model, since your API call was successful and returned a response.

And the retrieve-model call you shared shows that gpt-4 is available to you.
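
If you want to double-check programmatically, you can list the models visible to your key and organization and confirm that gpt-4 appears (a minimal sketch):

import openai

# List the models visible to this API key / org and look for gpt-4
available = {m.id for m in openai.Model.list().data}
print("gpt-4" in available)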
