404 Not Found: approved for GPT-4 with 8K context, but can't use it

I recently got the email saying I'm invited to use the GPT-4 8K-context models via the API.

However, when I run my code with gpt-4 as the model, it throws a 404 error. Using the same code for gpt-3.5-turbo works.

Here’s a snippet of the code I’m using:

// Inside an async handler; axios is imported elsewhere and `messages` is built above this point.
const config = {
  method: "post",
  url: "https://api.openai.com/v1/chat/completions",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  data: {
    model: "gpt-4",
    messages,
    max_tokens: 250,
    n: 1,
    temperature: 0.7,
    frequency_penalty: 0,
    presence_penalty: 0,
  },
};

try {
  const response = await axios(config);
  const completion = response.data.choices[0].message.content;
  return completion;
} catch (error) {
  // The OpenAI error body (type, code, message) is available at error.response.data
  console.error("Error: ", error);
  return "Sorry, I could not generate a response.";
}

Here’s the error:

response: {
    status: 404,
    statusText: 'Not Found',
    headers: AxiosHeaders {
      date: 'Mon, 08 May 2023 01:29:05 GMT',
      'content-type': 'application/json; charset=utf-8',
      'transfer-encoding': 'chunked',
      connection: 'close',
      vary: 'Origin',
      'x-request-id': '0b6e57a26ca79e3ad5751a6fa7e2684c',
      'strict-transport-security': 'max-age=15724800; includeSubDomains',
      'cf-cache-status': 'DYNAMIC',
      server: 'cloudflare',
      'cf-ray': '7c3dee012e57334e-EWR',
      'alt-svc': 'h3=":443"; ma=86400, h3-29=":443"; ma=86400'
    },
    config: {
      transitional: [Object],
      adapter: [Array],
      transformRequest: [Array],
      transformResponse: [Array],
      timeout: 0,
      xsrfCookieName: 'XSRF-TOKEN',
      xsrfHeaderName: 'X-XSRF-TOKEN',
      maxContentLength: -1,
      maxBodyLength: -1,
      env: [Object],
      validateStatus: [Function: validateStatus],
      headers: [AxiosHeaders],
      method: 'post',
      url: 'https://api.openai.com/v1/chat/completions',
      data: `{"model":"gpt-4-0314","messages":[{"role":"system","content"...

Do you see the GPT-4 model in the playground?
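
Besides the Playground, one quick check is to list the models available to your key; if gpt-4 is not in the list, the key has not been granted access yet. A minimal sketch using the requests library (the environment variable name is just an example):

import os
import requests

# Assumes the key is exported as OPENAI_API_KEY
api_key = os.getenv("OPENAI_API_KEY")
response = requests.get(
    "https://api.openai.com/v1/models",
    headers={"Authorization": f"Bearer {api_key}"},
)
response.raise_for_status()

model_ids = [m["id"] for m in response.json()["data"]]
print("gpt-4 available:", "gpt-4" in model_ids)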

I have the same problem; I got the email confirmation for access on May 3rd.

@fahmid I also encountered the same problem, and the error message is the same. Has your problem been solved?

Checking whether it works using, for example, Python and sending something minimal like

completion = openai.ChatCompletion.create(
    model=model_chat,
    messages=[your_test_message],
    temperature=0,
)

may at least show whether the problem is in the initial request setup.
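
For reference, a self-contained version of that quick test, assuming the pre-1.0 openai Python package and a key exported as OPENAI_API_KEY (the test message is just a placeholder):

import os
import openai

# Assumes the pre-1.0 openai package (pip install "openai<1.0")
openai.api_key = os.getenv("OPENAI_API_KEY")

completion = openai.ChatCompletion.create(
    model="gpt-4",  # swap for "gpt-3.5-turbo" to compare behaviour
    messages=[{"role": "user", "content": "Reply with the single word: pong"}],
    temperature=0,
)
print(completion.choices[0].message.content)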

I have also faced the same problem. I used this function:

import json
import requests

API_ENDPOINT = "https://api.openai.com/v1/chat/completions"
# API_KEY is loaded elsewhere in the original code

def generate_chat_completion(messages, model="gpt-4", temperature=1, max_tokens=None):
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    }

    data = {
        "model": model,
        "messages": messages,
        "temperature": temperature,
    }

    if max_tokens is not None:
        data["max_tokens"] = max_tokens

    response = requests.post(API_ENDPOINT, headers=headers, data=json.dumps(data))

    if response.status_code == 200:
        return response.json()["choices"][0]["message"]["content"]
    else:
        raise Exception(f"Error {response.status_code}: {response.text}")

and got this response for gpt-4:

Exception: Error 401: {
    "error": {
        "message": "Incorrect API key provided: sk-eyS37***************************************zUsf. You can find your API key at https://platform.openai.com/account/api-keys.",
        "type": "invalid_request_error",
        "param": null,
        "code": "invalid_api_key"
    }
}

When I change the model to ChatGPT, I get this:

Exception: Error 401: {
    "error": {
        "message": "Incorrect API key provided: sk-eyS37***************************************zUsf. You can find your API key at https://platform.openai.com/account/api-keys.",
        "type": "invalid_request_error",
        "param": null,
        "code": "invalid_api_key"
    }
}

When I use an OpenAI key that has not been granted GPT-4 access, I get this for the gpt-4 model:

Exception: Error 404: {
    "error": {
        "message": "The model: `gpt-4` does not exist",
        "type": "invalid_request_error",
        "param": null,
        "code": "model_not_found"
    }
}

But with the first key, which has been granted access to gpt-4, I got the error shown above; you can check the error.
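
For what it's worth, the two responses point at different problems: a 401 with code invalid_api_key means the key itself is rejected before the model is ever checked, while a 404 with code model_not_found means the key is accepted but has no access to the requested model. A rough sketch of telling them apart with the same requests-based setup (the helper name is just for illustration):

import json
import requests

API_ENDPOINT = "https://api.openai.com/v1/chat/completions"

def diagnose_key(api_key, model="gpt-4"):
    """Hypothetical helper: reports whether a key is rejected outright or merely lacks the model."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = {"model": model, "messages": [{"role": "user", "content": "ping"}]}
    response = requests.post(API_ENDPOINT, headers=headers, data=json.dumps(body))

    if response.status_code == 200:
        return f"key works and has access to {model}"
    error = response.json().get("error", {})
    if error.get("code") == "invalid_api_key":
        return "key is invalid or revoked (401); the model was never checked"
    if error.get("code") == "model_not_found":
        return f"key is valid but has no access to {model} (404)"
    return f"unexpected error {response.status_code}: {response.text}"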

Hi,

Have you tried generating a new key on the account that has been granted access and using that?

I got this key from a client to use in an application, but I think the GPT-4 API is accessible to all clients who have attached a payment method,

because the waitlist form is no longer available.

I read this about GPT-4 API access:

https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4

Ok, so a few things.

The GPT-4 API will be granted to all developers by the end of the month.

You say you have been given an API key by a client to include in their application. Just to check: you are not actually embedding the key in the application itself, correct? You are storing it with a key management service or in local environment variables on one of their server machines, yes?
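
For instance, a minimal sketch of keeping the key out of the source and loading it from the environment at runtime (the variable name is just an example):

import os

# The key never appears in the source; it is set on the server, e.g.
#   export OPENAI_API_KEY="sk-..."
API_KEY = os.getenv("OPENAI_API_KEY")
if API_KEY is None:
    raise RuntimeError("OPENAI_API_KEY is not set in the environment")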

I used to read it from a variable, but now I pass it as an argument (key) to the function.

I also have a free-trial API key that works fine with gpt-3.5-turbo but not with gpt-4; if I use it with the gpt-4 model, it says "gpt-4 does not exist".

import json
import requests

def generate_chat_completion(messages, model="gpt-4", temperature=1, max_tokens=None, key=""):
    API_KEY = key
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    }

    data = {
        "model": model,
        "messages": messages,
        "temperature": temperature,
    }

    if max_tokens is not None:
        data["max_tokens"] = max_tokens

    response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, data=json.dumps(data))

    if response.status_code == 200:
        return response.json()["choices"][0]["message"]["content"]
    else:
        raise Exception(f"Error {response.status_code}: {response.text}")

Calling the API:

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "First tell me you are gpt4 ? then Translate the following English text to French: 'Hello, how are you?'"},
]

response_text = generate_chat_completion(messages, model="gpt-4", key=your_api_key)
print(response_text)

But one thing I need to know is why the client's API key does not work even with the ChatGPT model and gives the same invalid-key error.

Ok,

Firstly, where is the key in API_KEY = key being loaded from? Is it in the source code as text, or in an environment variable loaded from the local filesystem?

Secondly, how do you know the client's key has been granted GPT-4 API access?

Pretty obvious.
It is incorrect or has been rescinded by the owner - maybe because an employee was sharing it with everybody.

@Foxabilo

My_openai_key = os.getenv("MY_OPENAI_KEY")
your_api_key = os.getenv("ClIENT_KEY")

I don't know for now, because the waitlist form is no longer available. But if he applied for the waitlist, then he must have gotten an email saying he has been granted gpt-4 access.

I think you are right. I just told my client that their API key does not even work with gpt-3.5-turbo.
Let's see what he says.

Ok, good, and yes, check with the client about the status of their key. It would probably be prudent to ensure that the key is accessible only to you and as few people as possible, and that everyone involved respects the privacy of the keys.

If a key gets published to GitHub (for example), it will automatically become invalid. If OpenAI discovers API keys in source code, they will be disabled.

Yes, you are right, but it did not work from the start. He also got it from his partner.
We got two keys and neither of them works.
Ok, goodbye.
Thanks for helping.