Getting a 404 error when making a createChatCompletion call with gpt-4

I have a Node server where I’m making API calls to ChatGPT to interact with users on my mobile app. So far I’ve just used gpt-3.5-turbo, but I wanted to move to gpt-4, so I simply replaced the model in the createChatCompletion call with gpt-4, but I get a 404 error response. I’ve tried both specific model versions and the generic gpt-4 model name, but same thing. I’m sorry if this is an obvious one, but I can’t seem to find anything that tells me what could be wrong. My organization is “personal”, and I’ve tried both my existing API key and a newly generated one.
Here is the call I make:

var completion = await openai.createChatCompletion({model: "gpt-4", messages: messages, max_tokens: 100})

Where messages is the expected array of information I want ChatGPT to process. Any ideas would be appreciated!

You probably need to update your OpenAI Node module, since it seems you are still using v3. The documentation is now all written for v4, so updating might help you wade through the examples and might also fix your error.

v3

const completion = await openai.createChatCompletion({
            messages,
            model,
            max_tokens,
            temperature,
        })

v4

const completion = await openai.chat.completions.create({
            messages,
            model,
            max_tokens,
            temperature,
        })

See Reference page
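For completeness, the v4-style call boils down to a single POST to the chat completions endpoint. Here is a minimal sketch using Node 18+’s built-in fetch, with no SDK at all; buildChatRequest and askChat are illustrative names of my own, not part of any library:

```javascript
// Minimal sketch: the chat completions endpoint via Node 18+'s built-in fetch.
// buildChatRequest and askChat are illustrative names, not part of any SDK.

function buildChatRequest(model, messages, maxTokens = 100) {
  // Same fields as the original post, with the correct `messages` key.
  return { model, messages, max_tokens: maxTokens };
}

async function askChat(model, messages) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(buildChatRequest(model, messages)),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}: ${await res.text()}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

// Usage (requires OPENAI_API_KEY to be set):
//   askChat("gpt-4", [{ role: "user", content: "Say hi" }]).then(console.log);
```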


I really appreciate you responding, and that looked hopeful, but I’m not sure that’s the answer. The documentation talks about streaming, using the call you suggested for a different use case. When I try the above call I get a null response. All the examples and tutorials I’ve looked at say that I can change gpt-3.5-turbo to gpt-4 and it should work…but alas it doesn’t for me. My account is a paid account, but I wonder if there’s a difference between a paid account that gives access to GPT-4 through the web and app, versus the API?

@PaulBellow I see your name quite a bit in this forum and hoping to get help on this. I’ve been stuck on the older model and can’t seem to move to gpt-4. The documentation says I should be able to make the same function call with the same keys but just change the model to gpt-4, but I get the 404. I have openai 3.2.1 installed on my node server. Any suggestions would be appreciated!

There’s a difference between Chat Completion and Completion (Legacy)… what older model were you using?

I was using gpt-3.5-turbo. Here is the code:

var messages = [
  { role: 'system', content: ai_context },
  { role: 'user', content: answer },
];
var completion = null;
try {
  completion = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: messages,
    max_tokens: 100,
  });
} catch (error) {
  error_flag = true;
  console.log("Failed in createChatCompletion call " + error);
}

Am I making the wrong call? Again - I appreciate you jumping in on this.

What is the exact error you’re getting? Does it work with the older model? What are you replacing gpt-3.5-turbo with?

https://platform.openai.com/docs/models/model-endpoint-compatibility

I simply changed the model to gpt-4 and get:

{"message":"Request failed with status code 404","name":"Error","stack":"Error: Request failed with status code 404\n    at createError (/var/www/server/node_modules/openai/node_modules/axios/lib/core/createError.js:16:15)\n    at settle (/var/www/server/node_modules/openai/node_modules/axios/lib/core/settle.js:17:12)\n    at IncomingMessage.handleStreamEnd (/var/www/server/node_modules/openai/node_modules/axios/lib/adapters/http.js:322:11)\n    at IncomingMessage.emit (node:events:539:35)\n    at IncomingMessage.emit (node:domain:475:12)\n    at endReadableNT (node:internal/streams/readable:1345:12)\n    at processTicksAndRejections (node:internal/process/task_queues:83:21)","config":{"transitional":{"silentJSONParsing":true,"forcedJSONParsing":true,"clarifyTimeoutError":false},"transformRequest":[null],"transformResponse":[null],"timeout":0,"xsrfCookieName":"XSRF-TOKEN","xsrfHeaderName":"X-XSRF-TOKEN","maxContentLength":-1,"maxBodyLength":-1,"headers":{"Accept":"application/json, text/plain, */*","Content-Type":"application/json","User-Agent":"OpenAI/NodeJS/3.2.1","Authorization":"Bearer ■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■hTU9shGu","Content-Length":1572},"method":"post","data":"{\"model\":\"gpt-4\",\"messages\": …
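The v3 openai package wraps axios, so a failed call throws an error whose .response property (when present) carries the HTTP status and body, as in the dump above. A minimal sketch of pulling out just the useful fields; describeApiError is an illustrative name of my own:

```javascript
// The v3 openai package wraps axios, so a failed call throws an error whose
// .response (if present) holds the HTTP status and body, as in the dump above.
// describeApiError is an illustrative helper name.
function describeApiError(error) {
  if (error.response) {
    // The server answered with a non-2xx status (e.g. a 404 for a model
    // your account cannot see).
    return { status: error.response.status, body: error.response.data };
  }
  // No response at all: a network, DNS, or timeout problem.
  return { status: null, body: error.message };
}

// In the catch block:  console.log(describeApiError(error));
```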

All your libraries updated?

I’ll wait for someone else to tap in, as I don’t have a ton of experience with functions yet.

404: model not found in your account.

If gpt-3.5-turbo works, and gpt-4 gives you a 404…

  • Are you using a free trial?
  • Have you never added a credit card to the API billing system?
  • And have you not yet made any payments to OpenAI?

The last one is a requirement to unlock access to GPT-4 models: a prior payment to OpenAI (at least $1).

You can purchase a prepay credit of $5 and then after a bit of payment processing time (and maybe a new API key generation), gpt-4 should work for you.
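One way to check whether your key actually has gpt-4 access is the list-models endpoint (GET /v1/models). A minimal sketch using Node 18+’s built-in fetch; hasModel and checkGpt4Access are illustrative names of my own:

```javascript
// Sketch: list the models your key can actually see (GET /v1/models),
// using Node 18+'s built-in fetch. hasModel/checkGpt4Access are my names.

function hasModel(modelList, id) {
  // modelList is the parsed JSON from GET /v1/models: { data: [{ id, ... }] }
  return modelList.data.some((m) => m.id === id);
}

async function checkGpt4Access() {
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
  });
  const models = await res.json();
  console.log("gpt-4 available:", hasModel(models, "gpt-4"));
}

// Usage (requires OPENAI_API_KEY):  checkGpt4Access();
```

If gpt-4 doesn’t show up in that list, the 404 is an account-access issue, not a code issue.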


Thank you for your reply!
I upgraded to a paid account months back and use the web UI and app regularly, using ChatGPT 4, and I have paid for that access regularly, so there’s a credit card on my account. Is there something separate for API access? I haven’t done anything separate to set up access for the API. I just generated the API keys from my upgraded account, so I thought that would cover it.

OK - I’m super confused…I just went to my account and there is a credit card on file and it says I’ve had 6 invoices - but all $0!!! But I can access Chat GPT 4 so shouldn’t I be paying for that? I just assumed I was getting charged the $12.99 or whatever it is…or again, is that separate for the API access? So I guess the solution is to buy credits and that will work?

Yes, ChatGPT and the API are separate.

Here’s a great quickstart guide


Yes, you pay for the tokens sent to and from the API models, with different rates for each type and class of model.

I could send you to the official pricing page, but the one I created is easier to understand. You get billed tiny amounts for each token, but I represented prices per million tokens for clarity.
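As a sketch of that arithmetic only (the rates below are placeholders, not real prices; check the official pricing page for current numbers):

```javascript
// Illustration of the arithmetic only — the rates below are placeholders,
// NOT real prices; check the official pricing page for current numbers.
function costUSD(promptTokens, completionTokens, inRatePerMTok, outRatePerMTok) {
  return (
    (promptTokens / 1e6) * inRatePerMTok +
    (completionTokens / 1e6) * outRatePerMTok
  );
}

// e.g. 500k input + 500k output tokens at hypothetical $10/$30 per million:
//   costUSD(500_000, 500_000, 10, 30)  // 20 (dollars)
```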

Well…crap - I had a feeling but I couldn’t find anything to tell me this specifically. Sorry to spin your wheels on this and appreciate the help. I’ll follow the guide and get my account updated so I can start using the updated model.
Thanks again!!