Node.js returns 401 error (running locally) [Solved]

Hi,
Not sure why I’m getting the error
{"message":"Request failed with status code 401","name":"Error","stack":"Error: Request failed with status code 401\n...
I have .env file with the API key:
OPENAI_API_KEY="sk-fU....."
My server.js calls the following (I tried including the organization ID as well):

const configuration = new Configuration({
  organization: "org-xxxxxxxxxxxxx",
  apiKey: process.env.OPENAI_API_KEY,
});

I cleared the cache and tried three different browsers on macOS Ventura (Intel).
I have double- and triple-checked my API key but can't figure this one out.
Any pointers?

Welcome to the community!

A 401 error means the request couldn't be authenticated. You need to send the API key in the request header as a bearer token…

This page might be helpful; it has a working example for Node.js…

Hope this helps.

ETA:

  const headers = {
    'Authorization': `Bearer ${process.env.OPENAI_SECRET_KEY}`,
  };

Thank you @PaulBellow!
It works now :smiley:


You can add this in the script file from which you're calling the API:

const response = await fetch('http://localhost:5000', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${process.env.OPENAI_SECRET_KEY}`
  },
  body: JSON.stringify({
    prompt: data.get('prompt')
  })
})

Make sure to check which URL you're hitting: since you're using a local server, send the request to that local server to get the response.
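To make that point concrete, here is a sketch; resolveEndpoint is a hypothetical helper of mine, and port 5000 is simply the port from the example above. Keeping both URLs in one place makes it harder for the client to accidentally call the API directly:

```javascript
// Hypothetical helper: pick the endpoint in one place. The browser should
// talk to your own local server; only that server should call OpenAI.
function resolveEndpoint(useLocalServer) {
  return useLocalServer
    ? "http://localhost:5000"                   // your dev server (holds the key)
    : "https://api.openai.com/v1/completions";  // direct call, server-side only
}
```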

Hi @humayounshah, I'm using the Expo managed workflow and my code had been working until today. I last tested on the 2nd of January and all was well. Note that I'm not using Node.js; the API call happens on the client side.

const configuration = new Configuration({
  apiKey: "sk-_________________"
});
const openai = new OpenAIApi(configuration);

const generateText = async (prompt) => {
  try {
    const completion = await openai.createCompletion({
      model: "text-davinci-003",
      prompt,
      temperature: 1,
      max_tokens: 4048,
    });
    setIsLoading(false);
    setAIResults(completion.data.choices[0].text);
  } catch (error) {
    console.log(error);
    setIsLoading(false);
  }
};

const initializePrompt = useCallback(() => {
    const prompt = `Generate a ${type} based on ${summary}`
    setIsLoading(true)
    generateText(prompt)
})

Please help. Everything was okay until now.

I have tried replacing it the way you mentioned, but the same error is coming up.

const completion = await openai.createCompletion({
  model: "text-davinci-003",
  prompt: prompt,
  temperature: 0.7, // Higher values mean the model will take more risks.
  max_tokens: 256, // The maximum number of tokens to generate in the completion. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
  top_p: 1, // Alternative to sampling with temperature, called nucleus sampling.
  frequency_penalty: 0, // Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
  presence_penalty: 0, // Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
});

Give your values like this in openai.createCompletion.

Show me what you're implementing so that I can help you accordingly.


I'm getting this error too; can you please check how I can resolve it? I have deployed the server, and I'm getting this error both before and after deployment.

Same problem here. Can anyone help?

Facing the same problem. Any fix?

A few updates to this: if you're going to rely on this example, please note these two issues.

  1. You will need to use the older version of got (install using 'npm install got@11.8.3'), since the newer version of got is an ES Module and you cannot "require" it anymore (see Error [ERR_REQUIRE_ESM]: require() of ES Module not supported | bobbyhadz).
  2. The 'model' parameter in the request payload is required for the API to properly route the request to the correct model, so the params would change to include whichever model you plan on using:
const params = {
    "model": 'text-davinci-003',
    "prompt": prompt,
    "max_tokens": 160,
    "temperature": 0.7,
    "frequency_penalty": 0.5
  };
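One way to catch point 2 early is to check the payload locally before sending it. The validateParams function below is my own sketch, not part of the API or its client library:

```javascript
// Hypothetical sanity check: the API rejects requests without a model,
// so fail fast locally instead of waiting for a server-side 4xx error.
function validateParams(params) {
  const errors = [];
  if (!params.model) errors.push("'model' is required, e.g. 'text-davinci-003'");
  if (params.prompt === undefined) errors.push("'prompt' is missing");
  return errors;
}
```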

Hey, I am getting the same 401 error. Did you solve it?

Yes. The most common reason for this is that the API key is incorrect or is not being set appropriately. To see if this is the issue, test by assigning your API key directly to the API key variable (without using any loading package). If that works, your issue is with how you are loading the key.
Also, be sure to include the model name (as I showed in my previous response).
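A quick way to run that check without pasting the key into your source: the describeKey function is a hypothetical diagnostic of mine, and the "sk-" prefix test is just a heuristic based on the key format shown earlier in this thread:

```javascript
// Hypothetical diagnostic: report whether the key actually reached the process.
function describeKey(key) {
  if (!key) return "missing - check your .env loading and the variable name";
  if (!key.startsWith("sk-")) return "set, but does not look like an OpenAI key";
  return `loaded (starts with ${key.slice(0, 5)}...)`;
}

console.log(describeKey(process.env.OPENAI_API_KEY));
```

If this prints "missing", the problem is key loading (e.g. dotenv never ran), not the API itself.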

This commonly occurs when you are using code generated by ChatGPT. Try using code generated through the Playground, which is more current, or better yet, the docs have great examples of the most current code required.

BTW, another common issue is that the completion URL is set wrong; the correct one is https://api.openai.com/v1/completions


Hey all! You might want to check out the new error codes guide, which gives suggestions on how to mitigate pretty much every error code: OpenAI API
