Hi,
Not sure why I’m getting the error {"message":"Request failed with status code 401","name":"Error","stack":"Error: Request failed with status code 401\n...
I have a .env file with the API key: OPENAI_API_KEY="sk-fU....."
My server.js calls this (I tried including the organization ID as well):
const configuration = new Configuration({
  organization: "org-xxxxxxxxxxxxx",
  apiKey: process.env.OPENAI_API_KEY,
});
I cleared the cache and tried three different browsers on macOS Ventura (Intel).
I have double and triple checked my API key but can’t figure this one out.
Any pointers?
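One thing worth checking first is whether the value in .env is actually reaching process.env. Here is a minimal sketch, assuming the dotenv package is in use (the stripQuotes helper below is purely illustrative, not part of any library):

```javascript
// Sanity check: does the .env value actually reach process.env?
let dotenvLoaded = false;
try {
  // Must run before process.env.OPENAI_API_KEY is read anywhere.
  require('dotenv').config();
  dotenvLoaded = true;
} catch (_) {
  // dotenv not installed; process.env only sees shell-exported variables.
}

// dotenv strips surrounding quotes from values like OPENAI_API_KEY="sk-...",
// but a hand-rolled loader may not; a literally quoted key causes a 401.
function stripQuotes(value) {
  return (value || '').replace(/^"(.*)"$/, '$1');
}

console.log('dotenv loaded:', dotenvLoaded);
console.log('key present:', stripQuotes(process.env.OPENAI_API_KEY).length > 0);
```

If "key present" logs false here, the 401 is likely a loading problem rather than a bad key.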
You can add this in the script file from which you're calling the API:

const response = await fetch('http://localhost:5000', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${process.env.OPENAI_SECRET_KEY}`
  },
  body: JSON.stringify({
    prompt: data.get('prompt')
  })
})

Make sure to check which URL you're hitting; since you're using a local server, send the request to that local server to get the response.
Hi @humayounshah, I'm using the Expo managed workflow and my code had been working until today. I last tested it on the 2nd of January and all was well. Note I'm not using Node.js; the API call is happening on the client side.
const configuration = new Configuration({
  apiKey: "sk-_________________"
});
const openai = new OpenAIApi(configuration);
const completion = await openai.createCompletion({
  model: "text-davinci-003",
  prompt: prompt,
  temperature: 0.7, // Higher values mean the model will take more risks.
  max_tokens: 256, // The maximum number of tokens to generate in the completion. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
  top_p: 1, // An alternative to sampling with temperature, called nucleus sampling.
  frequency_penalty: 0, // Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
  presence_penalty: 0, // Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
});
Give your values like this in openai.createCompletion.
The 'model' parameter in the request payload is required for the API to route the request to the correct model, so the parameter would change to name whichever model you plan on using.
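For example, the smallest payload that will route correctly looks something like this (the model name here is only an example):

```javascript
// Minimal completion payload: 'model' tells the API which model should
// handle the request; 'prompt' is the text to complete.
const payload = {
  model: 'text-davinci-003',
  prompt: 'Say hello',
  max_tokens: 16,
};

// Omitting 'model' is what triggers the routing error.
console.log('has model:', 'model' in payload);
```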
Yes - the most common reason for this is that the API key is not correct or is not being set appropriately. To see if this is the issue, test it by assigning your API key directly (without using any loading package) to the API key variable. If that works, your issue is with how you are loading the key.
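A sketch of that direct-assignment test (looksLikeOpenAIKey is just an illustrative format check I'm assuming here, not an official rule):

```javascript
// Hardcode the key temporarily to rule out .env loading problems.
// "sk-..." is a placeholder; paste your real key only while testing,
// and never commit it to version control.
const hardcodedKey = "sk-..."; // placeholder

// Illustrative format check (assumption: secret keys start with "sk-").
function looksLikeOpenAIKey(key) {
  return typeof key === 'string' && key.startsWith('sk-') && key.length > 20;
}

// If the hardcoded key works in the Configuration but the env-loaded one
// does not, the problem is the loader (dotenv not called, wrong path, etc.).
console.log('env key looks valid:', looksLikeOpenAIKey(process.env.OPENAI_API_KEY));
```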
Also, be sure to include the model name (like I show in my previous response).
This commonly occurs when you are using code generated by ChatGPT. Try using code generated through the Playground, which is more current, or, better yet, use the docs, which have great examples of the most current code required.