I am trying to use the "v1/completions" endpoint with the "gpt-3.5-turbo-instruct" model and I keep getting a 404 error. Here's my code:
It looks like you are calling the OpenAI API from your frontend, which is not recommended.
Remember that your API key is a secret! Do not share it with others or expose it in any client-side code (browsers, apps). Production requests must be routed through your own backend server where your API key can be securely loaded from an environment variable or key management service.
Please use the official Node.js module for OpenAI, like the following:
import OpenAI from "openai";

// The client reads OPENAI_API_KEY from the environment, so the key stays on the server.
const openai = new OpenAI();

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Who won the world series in 2020?" },
      { role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020." },
      { role: "user", content: "Where was it played?" },
    ],
    model: "gpt-3.5-turbo",
  });

  console.log(completion.choices[0]);
}

main();
It is simpler and clearer.
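If the request does need to be triggered from the browser, one common pattern is to expose a small route on your own backend that holds the key and forwards the question. Here is a minimal sketch assuming Express; the route name, port, and request body shape are just illustrative:

import express from "express";
import OpenAI from "openai";

const app = express();
const openai = new OpenAI(); // key comes from OPENAI_API_KEY on the server, never the browser

app.use(express.json());

// Hypothetical route: the frontend posts { question: "..." } here instead of calling api.openai.com directly.
app.post("/api/ask", async (req, res) => {
  try {
    const completion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: req.body.question },
      ],
    });
    res.json({ answer: completion.choices[0].message.content });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000);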
Not for the model that was specified, though: gpt-3.5-turbo-instruct operates on the completions endpoint, not chat completions.
gpt-3.5-turbo-instruct also is not going to stop writing with a good many prompts such as "testing". You are going to get the words that would follow that, such as "is a good way to ensure quality…", up to max tokens. If the goal is to get an answer rather than a continuation, you should add a separator showing the AI where its own answer should begin.
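For comparison, a hedged sketch of calling the completions endpoint with that model through the same Node SDK; the prompt wording, separator, and parameter values here are only illustrative:

import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  // gpt-3.5-turbo-instruct goes through completions, not chat.completions.
  const completion = await openai.completions.create({
    model: "gpt-3.5-turbo-instruct",
    // Ending the prompt with a clear separator tells the model to answer
    // instead of continuing your text.
    prompt: "Q: Who won the World Series in 2020?\nA:",
    max_tokens: 100,
    // Stop before the model starts inventing the next question.
    stop: ["\nQ:"],
  });
  console.log(completion.choices[0].text);
}

main();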
I don’t see the 404, but also don’t have the patience for zooming around in screenshots.