v3 API not working (404 error), v4 API not working either

Hello,
I have run into a little problem. The API stopped working.
I had v3 and it worked just fine until today:

import { Configuration, OpenAIApi } from 'openai';
const configuration = new Configuration({
apiKey: process.env.OPENAI_API_KEY
});
const openai = new OpenAIApi(configuration);

async getAiText(promptStr) {

    const completion = await openai.completions.create({
      model: "text-davinci-003",
      prompt: promptStr,
      max_tokens: 855,
      frequency_penalty: 0,
      presence_penalty: 0,
      temperature: 0.7,
      top_p: 1
    });
    return completion.data.choices[0].text; // this returns 404 error

},

so I installed v4 as the docs said, but it does not work either:

import OpenAI from "openai";
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});

async getAiText(promptStr) {

      const response = await openai.completions.create({
        model: "davinci-002",
        prompt: promptStr,
        temperature: 1,
        max_tokens: 256,
        top_p: 1,
        frequency_penalty: 0,
        presence_penalty: 0,
      });
    return response.choices[0].text;

},

text-davinci-003 does not exist anymore.

davinci-002 should work (never tried it) but really isn't recommended.

Try going to the Playground to get the hang of things and the models,

then go to the API reference and start with their example verbatim:

import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [{ role: "system", content: "You are a helpful assistant." }],
    model: "gpt-3.5-turbo",
  });

  console.log(completion.choices[0]);
}

main();

Completion models will probably cease to exist soon as well (they are much easier to coax into outputting their training data / copyrighted material).


Thank you RonaldGRuckus
so I cannot use v3 anymore either?

If you're referring to the NodeJS version, it's highly recommended to update to the latest version (v4.26.1).

The API and its structures are changing constantly. Using an older version will, I'm pretty sure, lead to all sorts of weird errors and bugs.

Appreciate your help.
Hope I will manage it.

Still not working:

import dotenv from 'dotenv';
//const cotConfiq = dotenv.config();
import _text from './text.js';

import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

const gpt = {

  data:{},
  methods:{

    async getAiText(promptStr) {

        const response = await openai.chat.completions.create({
          model: "gpt-4",
          messages: [
            {
              "role": "user",
              "content": promptStr
            }
          ],
          temperature: 1,
          max_tokens: 256,
          top_p: 1,
          frequency_penalty: 0,
          presence_penalty: 0,
        });
        return response.choices[0];
    }, ....

Where did I go wrong? Can you help?

There are two types of completion now.

Completion (Legacy / Instruct)

and

Chat Completion.

You can find out which models support which endpoints here.

If you were using text-davinci-003 and just want to drop in a new model without changing your code (hopefully), try gpt-3.5-turbo-instruct model.
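If the drop-in route appeals, a minimal sketch of what that swap might look like with the v4 SDK (assumptions: the v4 `openai` npm package; `buildInstructRequest` and `getAiText` are my own names, not from the SDK):

```javascript
// gpt-3.5-turbo-instruct uses the legacy /v1/completions endpoint,
// so `prompt` (not `messages`) is the correct request field.
// Building the params in a helper makes the shape easy to inspect.
function buildInstructRequest(promptStr) {
  return {
    model: "gpt-3.5-turbo-instruct",
    prompt: promptStr,
    max_tokens: 855,
    temperature: 0.7,
  };
}

async function getAiText(promptStr) {
  // Dynamic import so this file parses even without the package installed.
  const { default: OpenAI } = await import("openai");
  const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
  // v4 SDK: legacy completions live at openai.completions.create
  const completion = await openai.completions.create(buildInstructRequest(promptStr));
  // v4 returns the response body directly -- no `.data` wrapper as in v3
  return completion.choices[0].text;
}
```

Note the two v4 gotchas in the comments: the method path changed, and the `.data` wrapper from v3 is gone.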

They announced this in July 2023…

Deprecation of older models in the Completions API

As part of our increased investment in the Chat Completions API and our efforts to optimize our compute capacity, in 6 months we will be retiring some of our older models using the Completions API. While this API will remain accessible, we will label it as “legacy” in our developer documentation starting today. We plan for future model and product improvements to focus on the Chat Completions API, and do not have plans to publicly release new models using the Completions API.

Starting January 4, 2024, older completion models will no longer be available, and will be replaced with the following models:

Keeping up with the OpenAI blog is a good idea if you can.


Thank you, I will try to get it done. No luck so far.

Is this code you just copied and pasted, or something you built?

Your old code using text-davinci-003 should work with the gpt-3.5-turbo-instruct model… You do have to keep all the libraries updated if you go that route, though.

I returned to v3
but get a 400 error now:

    async getAiText(promptStr) {
        const completion = await openai.createChatCompletion({
          model: "gpt-3.5-turbo",
          prompt: promptStr,
          max_tokens: 855,
          frequency_penalty: 0,
          presence_penalty: 0,
          temperature: 0.7,
          top_p: 1
        });
        return completion.data.choices[0].text;
    },

“message”: “Request failed with status code 400”

I have also tried model: "gpt-3.5-turbo-instruct"

Did you look at the link for model endpoint compatibility?

Might also take a look at the quickstart guide.

Or if you can list all libraries and versions you’re using, and the code, we might be able to help more.

I did.
According to my endpoint (/v1/chat/completions),
I have to use:
gpt-4 and dated model releases, gpt-4-turbo-preview and dated model releases, gpt-4-vision-preview, gpt-4-32k and dated model releases, gpt-3.5-turbo and dated model releases, gpt-3.5-turbo-16k and dated model releases, fine-tuned versions of gpt-3.5-turbo

Right? I have tried them, but…

What specific error are you getting when you hit the API?

    "url": "api.openai.com/v1/chat/completions",
    "status": 400

The HTTP status code 400 is used to indicate a “Bad Request.” This status code is a response from a web server indicating that the server could not understand the request due to invalid syntax.

When a server returns a 400 status code, it’s essentially telling the client (e.g., your web browser or an application making the request) that the request it sent is incorrect or corrupted and cannot be processed by the server. This can happen for several reasons, such as:

The request is malformed, meaning the syntax of the request is incorrect.
The request contains invalid parameters or arguments.
The request is missing required information, such as headers or body content needed for the server to process the request.

To resolve a 400 Bad Request error, one needs to check the request being made for any incorrect or missing information and correct it before sending the request again. This might involve reviewing the API documentation if you are interacting with an API, ensuring that the URL is correct, or checking that the data you are sending in the request body is properly formatted.
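In practice the quickest way to see what the server objected to is to log the error body, not just the status. A sketch, assuming the v4 SDK (whose thrown errors carry `status` and `message`); `safeChat` is my own wrapper name:

```javascript
// Surface the server's explanation of a 400 instead of only the status code.
async function safeChat(openai, params) {
  try {
    return await openai.chat.completions.create(params);
  } catch (err) {
    // For a 400 this message usually names the offending parameter,
    // e.g. an unexpected `prompt` field on the chat endpoint.
    console.error("OpenAI API error:", err.status, err.message);
    throw err;
  }
}
```

The wrapper takes the client as an argument, so any object with the same `chat.completions.create` shape can stand in for it during debugging.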

My gut says it’s likely your messages object and/or library problem.
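For what it's worth, a minimal sketch of the fix under that diagnosis (the chat endpoint takes `messages`, not `prompt`, and the reply lives under `.message.content`, not `.text`; `buildChatRequest` is my own helper name):

```javascript
// The 400 above most likely comes from sending `prompt` where the chat
// endpoint expects `messages`, and reading `.text` where chat replies
// put `.message.content`.
function buildChatRequest(promptStr) {
  return {
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: promptStr }],
    max_tokens: 855,
    temperature: 0.7,
  };
}

async function getAiText(openai, promptStr) {
  // v3 SDK method name; in v4 the same call is openai.chat.completions.create
  const completion = await openai.createChatCompletion(buildChatRequest(promptStr));
  // v3 wraps the response body in `.data`
  return completion.data.choices[0].message.content;
}
```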

Looks like I have done it.
Weird, but I tried this way before.
Thank you so much, my friend, for your time and help.
