Request failed with status code 400

I finally solved it. The values (e.g. temperature) retrieved from environment variables were strings, and converting them to numeric types fixed the problem. This was a bug on my side, since this part was the only thing I changed yesterday. I’m ashamed to admit it, but I’ll leave a record here for anyone who runs into the same thing. (I’m writing this here because I’ve reached the maximum number of replies.)
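
For reference, the fixed call looks roughly like this (I’m showing Number() here, but any explicit numeric conversion works, since process.env values are always strings):

  const response = await openai.createCompletion({
    model: process.env.OPENAI_MODEL, // text-davinci-003
    prompt: prompt,
    max_tokens: Number(process.env.OPENAI_MAX_TOKENS), // 2048 as a number, not "2048"
    temperature: Number(process.env.OPENAI_TEMPERATURE), // 0.9
    presence_penalty: Number(process.env.OPENAI_PRESENCE_PENALTY), // 0.6
  })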



I suddenly got this error today, Jan 17, 2023…
Until yesterday, our code had been running correctly.

Error: Request failed with status code 400
      at createError (C:\Users\81906\Documents\slackbot-gpt3\node_modules\axios\lib\core\createError.js:16:15)
      at settle (C:\Users\81906\Documents\slackbot-gpt3\node_modules\axios\lib\core\settle.js:17:12)        
      at IncomingMessage.handleStreamEnd (C:\Users\81906\Documents\slackbot-gpt3\node_modules\axios\lib\adapters\http.js:322:11)
      at IncomingMessage.emit (node:events:525:35)
      at endReadableNT (node:internal/streams/readable:1359:12)
      at process.processTicksAndRejections (node:internal/process/task_queues:82:21)

The error occurs here.
The environment variables are, of course, being read correctly.

  const response = await openai.createCompletion({
    model: process.env.OPENAI_MODEL, // text-davinci-003
    prompt: prompt,
    max_tokens: process.env.OPENAI_MAX_TOKENS, // 2048
    temperature: process.env.OPENAI_TEMPERATURE, // 0.9
    presence_penalty: process.env.OPENAI_PRESENCE_PENALTY, // 0.6
  })

Hey! Can you share which API endpoint you are using?


I’m using the completions API, and I’ve updated the code above as well.

Are 100% of requests 404ing? Have you tried different prompts and such?


Not 404, it was 400.
And yes, 100% of them. It worked until yesterday, but today it does not.

Have you double-checked your account on the API site to make sure you have credits and such?


I just checked the Billing page again, and there are no problems.
The approved usage limit is $1,000.00, which should be sufficient.
Current usage is very small, and the hard limit and soft limit are fine.
The last payment was on 3 Jan and its status was paid…


Are you using an official SDK? It looks like Node.js, based on the error messages.


Yes, I’m using the official Node SDK provided by OpenAI…


Got it, can you go in and double-check the env vars? The API is live and up; I can see on my end that it is working. Any chance the env vars have been modified in some way?


How can I tell that the API is live and up?

The env vars are still OK here.

Try sending a simple curl request like the following: https://beta.openai.com/docs/api-reference/completions

curl https://api.openai.com/v1/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -d '{
  "model": "text-davinci-003",
  "prompt": "Say this is a test",
  "max_tokens": 7,
  "temperature": 0
}'

The API is live and up.


This is as much debugging as I can do on my end; the only other suggestion would be to try simpler prompts using the Node client and see if that works (or spin up a new sample Node project and try the same request there). You can also reach out to our support team at help.openai.com, but the queue time is long right now.
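
For anyone following along, a bare-bones standalone test along those lines might look roughly like the sketch below (this assumes the v3 openai Node package shown in the dumps above, and hard-codes the parameters as numbers, mirroring the curl example); logging err.response.data is worth doing too, since a 400 response body usually says exactly what the API rejected:

const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function main() {
  // Hard-coded, correctly typed parameters: numbers, not strings
  const response = await openai.createCompletion({
    model: "text-davinci-003",
    prompt: "Say this is a test",
    max_tokens: 7,
    temperature: 0,
  });
  console.log(response.data.choices[0].text);
}

main().catch((err) => {
  // The 400 response body usually explains what was rejected
  console.error(err.response ? err.response.data : err.message);
});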


I replaced the current prompt with a simple one like this and got the same error…


Ahhh, I don’t have any clue…
Could you check whether my account is active?

The error object says more…

{
    config: {
      transitional: [Object],
      adapter: [Function: httpAdapter],
      transformRequest: [Array],
      transformResponse: [Array],
      timeout: 0,
      xsrfCookieName: 'XSRF-TOKEN',
      xsrfHeaderName: 'X-XSRF-TOKEN',
      maxContentLength: -1,
      maxBodyLength: -1,
      validateStatus: [Function: validateStatus],
      headers: [Object],
      method: 'post',
      data: '{"model":"text-davinci-003","prompt":"Say hello","max_tokens":"2048","temperature":"0.9","presence_penalty":"0.6"}',
      url: 'https://api.openai.com/v1/completions'
    },
    request: ClientRequest {
      _events: [Object: null prototype],
      _eventsCount: 7,
      _maxListeners: undefined,
      outputData: [],
      outputSize: 0,
      writable: true,
      destroyed: false,
      _last: true,
      chunkedEncoding: false,
      shouldKeepAlive: false,
      maxRequestsOnConnectionReached: false,
      _defaultKeepAlive: true,
      useChunkedEncodingByDefault: true,
      sendDate: false,
      _removedConnection: false,
      _removedContLen: false,
      _removedTE: false,
      strictContentLength: false,
      _contentLength: 114,
      _hasBody: true,
      _trailer: '',
      finished: true,
      _headerSent: true,
      _closed: false,
      socket: [TLSSocket],
      _header: 'POST /v1/completions HTTP/1.1\r\n' +
        'Accept: application/json, text/plain, */*\r\n' +
        'Content-Type: application/json\r\n' +
        'User-Agent: OpenAI/NodeJS/3.1.0\r\n' +
        'Authorization: Bearer sk-xxxxx\r\n' +
        'OpenAI-Organization: org-xxxxx\r\n' +
        'Content-Length: 114\r\n' +
        'Host: api.openai.com\r\n' +
        'Connection: close\r\n' +
        '\r\n',
      _keepAliveTimeout: 0,
      _onPendingData: [Function: nop],
      agent: [Agent],
      socketPath: undefined,
      method: 'POST',
      maxHeaderSize: undefined,
      insecureHTTPParser: undefined,
      path: '/v1/completions',
      _ended: true,
      res: [IncomingMessage],
      aborted: false,
      timeoutCb: null,
      upgradeOrConnect: false,
      parser: null,
      maxHeadersCount: null,
      reusedSocket: false,
      host: 'api.openai.com',
      protocol: 'https:',
      _redirectable: [Writable],
      [Symbol(kCapture)]: false,
      [Symbol(kBytesWritten)]: 0,
      [Symbol(kEndCalled)]: true,
      [Symbol(kNeedDrain)]: false,
      [Symbol(corked)]: 0,
      [Symbol(kOutHeaders)]: [Object: null prototype],
      [Symbol(kUniqueHeaders)]: null
    },
    response: {
      status: 400,
      statusText: 'Bad Request',
      headers: [Object],
      config: [Object],
      request: [ClientRequest],
      data: [Object]
    },
    isAxiosError: true,
    toJSON: [Function: toJSON]
  }
}

I also examined the openai object itself.

{
  openai: OpenAIApi {
    basePath: 'https://api.openai.com/v1',
    axios: <ref *1> [Function: wrap] {
      request: [Function: wrap],
      getUri: [Function: wrap],
      delete: [Function: wrap],
      get: [Function: wrap],
      head: [Function: wrap],
      options: [Function: wrap],
      post: [Function: wrap],
      put: [Function: wrap],
      patch: [Function: wrap],
      defaults: [Object],
      interceptors: [Object],
      create: [Function: create],
      Axios: [Function: Axios],
      Cancel: [Function: Cancel],
      CancelToken: [Function],
      isCancel: [Function: isCancel],
      VERSION: '0.26.1',
      all: [Function: all],
      spread: [Function: spread],
      isAxiosError: [Function: isAxiosError],
      default: [Circular *1]
    },
    configuration: Configuration {
      apiKey: 'sk-xxxxx',
      organization: 'org-xxxxx',
      username: undefined,
      password: undefined,
      accessToken: undefined,
      basePath: undefined,
      baseOptions: [Object],
      formDataCtor: [Function]
    }
  }
}

I think I had the same issue. Does your API key end with the letter ‘u’, perhaps?

My environment variable (process.env.OPENAI_API_KEY) was not parsed correctly, or so I thought. It turns out that while copy-pasting the API key, a hidden character was added to the string, probably caused by a Unicode encoding issue somewhere. Yes, that’s a thing.

This hidden character leads to a 400 error because the request contains invalid characters, but only when the key is read from the .env file.

I fixed it by generating a new key.

I was using VS Code + WSL + Windows Terminal.
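
If anyone suspects the same thing, a quick debugging sketch is to inspect the raw character codes of the key after it is loaded from .env (an OpenAI key should be plain printable ASCII):

// Debugging sketch: flag anything outside the printable ASCII range
const key = process.env.OPENAI_API_KEY || "";
const suspicious = [...key].filter(
  (ch) => ch.charCodeAt(0) < 0x20 || ch.charCodeAt(0) > 0x7e
);
if (suspicious.length > 0) {
  console.log(
    "Hidden/non-ASCII characters found:",
    suspicious.map((ch) => ch.codePointAt(0).toString(16))
  );
}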


@hnishio0105 Did you figure this out? I’m getting the same error.
