I was having this exact same problem and I’ve just fixed it. I know this sounds strange, but I fixed it by changing the temperature parameter in the request to 0. I’m not sure how or why that fixed it, but once I changed it, I no longer get a 400 Bad Request. Check your parameters and tweak them to see if the 400 goes away.
I’m also experiencing this issue. I have a very simple pass-through API endpoint in Next.js using the OpenAI Node package. I’ve double-checked my keys, billing, etc. However, adding max_tokens=1024 in any position causes an error, and adding temperature and other parameters didn’t fix it. I would include images showing the example, but I can only upload one image as a new user.
However, I’m able to get a cURL to work just fine, which leads me to think this may be an issue in the node package.
I confirmed this by testing this request:
export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse<OpenAPICompletion>
) {
  const requestOptions = {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer OMITTED",
    },
    body: JSON.stringify({
      model: "text-davinci-003",
      prompt: "Please provide 1024 tokens of test data",
      max_tokens: 1024,
    }),
  };
  await fetch("https://api.openai.com/v1/completions", requestOptions)
    .then(async (response) => {
      res.send(await response.json());
    })
    .catch((error) => {
      console.error("Error on request:", error);
      res.send(error);
    });

  // The equivalent call through the openai package, which failed:
  // console.log("req.query", req.query);
  // try {
  //   const response = await openai.createCompletion(req.query);
  //   res.send(response.data as CreateCompletionResponse);
  // } catch (e) {
  //   console.log("Error on request", e);
  //   res.send(e);
  // }
}
Which returns the expected result:
Yes, as I mentioned at the top, the temperature and similar values that you put in environment variables need to be converted to numbers; in my case, that fixed it.
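For anyone hitting the same string-vs-number issue: environment variables always come through as strings, so something like the sketch below converts them before they go into the request body. The variable names TEMPERATURE and MAX_TOKENS here are just examples, not names from this thread.

```javascript
// Environment variables are always strings, so Number() them before
// building the request body.
function numericEnv(raw, fallback) {
  // Number(undefined) and Number("abc") are NaN; Number("") is 0,
  // so treat the empty string as "not set" as well.
  if (raw === undefined || raw === "") return fallback;
  const value = Number(raw);
  return Number.isNaN(value) ? fallback : value;
}

const temperature = numericEnv(process.env.TEMPERATURE, 0);
const maxTokens = numericEnv(process.env.MAX_TOKENS, 256);
```

With this, `"0.7"` in the environment becomes the number 0.7 in the body, instead of the string "0.7" that triggers the 400.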
I had the same issue. The cause of the error was that I was using the wrong field name for max tokens.
Bad
data: '{"model":"gpt-4","messages":[{"role":"user","content":"Generate a random number between 1 and 10"}],"maxTokens":100,"temperature":0.5}'
Good
data: '{"model":"gpt-4","messages":[{"role":"user","content":"Generate a random number between 1 and 10"}],"max_tokens":100,"temperature":0.5}'
For me it was a stupid mistake of passing in the string "0" instead of the number 0 as the temperature parameter.
Make sure all your input values are of the correct type and format.
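One way to catch both of the mistakes above (the camelCase field name and a stringly-typed number) before the API rejects them is a small pre-flight check. This is just an illustrative sketch; checkChatBody is a hypothetical helper, not part of the openai package:

```javascript
// Return a list of problems with a chat-completions request body,
// covering the specific mistakes reported in this thread.
function checkChatBody(body) {
  const problems = [];
  if ("maxTokens" in body) {
    problems.push('use "max_tokens", not "maxTokens"');
  }
  if ("temperature" in body && typeof body.temperature !== "number") {
    problems.push("temperature must be a number, not a string");
  }
  if ("max_tokens" in body && !Number.isInteger(body.max_tokens)) {
    problems.push("max_tokens must be an integer");
  }
  return problems;
}
```

Calling it before the fetch and logging any returned problems turns a vague 400 into a message you can act on locally.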
Hi all,
I am trying to fetch from DALL·E and am stuck with the same error. I tried with two different API keys, but the result is still a 400 and the Axios error from openai. Can someone give me a suggestion about where the problem is?
const handleSubmit = async (type) => {
  // Bail out early if there is no prompt, instead of sending an empty request.
  if (!prompt) return alert("Please enter a prompt");
  try {
    setGeneratingImg(true);
    const response = await fetch("http://localhost:8080/api/v1/dalle", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
    });
    const data = await response.json();
    handleDecals(type, `data:image/png;base64,${data.photo}`);
  } catch (error) {
    alert(error);
  } finally {
    setGeneratingImg(false);
    setActiveEditorTab("");
  }
};
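One suggestion regardless of the root cause: the snippet above swallows the 400’s body, which is where the actual error message lives. Below is a hedged sketch of a wrapper that surfaces it; fetchJsonOrThrow is a hypothetical helper, and the fetchImpl parameter exists only so it can be exercised without a network.

```javascript
// Fetch JSON and, on a non-2xx status, throw an Error carrying the
// response body so the 400's details end up in the alert/console.
async function fetchJsonOrThrow(url, options, fetchImpl = fetch) {
  const response = await fetchImpl(url, options);
  const body = await response.json().catch(() => null);
  if (!response.ok) {
    // OpenAI-style error bodies put the message at body.error.message.
    const detail = body?.error?.message ?? JSON.stringify(body);
    throw new Error(`HTTP ${response.status}: ${detail}`);
  }
  return body;
}
```

Replacing the bare `fetch` + `response.json()` pair with this makes the `catch (error) { alert(error); }` branch show the server’s explanation instead of just failing later on `data.photo`.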
I got the same error today. I think it’s because the request size was too large.
Look at the full response body, it should have details on what specifically went wrong. If you’re unable to figure it out you can start a new thread with your code and error.
Hey, I am having the same doubt. Did you fix it ?
@bhanupsc12 Welcome, see the previous message. A 400 status is used for a wide variety of errors that all require different fixes. The response body will have the specific error message.
Does anyone have a possible fix for this?
Same issue here.
cc: @logankilpatrick
@developerayo There are many reasons you’ll get a 400 error, nearly all of them due to issues in your code. The full response body will have a detailed error message on what specifically is wrong. Look at the response body and share what error code you are getting if you need help solving it.
I got kind of the same error and I’m not sure how to fix it. It started with this message:
TypeError: Cannot read properties of null (reading 'length')
at Object.getMessageById (file:///C:/Users/Frynox/Desktop/Virtualia-BOT/Backend/src/services/messageServices.js:133:24)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async getMessageAndConversation (file:///C:/Users/Frynox/Desktop/Virtualia-BOT/Backend/src/services/openaiServices.js:110:19)
at async generateGptAnswer (file:///C:/Users/Frynox/Desktop/Virtualia-BOT/Backend/src/services/openaiServices.js:135:21)
at async getGptAnswer (file:///C:/Users/Frynox/Desktop/Virtualia-BOT/Backend/src/controllers/openaiController.js:6:22)
Then it just fixed itself and worked normally again, but at some point everything started to misbehave, so I added a console.log(error) and got this whole message:
Error: Request failed with status code 400
    at createError (C:\Users\Frynox\Desktop\Virtualia-BOT\Backend\node_modules\openai\node_modules\axios\lib\core\createError.js:16:15)
    at settle (C:\Users\Frynox\Desktop\Virtualia-BOT\Backend\node_modules\openai\node_modules\axios\lib\core\settle.js:17:12)
    at IncomingMessage.handleStreamEnd (C:\Users\Frynox\Desktop\Virtualia-BOT\Backend\node_modules\openai\node_modules\axios\lib\adapters\http.js:322:11)
    at IncomingMessage.emit (node:events:525:35)
    at IncomingMessage.emit (node:domain:489:12)
    at endReadableNT (node:internal/streams/readable:1359:12)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  config: {
    method: 'post',
    url: 'https://api.openai.com/v1/chat/completions',
    headers: {
      Accept: 'application/json, text/plain, */*',
      'Content-Type': 'application/json',
      'User-Agent': 'OpenAI/NodeJS/3.3.0',
      Authorization: 'Bearer s1Z',
      'Content-Length': 2098
    },
    data: '{"model":"gpt-3.5-turbo","messages":[{"role":"system","content":"\n Eres Phone Store, una empre"}.\n "},,{"role":"user","content":"hola"}],"temperature":0.1}'
  },
  request: <ref *1> ClientRequest { /* socket and agent internals trimmed */ },
  response: {
    status: 400,
    statusText: 'Bad Request',
    headers: {
      date: 'Mon, 14 Aug 2023 15:10:13 GMT',
      'content-type': 'application/json',
      'content-length': '154',
      'openai-processing-ms': '5',
      'x-ratelimit-limit-requests': '3500',
      'x-ratelimit-remaining-requests': '3499',
      'x-request-id': 'b724d5116930dcef6dfd4206c71494a3',
      server: 'cloudflare'
    },
    data: { error: [Object] }
  },
  isAxiosError: true,
  toJSON: [Function: toJSON]
}
Look at the full response and you’ll get a more descriptive error message. Wrap your code in a try/catch like this, and share the message here if you still need help resolving the issue.
try {
  const response = await openai.createChatCompletion({
    // <...rest of your code here....>
  });
} catch (error) {
  console.error(error.response?.data ?? error.message);
}
Thank you very much, I got the following message:
{
  error: {
    message: "[ ] is not of type 'object' - 'messages.1'",
    type: 'invalid_request_error',
    param: null,
    code: null
  }
}
Oh god, never mind. I was using a kind of memory file where the conversation is saved so OpenAI can respond using it, but I had modified the file and one of the message entries ended up as an empty array, hahaha.
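For anyone else loading a saved conversation from a file: a small filter over the history before sending it guards against exactly this class of 400. The sketch below assumes the standard {role, content} message shape; sanitizeMessages is a hypothetical helper, not something from the openai package.

```javascript
// Drop malformed entries from a saved conversation so a stray []
// or null can't trigger "is not of type 'object' - 'messages.N'".
function sanitizeMessages(history) {
  return history.filter(
    (m) =>
      m !== null &&
      typeof m === "object" &&
      !Array.isArray(m) && // arrays also have typeof "object"
      typeof m.role === "string" &&
      typeof m.content === "string"
  );
}
```

Running the deserialized history through this before passing it as `messages` keeps a corrupted memory file from taking the whole request down.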
Hi, I am having an issue with a Unity iOS build of an app.
Currently, the OpenAI API I’m using to generate images in real time works fine in the Unity editor, but after building the iOS app it doesn’t work at all. When I run the app on an iPad I get this error message in the logs. Can anyone please help me fix this?
ArgumentException: The Object you want to instantiate is null.
UnityEngine.Object.Instantiate (UnityEngine.Object original, UnityEngine.Transform parent, System.Boolean instantiateInWorldSpace) (at <00000000000000000000000000000000>:0)
UnityEngine.Object.Instantiate[T] (T original, UnityEngine.Transform parent, System.Boolean worldPositionStays) (at <00000000000000000000000000000000>:0)
AnchorCreator.Update () (at <00000000000000000000000000000000>:0)
Utilities.WebRequestRest.RestException: [400] <color="cyan">"link" <color="red">Failed!
[Headers]
Content-Type: application/json
Access-Control-Allow-Origin: *
Alt-Svc: h3=":443"; ma=86400
Server: cloudflare
cf-cache-status: DYNAMIC
Date: Wed, 23 Aug 2023 17:24:42 GMT
x-request-id: c93d711e63beda228faff53f1695599c
Strict-Transport-Security: max-age=15724800; includeSubDomains
openai-organization: user-3oh1479kaqjflcrojze9tuzc
Content-Length: 144
openai-version: 2020-10-01
cf-ray: 7fb50cf3baf2776b-LHR
openai-processing-ms: 34
[Data] 144 bytes
[Body]
{
  "error": {
    "code": null,
    "message": "'prompt' is a required property",
    "param": null,
    "type": "invalid_request_error"
  }
}
[Errors]
HTTP/1.1 400 Bad Request
Your code isn’t sending the prompt for some reason.
Thanks for highlighting it. I also noticed that the prompt is not being sent, but I still can’t figure out the problem. It works perfectly when I run it in the Unity editor: the request is sent with the prompt and the image gets generated. But once I build the app and test it on my phone, the image doesn’t get generated and I get this error instead. I have tested it on both iOS and Android devices.
I have been able to fix the problem. The build was stripping the types required for serialization. I changed the Managed Stripping Level property in Player Settings from Low to Minimal, and after rebuilding it worked on both iOS and Android.