Error retrieving 'completions': 400 Bad Request

I keep getting {Error retrieving ‘completions’: 400 Bad Request} on some prompts that work fine otherwise:

For example: Which phrase is most similar to “Health and beauty shop” out of the following: “Lottery Shop”, “Beauty Supplies”, “Health Care Service”, “Medicinal And Recreational Dispensary”, “Other Repair Shops”, “Beauty Salon”, “Newsagents And Tobacconists”?

Is there a limit on the size of the prompt or the number of items the question is for? I’m at a loss here.

This error message indicates a problem with the request being sent to the API: “Error retrieving ‘completions’: 400 Bad Request” typically means the API cannot process the request because of an issue with its syntax or structure.

A common cause of a 400 error is a prompt that exceeds the token limit the API allows.

I recommend you check the API documentation to see if there are any limits on the size of the prompt or the number of items in the question. If there are limits, make sure the prompt you are sending meets those requirements.

Also, try rephrasing the question and make sure the prompt is in a clear, structured format. That way you may be able to determine whether the issue is with the question or with the underlying service.


Here are the token limits for the models…

Do you just get the error occasionally on any prompt? Or is it specific prompts that cause the error?

I was experiencing this issue for a while. Some of my long prompts would return a 400 error when I included a 4000 token limit. Try reducing the tokens for the request and see if this works.

I reduced the max_tokens parameter from 4000 to 2200, and this fixed the issue.
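Reducing max_tokens by hand works, but it can also be done programmatically. Here is a minimal sketch (the function name `clampMaxTokens` is my own, and it assumes you already know the prompt's token count, e.g. from a tokenizer library):

```javascript
// Clamp the requested max_tokens so prompt + completion fit the context window.
// promptTokens must be counted elsewhere (e.g. with a tokenizer); it is passed in directly here.
function clampMaxTokens(promptTokens, requestedMaxTokens, contextLength) {
  const available = contextLength - promptTokens;
  if (available <= 0) {
    throw new Error(
      `Prompt alone (${promptTokens} tokens) exceeds the ${contextLength}-token context window`
    );
  }
  return Math.min(requestedMaxTokens, available);
}

// With a 4097-token context and a 500-token prompt, asking for 4000
// completion tokens gets clamped to the 3597 that actually fit.
console.log(clampMaxTokens(500, 4000, 4097)); // 3597
```

Pass the clamped value as max_tokens in the request instead of a hard-coded 4000, and the 400 should go away for any prompt length.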


Thanks, it solved it for me too. But it is a strange error. Maybe a bug?

I am facing the same issue. In my case, everything worked when I started with 2200, but after I moved to 4000 none of the requests worked. When I moved back down to 2200, the requests were still failing, so I had to drop to an even lower limit, e.g. 1250, and then it worked. I do not understand this weird behaviour, and there is no help in the documentation about it. Can anybody please help? I need a higher token limit, as some of my prompts can be quite long.


The total tokens must not exceed the model’s maximum context length, or it will spit out this error.

Another way to say it is that the number of input tokens you provide plus what you set max_tokens to must sum to less than or equal to the model’s maximum context length.

So if the model’s maximum context length is 4097, and you set max_tokens to 4000, your input text must contain 97 tokens or less.
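The rule above can be expressed as a one-line pre-flight check (the function name `fitsContext` is my own; token counts are assumed to come from a tokenizer, not computed here):

```javascript
// Pre-flight check mirroring the API's rule: prompt tokens + max_tokens
// must not exceed the model's maximum context length.
function fitsContext(promptTokens, maxTokens, contextLength) {
  return promptTokens + maxTokens <= contextLength;
}

fitsContext(97, 4000, 4097); // true  — exactly fills the window
fitsContext(98, 4000, 4097); // false — one token over, expect a 400
```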



Here is an illustration of this from my test lab:

This model’s maximum context length is 4097 tokens, however you requested 4102 tokens (6 in your prompt; 4096 for the completion). Please reduce your prompt; or completion length.

Screenshot of Completion Error:

FYI: The Completion Setup (4096 max_tokens)


OpenAI really needs to explain this everywhere they talk about tokens!

I am getting the same error, but for creating an image. The request does not have any temperature or dotenv variables. I checked the last letter of my key as well; it’s the same. What do you think is the error in the code?

const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: key, // key defined elsewhere
});
const openai = new OpenAIApi(configuration);

// app is an Express app defined elsewhere
app.post("/images", async (req, res) => {
  const response = await openai.createImage({
    prompt: "A cute baby sea otter",
    n: 2,
    size: "1024x1024",
  });
  res.json(response.data);
});
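One way to find out what the 400 actually complains about: the v3 `openai` npm package wraps axios, so a failed request throws an error carrying `response.status` and `response.data`, including the API's own message. Here is a small helper to surface it (the function name `describeApiError` is my own):

```javascript
// Extract the API's own error message from an axios-style error thrown
// by the v3 "openai" npm package, so a 400 tells you *why* it was rejected.
function describeApiError(err) {
  if (err.response) {
    const status = err.response.status;
    const message = err.response.data?.error?.message ?? "(no message)";
    return `API error ${status}: ${message}`;
  }
  return `Request failed before reaching the API: ${err.message}`;
}

// Usage sketch:
//   try {
//     await openai.createImage({ ... });
//   } catch (err) {
//     console.error(describeApiError(err));
//   }
```

Logging that message should reveal whether the 400 is about the key, the parameters, or the payload shape.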


This indeed is a token issue.
If you exceed the maximum input token threshold for a model, the OpenAI API throws a 400.

The same goes for the model name: if you pass a wrong model name, the OpenAI API also throws a 400.

Somehow the error fails to explain what the real issue is.
It should have been more empathetic toward developers.

A good hack is to use the Playground to check whether your inputs are valid. It shows all the errors with appropriate messages.