openai.error.InvalidRequestError:

Issue: openai.error.InvalidRequestError: This model’s maximum context length is 4097 tokens. However, your messages resulted in 4275 tokens. Please reduce the length of the messages.

Hello Team,

I came across the error above and have tried all possible solutions, but I could not resolve the issue. I have tried reducing the limit as well and played around with the config, but I am not able to get to a resolution. Now my GPT does not even respond to "Hi" without the same error occurring.

The following is my config:

model = "gpt-3.5-turbo"
TEMPERATURE = 0.5
MAX_TOKENS = 500  # tried 300 and 400 but it did not help
FREQUENCY_PENALTY = 0
PRESENCE_PENALTY = 0.6

# limits how many questions we include in the prompt
MAX_CONTEXT_QUESTIONS = 10

Please let me know if there is any more information that is needed.

Also, I am not sure how the tokens are being calculated. The prompt "Hi" is only a couple of characters, so how does it reach the limit? And the answer that I am expecting is short too.

Any help would be appreciated.

Best Regards!

Not sure what script/app you're using, but my guess is that MAX_CONTEXT_QUESTIONS is the culprit.

Any context is included in the prompt, and you’ve likely gone over the max prompt size.
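For example, in Python, something like this would enforce that cap before every call. This is a rough sketch; build_messages, history, and the other names are hypothetical, since we can't see your script:

# Hypothetical sketch: cap the conversation history before each call.
# `history` is assumed to be a list of {"role": ..., "content": ...} dicts.
MAX_CONTEXT_QUESTIONS = 10

def build_messages(system_prompt, history, new_question):
    # Keep only the last N question/answer pairs (2 messages per pair).
    trimmed = history[-(MAX_CONTEXT_QUESTIONS * 2):]
    return (
        [{"role": "system", "content": system_prompt}]
        + trimmed
        + [{"role": "user", "content": new_question}]
    )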


Check this link from OpenAI on how to calculate the tokens.

Please note that requests can use up to 4097 tokens, shared between the prompt and the completion. If your prompt is 4000 tokens, your completion can be 97 tokens at most.
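If you want to check the count yourself before sending a request, the tiktoken library is one way to do it. A minimal sketch in Python (the per-message formatting overhead is an approximation and varies slightly by model):

import tiktoken

def count_tokens(messages, model="gpt-3.5-turbo"):
    # Approximate the prompt token count for a chat request.
    enc = tiktoken.encoding_for_model(model)
    total = 2  # approximate priming tokens for the assistant's reply
    for msg in messages:
        total += 4  # rough per-message formatting overhead for gpt-3.5-turbo
        total += len(enc.encode(msg["role"]))
        total += len(enc.encode(msg["content"]))
    return total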

So in your case, the "message" array already exceeded 4097 tokens. By "message" I mean:

const message = [
{ role: "system", content: your_system_prompt },
{ role: "user", content: "Hi" },
]

So perhaps your system prompt is too long. If not, then perhaps you are attaching previous conversations and they are already too long.

const message = [
{ role: "system", content: your_system_prompt },
// previous conversations here
{ role: "user", content: "Hi" },
]
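If the previous conversations are the problem, a common fix is to drop the oldest turns until the prompt fits a token budget. A minimal sketch, reusing the count_tokens helper above and leaving room for the completion:

def fit_to_budget(system_prompt, history, new_question,
                  limit=4097, max_tokens=500):
    # Drop the oldest history entries until the prompt fits
    # within the model limit minus the completion budget.
    budget = limit - max_tokens
    history = list(history)
    while True:
        messages = (
            [{"role": "system", "content": system_prompt}]
            + history
            + [{"role": "user", "content": new_question}]
        )
        if count_tokens(messages) <= budget or not history:
            return messages
        history.pop(0)  # discard the oldest message first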

Yes, I understand that part: I have exceeded the token limit. My question is, how do I resolve this issue now?

I thought I might have to clear my history, and I have tried that in my code as well, but with no improvement.

So what is the resolution to this issue?

The issue has been resolved, or in other words, I understand what the issue is.

Thank you

Hi, I am facing a similar issue. If you don't mind, could you please share how you resolved it? I am making the exact same API calls as I had been making earlier, but now I am somehow getting the max_tokens error.

You'll need to reduce the size of your prompt (including message history).

I ran into this error message and thought it had to do with too much history being remembered, or something like that, since I was repeatedly calling the API with the same question plus an appended one-line piece of data, for about 70,000 lines. It turned out that one of my input line strings was just too long: basically, I had bad data and had not done the obvious thing of truncating each input line to something reasonable.

So for me, the error message simply meant that the total question string was longer than the model's 8,192-token limit (the limit is measured in tokens, not characters). Easy enough to truncate the input to something more reasonable.

I also wrapped the API call in a try block in case this happens again.
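In case it helps anyone, here is roughly what that looks like. This is a sketch using the pre-1.0 openai Python library; input_lines, question, and the 500-character cap are illustrative, not the exact values from my script:

import openai

MAX_LINE_CHARS = 500  # illustrative cap for each appended data line

for line in input_lines:
    line = line[:MAX_LINE_CHARS]  # truncate overlong/bad input up front
    try:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": question + "\n" + line}],
        )
    except openai.error.InvalidRequestError as err:
        # Log and skip any row that still blows past the context limit.
        print(f"Skipping overlong input: {err}")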

[quote="alijavaidistar, post:5, topic:256925, full:true"]
The issue has been resolved, or in other words, I understand what the issue is.
[/quote]

How did you resolve that issue?