Error: you requested 7409 tokens (1345 in the messages, 64 in the functions, and 6000 in the completion)

I use “gpt-3.5-turbo-0613”.
The following error occurred:
“Error: This model’s maximum context length is 4097 tokens. However, you requested 7409 tokens (1345 in the messages, 64 in the functions, and 6000 in the completion)”.
I asked for a response within “30 words”.
Please tell me how to solve this problem. This error never happens with “gpt-3.5-turbo”.

Would you mind posting the messages here?
Seems really strange.

What’s the max_tokens of the request?

I set max_tokens to 4000, or did not set the value at all.

max_tokens = 4000 corresponds to the completion value in the error.
When I set it to 6000, the completion value became 6000.
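In other words, the request adds up to 1345 (messages) + 64 (functions) + 6000 (completion) = 7409 tokens, which exceeds the model’s 4097-token context window.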


Sorry, I prompt in Japanese.
I tried to paste the GitHub link, but I couldn’t send the message.
sota1111/tmp

That might cause the error.

Hiragana: Unicode: 3040–309F
Katakana: Unicode: 30A0–30FF
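For illustration, here is a small Python sketch (mine, not from the thread) that uses exactly these ranges to check whether a string contains kana:

```python
# Detect Hiragana/Katakana using the Unicode ranges quoted above.
def contains_kana(text: str) -> bool:
    return any(
        0x3040 <= ord(ch) <= 0x309F   # Hiragana block
        or 0x30A0 <= ord(ch) <= 0x30FF  # Katakana block
        for ch in text
    )

print(contains_kana("こんにちは"))  # True
print(contains_kana("hello"))       # False
```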

I mean, I saw something similar in Chinese translation responses. I guess GPT-3 can’t identify words in Japanese, or at least not reliably enough.

And because of that, the max_tokens budget gets used up.

You’ve put a 4k limit and the answer is transcribed to Unicode?

@logankilpatrick ?

Problem solved: max_tokens equals the completion value, so it has to fit in the context window together with the messages and functions. A sketch of the fix is below.
Thank you @jochenschultz and @kjordan :slightly_smiling_face:
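For anyone hitting the same error, here is a minimal sketch of that fix, assuming the tiktoken tokenizer; the per-message overhead below is a rough approximation, not an exact chat-format count:

```python
# Cap max_tokens so messages + functions + completion fit inside
# gpt-3.5-turbo-0613's 4097-token context window.
import tiktoken

CONTEXT_LIMIT = 4097    # limit reported in the error message
FUNCTION_TOKENS = 64    # function definitions, per the error message

enc = tiktoken.encoding_for_model("gpt-3.5-turbo-0613")

def safe_max_tokens(messages, desired=4000):
    # Token count of the message contents, plus an approximate
    # per-message framing overhead added by the chat format.
    prompt_tokens = sum(len(enc.encode(m["content"])) for m in messages)
    prompt_tokens += 4 * len(messages) + 3
    available = CONTEXT_LIMIT - prompt_tokens - FUNCTION_TOKENS
    return max(1, min(desired, available))

messages = [{"role": "user", "content": "Answer within 30 words."}]
print(safe_max_tokens(messages))  # pass this as max_tokens instead of a fixed 6000
```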
