An error occurred: Error code: 400 - {'error': {'message': "This model's maximum context length is 4097 tokens. However, you requested 4927 tokens (3927 in the messages, 1000 in the completion). Please reduce the length of the messages or completion.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
It seems my context length is exceeding the maximum allowed, and I'm stumped on how to fix it (currently I am passing all the previous conversations into the new prompt for conversation continuity). Any straightforward advice or hacks to work around this? I'm all ears for your frank tips on adjusting message lengths or optimizing completions. Thanks a bunch for any pointers you can throw my way!
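Roughly, my loop looks like this (a simplified sketch, not my exact code; the variable names and model are illustrative):

```python
from openai import OpenAI

client = OpenAI()  # API key read from the OPENAI_API_KEY environment variable

history = []  # every turn is appended, so the prompt grows on each call

def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=history,   # the whole conversation so far goes into the request
        max_tokens=1000,    # tokens reserved for the completion
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```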
If you have flexibility, then the easiest fix would be to switch to a model with a higher context window. See the overview as per the links provided: GPT-4-turbo will provide you with the highest flexibility. Alternatively, GPT-3.5-turbo should address your issue, too.
I am using both models. I think the token limit of gpt-4 is about 8,000 and gpt-3.5-turbo is 4,097, but if I exceed the limit this error happens.
Is there any cost-effective way to overcome this issue?
In terms of models, there are variations of GPT-4 and GPT-3.5 as you can see in the overview.
gpt-4-0125-preview will provide you with a 128k context window and is cheaper than the gpt-4 model with the 8k context window.
gpt-3.5-turbo-0125 will provide you with a 16k context window. I don't know how it compares to the gpt-3.5-turbo model with the lower context window in terms of pricing, but here's the pricing page with the latest model pricing for your reference: Pricing
Another option you have, if you don't wish to change models, is to reduce the output token length (the max_tokens parameter), e.g. from 1000 to 800 (or lower if needed).
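With the Python SDK that's just lowering the max_tokens argument on the call (a sketch; the surrounding client and messages setup is assumed):

```python
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
    max_tokens=800,  # lowered from 1000, leaving more of the 4,097-token window for the input
)
```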
Any amendments to your inputs are really content-specific and I can’t really give guidance without understanding the nature of your content.
You are correct, I am currently setting the max_tokens parameter to 1000. Also, how does the ChatGPT website handle previous conversations? Is there any way to know about that? I am trying to make a chatbot.
No, as @jr.2509 mentioned, there is no direct method to limit the input prompt in the same way that you use max_tokens to limit the output. I would recommend not setting a limit on the output; instead, let it default to using the maximum tokens allowed per model.
From what I understand, you're attempting to maintain a conversation by feeding previous messages into new calls, correct? If so, your prompt could grow indefinitely.
To address this issue, you might need to use a third-party tool like LangChain. This would allow you to implement a memory mechanism, though it may also result in higher token consumption.
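If you'd rather avoid an extra dependency, another common approach is to trim the oldest messages yourself before each call so the history stays under a token budget. A rough sketch using tiktoken (the counting here is approximate, since the API adds a small per-message overhead, and the budget value is illustrative):

```python
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

def trim_history(messages, budget=3000):
    """Drop the oldest non-system messages until the rough token count fits the budget."""
    def rough_count(msgs):
        return sum(len(encoding.encode(m["content"])) for m in msgs)

    trimmed = list(messages)
    while rough_count(trimmed) > budget and len(trimmed) > 1:
        # keep a leading system message (if present) and discard the oldest turn after it
        drop_index = 1 if trimmed[0]["role"] == "system" else 0
        trimmed.pop(drop_index)
    return trimmed
```

Older context gets dropped, but every request then fits inside the model's window without switching models or paying for a larger context.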
I tried using the LangChain ConversationBufferMemory, but I didn't realise it would eat up more tokens. Oh, I see. I appreciate your suggestion, buddy. I'm currently appending my past conversation turns to a list and passing that list along with the new prompt. I believe I must continue using this strategy, since the LangChain method uses more tokens than my own and I need to keep overall token usage as low as possible.