"This model's maximum context length is 8193 tokens" Does not make sense

I am confused.
This has happened a few times today.

2 Likes

You see, here’s the thing.

While that model’s maximum context length is 8193 tokens, you requested 8617 tokens of context by submitting 7081 in your prompt, and reserving 1536 tokens via the max_tokens parameter.

Or rather, the crummy software did that by not managing conversation history size effectively.
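The arithmetic the error reports is just a budget check failing. A minimal sketch of it (not ChatGPT’s actual internals; the limits below are simply the numbers from your error message):

```python
# Minimal sketch of the budget check the error message describes.
# Assumes the tiktoken package; the two constants are the figures
# quoted in the error above, not anything read from the API.
import tiktoken

MODEL_CONTEXT_LIMIT = 8193   # maximum context the model accepts
MAX_TOKENS = 1536            # tokens reserved for the completion

enc = tiktoken.get_encoding("cl100k_base")

def fits_in_context(prompt_text: str) -> bool:
    """True only if prompt tokens plus the reserved completion fit the model."""
    prompt_tokens = len(enc.encode(prompt_text))
    requested = prompt_tokens + MAX_TOKENS
    print(f"prompt={prompt_tokens}, completion={MAX_TOKENS}, total={requested}")
    return requested <= MODEL_CONTEXT_LIMIT
```

In your case that check would have come out 7081 + 1536 = 8617, which is 424 tokens over the 8193 limit.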

2 Likes

I’ve also started seeing this within the last 6 hours. Seems like a bug. It’s unfortunate because every error counts against the 25 submission limit.

1 Like

Oh good, then it’s not just me. I’ve had to start over a couple of times, and I couldn’t find a pattern to the error. So yeah, I hope it’s temporary; then again… I suppose it would almost have to be.

1 Like

What conversation history are you passing into the prompt?

Do you see the image? That’s it. That’s why I’m confused. It’s a single SQL command, and it was empty at that.

What we don’t see is the rest of the chat history, which could have been going on for days, where the management system has to decide how many relevant or most recent turns of chat get passed back in along with the new question. It could be that some CEO-type bumped the max_tokens output from 1024 to 1536 on a whim but neglected to lower the chat-history budget to match.
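If you were managing this yourself, the usual fix is a rolling trim: drop the oldest turns until the prompt plus the reserved completion fits. Roughly like this (an illustrative sketch only; tiktoken-based counting, and the message layout is made up):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
CONTEXT_LIMIT = 8193
MAX_TOKENS = 1536  # if you raise this, the history budget has to come down too

def count_tokens(messages):
    # Rough per-message count; the real chat format adds a few tokens of
    # overhead per message, so this errs on the generous side.
    return sum(len(enc.encode(m["content"])) + 4 for m in messages)

def trim_history(system_msg, history, new_question):
    """Drop the oldest turns until everything fits under the budget."""
    messages = [system_msg] + history + [new_question]
    while history and count_tokens(messages) + MAX_TOKENS > CONTEXT_LIMIT:
        history = history[1:]            # discard the oldest turn
        messages = [system_msg] + history + [new_question]
    return messages
```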

1 Like

Yes, I think we all generally understand how context windows work with our chat history… it’s just that it hasn’t really ever done this before. It’s always been a rolling window.

Yeah, in that case it was a short session; the bot was instructed to carry out a few SQLite database tests. Can I even change completion lengths in ChatGPT? I thought that was an API-only feature.

With no extended user conversation to fill up the input context buffer, we can only assume something went quite haywire on the second API call, the one that hands the plugin’s output to a second AI so it can answer. There may be other output that is never displayed, such as runaway repeating text or garbage appended after the JSON the plugin’s API returned, or output from some other activated plugin that isn’t noted in the display.

The error message is quite typical of the one the AI model returns when a programmer sends too much to the API themselves. Replicated: OpenAI API request was invalid: This model’s maximum context length is 2049 tokens, however you requested 2793 tokens (2271 in your prompt; 522 for the completion). Please reduce your prompt; or completion length.
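A request like the following is enough to reproduce that kind of error (a sketch using the pre-1.0 openai Python package; model and sizes are arbitrary):

```python
import openai  # pre-1.0 style openai package assumed here

openai.api_key = "sk-..."  # placeholder key

try:
    openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "word " * 6000}],  # deliberately oversized prompt
        max_tokens=1024,
    )
except openai.error.InvalidRequestError as e:
    # The exception carries the same "maximum context length" text quoted above.
    print(e)
```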

Press thumbs down on the ChatGPT half-reply and hope they notice?

More likely, for a localhost API you’d want to put as much unstripped logging in place as you can and see what is actually being returned.
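Something like this on the localhost side captures everything unfiltered (a rough sketch assuming a Flask-based plugin server; adapt it to whatever framework the plugin actually uses):

```python
import logging
from flask import Flask, request

app = Flask(__name__)
logging.basicConfig(filename="plugin_raw.log", level=logging.INFO)

@app.before_request
def log_request():
    # Log the full, unstripped request body ChatGPT sends to the plugin.
    logging.info("REQUEST %s %s\n%s", request.method, request.path,
                 request.get_data(as_text=True))

@app.after_request
def log_response(response):
    # Log exactly what the plugin hands back, before ChatGPT sees it.
    logging.info("RESPONSE %s\n%s", response.status,
                 response.get_data(as_text=True))
    return response
```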

Whoa. Is this something with GPT-4? Implementing a database integration (creation, insertion, querying) with gpt-3.5-turbo was difficult. I’m on my phone right now, so pardon if I didn’t read all the context.

The forum category is “plugin-development”. The first screenshot shows a localhost API used to interface with the non-public plugin under development. Unless the plugin in its current form has a long history of working properly and ChatGPT only started throwing the error today with no changes to the plugin, the database, or the API, the likely problem is in what the developer is doing with the plugin.

1 Like

Ahhh, I understand. Thank you, sir/ma’am!

I’ve seen the error a few more times. I can confirm that it’s not just with my plugin; it happens randomly. However, hitting retry now seems to be working. I just wish I didn’t have to kill that 1/25.

Edit: I take it all back. It got worse, and I realize even GPT-3 has been bugging out on me. Looks like I’m just gonna quit for the day.

Also, the logs show normal.

FWIW I am also seeing “This model’s maximum context length” since yesterday, in a plugin I’m developing. First interactions are ok, but the 3rd or 4th will give the error.

Though the responses the plugin returns to ChatGPT are large, they never hit ResponseTooLargeError, so it is unclear to me how to go forward. I’ve tried to limit the prompt/AI instruction length, but I still get the error.
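If it helps anyone compare notes, a token-based cap on the plugin response would look roughly like this (a sketch only; tiktoken counting, and the budget number is an arbitrary guess, since the context error is about tokens rather than the byte limit behind ResponseTooLargeError):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def cap_response(text: str, budget: int = 2000) -> str:
    """Truncate the plugin's text/JSON payload to a fixed token budget
    before returning it to ChatGPT."""
    tokens = enc.encode(text)
    if len(tokens) <= budget:
        return text
    return enc.decode(tokens[:budget]) + "\n[truncated]"
```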

I also have the impression this is new since yesterday, but not because the plugin responses have gotten larger.

Hmm, I’m not sure it’s related to any one plugin. It’s happening to me no matter which one I use.

The model responses in general seem to have gotten way worse as well.

I too got the same error many times today.

This model’s maximum context length is 8193 tokens, however you requested 8689 tokens (7153 in your prompt; 1536 for the completion). Please reduce your prompt; or completion length.

And three’s a pattern. Make sure you’re reporting those errors.

I’m getting the same thing.

This model’s maximum context length is 8193 tokens, however you requested 8195 tokens (6682 in your prompt; 1513 for the completion). Please reduce your prompt; or completion length.

Also, did anyone else’s ChatGPT turn Hawaiian? It doesn’t stop saying aloha or mahalo to me now, with flower and tree emojis. So weird.

That is unusual; possibly some server-side issue while modifications are being made following the recent announcements.