_j
What we don’t see is the rest of the chat history, which could have been going on for days; the management system has to decide how many relevant or recent turns of chat get passed back in along with the new question. It could be that some CEO-type bumped the max_tokens output from 1024 to 1536 on a whim, but neglected to lower the chat-history management’s token budget to match.
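The trimming step described above can be sketched roughly. This is a hypothetical illustration, not ChatGPT’s actual implementation: the limits, the word-based token estimate, and the function names are all assumptions for the sake of showing how a completion reservation eats into the input budget.

```python
# Hypothetical sketch of rolling-window chat-history trimming.
# CONTEXT_LIMIT and MAX_COMPLETION are assumed values; real token
# counts come from the model's tokenizer, approximated here by words.

CONTEXT_LIMIT = 8193   # assumed model window
MAX_COMPLETION = 1536  # the bumped completion reservation speculated above

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 tokens per 3 words."""
    return max(1, (len(text.split()) * 4) // 3)

def trim_history(history: list[str], new_question: str) -> list[str]:
    """Keep only the most recent turns that fit the remaining input budget."""
    budget = CONTEXT_LIMIT - MAX_COMPLETION - estimate_tokens(new_question)
    kept: list[str] = []
    for turn in reversed(history):   # walk from the newest turn backwards
        cost = estimate_tokens(turn)
        if cost > budget:
            break                    # older turns are dropped
        budget -= cost
        kept.append(turn)
    return list(reversed(kept))      # restore chronological order
```

Note how raising MAX_COMPLETION without shrinking the history budget is exactly the mismatch speculated about: the same history that used to fit can now push the total over the window.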
Yes, I think we all generally understand how context windows work with our chat history… it’s just that it hasn’t really ever done this before. It’s always been a rolling window.
Yeah, in that case it was a short session; the bot was instructed to carry out a few SQLite database tests. Can I even change completion lengths in ChatGPT? I thought that was an API-only feature.
_j
With no extended user conversation to fill up the input context buffer, we can only assume something went quite haywire on the second API call, the one that gives a second AI the output of the plugin so it can answer. Perhaps there is other output that is not seen, like some crazy repeating text or garbage after the JSON returned by the plugin’s API, or another activated plugin not noted in the display.
The error message is quite typical of what the AI model returns when a programmer’s own request to the API is too large. Replicated: OpenAI API request was invalid: This model’s maximum context length is 2049 tokens, however you requested 2793 tokens (2271 in your prompt; 522 for the completion). Please reduce your prompt; or completion length.
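The arithmetic behind that error is simple: the prompt tokens plus the reserved completion length must fit inside the model’s window. A minimal check, using the numbers from the replicated error (the function name is my own, not anything from the API):

```python
def fits_context(prompt_tokens: int, max_completion: int, context_limit: int) -> bool:
    """True if the prompt plus the reserved completion fits the model's window."""
    return prompt_tokens + max_completion <= context_limit

# The replicated error above: 2271 + 522 = 2793 > 2049, so the request fails.
```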
Press thumbs down on the ChatGPT half-reply and hope they notice?
More likely for a localhost API, you’d want to put as much unstripped logging in place as you can and see what is being returned.
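A minimal sketch of that kind of unstripped logging, assuming the plugin returns JSON over the localhost API (function and logger names are hypothetical): log the raw body before parsing, so any garbage trailing the JSON is visible even when parsing fails.

```python
import json
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("plugin")

def parse_plugin_response(raw: str) -> dict:
    """Log the unstripped response body, then parse it as JSON.

    Garbage after the JSON (one suspect in this thread) shows up in the
    debug log even when json.loads raises "Extra data".
    """
    log.debug("raw plugin response (%d chars): %r", len(raw), raw)
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        log.error("response is not clean JSON at char %d: %s", exc.pos, exc.msg)
        raise
```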
Whoa. Is this something with GPT4? Implementing a database integration (creation, insertion, querying) with gpt-3.5-turbo was difficult. I’m on my phone right now, so pardon if I didn’t read all the context.
_j
The forum category is “plugin-development”. The first screenshot shows a localhost API used to interface with the non-public plugin under development. Unless there is a massive history of the plugin working properly in its current form, and just today ChatGPT started throwing the error with no changes in the plugin, the database, or the API, the likely problem is in what the developer is doing with the plugin.
ahhh, I understand. Thank you sir/ma’am!
I’ve seen the error a few more times. I can confirm that it’s not just with my plugin; it happens randomly. However, now hitting retry seems to be working. I just wish I didn’t have to kill that 1/25.
Edit: I take it all back. It got worse, and I realize even GPT-3 has been bugging out on me. Looks like I’m just gonna quit for the day.
Also logs show normal.
FWIW, I am also seeing “This model’s maximum context length” since yesterday, in a plugin I’m developing. The first interactions are OK, but the 3rd or 4th will give the error.
Though the responses the plugin returns to ChatGPT are large, they never hit ResponseTooLargeError, so it is unclear to me how to go forward. I’ve tried to limit the prompt/AI-instruction length, but still get the error.
Also have the impression this is new since yesterday, but not because the plugin responses have gotten larger.
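One mitigation worth trying while the root cause is unclear: cap what the plugin itself returns, so a large response can’t blow the model’s context on the follow-up call. A minimal sketch; the limit, the function name, and the character-level truncation are all assumptions, not anything from the plugin spec.

```python
MAX_RESPONSE_CHARS = 8000  # assumed cap, chosen well under any hard limit

def cap_plugin_payload(payload: str, limit: int = MAX_RESPONSE_CHARS) -> str:
    """Truncate oversized plugin output, marking the cut for the model."""
    if len(payload) <= limit:
        return payload
    return payload[:limit] + "\n[truncated]"
```

Character counts only approximate tokens, but a cap like this at least bounds the worst case while debugging.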
Hmm, I’m not sure it’s related to any one plugin. It’s happening to me no matter which one I use.
The model responses in general seem to have gotten way worse as well.
I too got the same error many times today.
This model’s maximum context length is 8193 tokens, however you requested 8689 tokens (7153 in your prompt; 1536 for the completion). Please reduce your prompt; or completion length.
And three’s a pattern. Make sure you’re reporting those errors.
I’m getting the same thing.
This model’s maximum context length is 8193 tokens, however you requested 8195 tokens (6682 in your prompt; 1513 for the completion). Please reduce your prompt; or completion length.
Also, did anyone else’s ChatGPT turn into Hawaiian? It doesn’t stop saying aloha or mahalo to me now, with flower and tree emojis. So weird.
That is unusual; possibly some server-side issue while modifications are made following the recent announcements.
Now that I look further, I noticed that plugins for new chats appear to be ‘disabled’, but not in a way that feels official:
Seems at the very least the plugin section is under maintenance.
I have also been running into this error with GPT-4 + plugins. Seems to be a bug, has anyone found a way around it?
While we can’t create new chats with plugins, if you have an existing chat with plugins enabled, I’ve had success hitting “Regenerate response” and it’ll go through. Unfortunately, this effectively cuts your message threshold from 25 messages per 3 hours to 12.
That seemed to work, though I did have to give it a few minutes beforehand. I burned through 5/25 with it erroring out. After letting it breathe, closing and refreshing the page, and then regenerating, it seems to work.
Curious to know what may be happening under the hood. Thanks for the help here @acesonnall
Welp, called it too soon. Could be linked to the latest announcement about GPT-4 deprecations.
Now it is erroring out during the use of a plugin, and when I leave the tab and then return, my message history momentarily vanishes.
EDIT: And the pièce de résistance, when thumbs downing and reporting the entire interface went down with an “Oops, there was a problem.”
Hopefully we’re back to normal tomorrow.
The problem is in the Plugins service.