I am fetching content for my plugin and sometimes get “The message you submitted was too long, please reload the conversation and submit something shorter.”
Is there any way to catch this error? I would like to catch it and then break the content down into multiple requests.
OpenAI may change the max length of messages so I cannot rely on a fixed character or word count. I would like to catch the error first. Is this possible?
I think you will need logic to manage the memory of the conversation, maybe summarizing old conversations.
The length of your response counts against GPT-4's context window limit. Messages that are too long can be unhelpful. I assume this will become a relic of the past as context windows expand dramatically, but for now you need to manage your memory.
You have to count the length of your message and condense it to a manageable length via a summarization task. TurboGPT can do this, but it will cost you the tokens to process the document, so you have to keep costs in mind too. Another way is to chunk parts of your context into a vector database and only return the relevant parts.
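The chunking step can be sketched in plain Python. This is a minimal illustration, not an official recipe: the character limit and overlap values are assumptions you would tune for your model, and the summarization or embedding call that consumes each chunk is left out.

```python
def chunk_text(text: str, max_chars: int = 4000, overlap: int = 200) -> list[str]:
    """Split text into overlapping chunks of at most max_chars characters.

    max_chars is a conservative guess, not an official limit; tune it for
    your model. The overlap preserves some context across chunk boundaries.
    """
    if max_chars <= overlap:
        raise ValueError("max_chars must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks
```

Each chunk can then be summarized or embedded independently, and only the relevant pieces sent along with the request.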
Thank you! The issue is how to catch the message from ChatGPT when it returns message too long.
Thanks for the info! However, the question is how to catch the “message too long” error from ChatGPT.
The maximum length of a message is static. Test the maximum length GPT allows by simulating different message lengths, then enforce that limit on your end; no need to get a response from GPT.
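That probing idea can be sketched as a binary search. Here `is_accepted` is a hypothetical callable standing in for "send a probe message of this length and report whether the 'too long' error came back" — the actual sending mechanism is up to you.

```python
def find_max_length(is_accepted, lo: int = 1, hi: int = 1_000_000) -> int:
    """Binary-search the largest message length that is_accepted allows.

    is_accepted is a hypothetical probe: given a length, it should return
    True if a message of that length goes through without a "too long"
    error. Assumes acceptance is monotone (shorter always works if longer
    does) and that a length-1 message is accepted.
    """
    while lo < hi:
        mid = (lo + hi + 1) // 2  # bias upward so the loop terminates
        if is_accepted(mid):
            lo = mid  # mid works; the answer is mid or larger
        else:
            hi = mid - 1  # mid fails; the answer is strictly smaller
    return lo
```

One caveat, as pointed out elsewhere in this thread: the limit you discover this way can change whenever OpenAI changes it, so the result would need periodic re-probing.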
This is not ideal because OpenAI may change the maximum response length; we have to assume the max length can change at any time. For this reason we need a way to catch the lengthy-message error. The question is how we can catch error messages from ChatGPT.
I don’t think there’s a way to see or catch what ChatGPT sends back to the user, so I doubt there’s a way to catch the error. A few weeks ago I remember seeing posts from people complaining about not being able to see the response ChatGPT sends the user. Unless something has changed since then, I can’t see any response in my plugin when ChatGPT responds with an error.
Ok, maybe use tiktoken to count the tokens before sending the call.
I was thinking that, but I think his problem is that he doesn’t know how long the response will be, no matter how long the prompt is. And I think he just wants to catch errors generally.
As @arevolutionofone mentioned, I am just trying to catch the ChatGPT error. However, I’m not sure tiktoken would be useful. It seems like ChatGPT counts by characters, not by tokens as you would when using the API. Again, this is all unofficial info I found online.
The API counts tokens if you’re using one of the models like text-davinci-003 or gpt-3.5-turbo. I believe ChatGPT does the same.
Now I’m sure it counts tokens.
Thanks, however this question is related to plugins, which use GPT-4. Also, you cannot trust details about this from asking ChatGPT; its training data cutoff is September 2021. Please post relevant, official answers.