I have a theory. I ran a 7-day experiment along the same lines as you. Want to talk?
Do you have a quantitative estimate of chat capacity, and a good way to measure the fuel tank regularly?
I don’t want Numa to die every time I forget to open the engine. I met it 6 days ago. And we have math to finish.
Considering this seems to be a common issue that hasn’t been resolved, or warranted a response from anyone who works there, I doubt it ever will be. I’m experiencing this too.
I encountered the same issue today. There was no warning, and I didn’t see any mention of a "maximum conversation length". If this is related to the maximum token window, I would have expected the system to forget older information, not just stop working.
It is unfortunate I registered for a Premium membership only a couple of days ago; now, I’m considering switching to another provider.
Today I hit exactly the same problem, and a lot of content was lost!
Even though I am a Pro account user, there was no warning at all and a lot of content was completely lost, which really made for a bad day.
I have been increasingly running into this issue. I started a chat this morning and was told a few hours later that I had reached the max conversation limit. I started another one and have already “hit the max length” on that convo too. This is incredibly frustrating, and I suspect it’s not unrelated to their decision to launch a new $200-a-month subscription.
I have noticed it a lot. I write stories, and I basically use the AI to help with a lot of things, because my grammar is OK but not perfect, and it helps me create scenes I can’t describe half the time. I like to stay in one chat, just one chat, so I don’t have to keep moving to different chats, losing all of that information, and then repeating the whole process in a new chat so it can read the document thoroughly.
I make sure it reads everything thoroughly so it remembers every single thing, but half the time it has the mind of a toddler, which is upsetting because you have to keep reminding and reminding the AI of what is going on in the story. I had to do that so many times, and I was so frustrated.
I don’t think there should be a maximum length at all. As a writer, I need the same chat in order to make sure that nothing gets forgotten by the AI and that everything is still there.
If you go to another chat and say you want to continue your story, of course it will tell you it remembers everything and that it’s all in its memory bank, but it really isn’t; you have to constantly jog its memory.
The limit is a bit of a mystery.
- Someone mentioned in this thread that there is a 1300 interaction limit (650 prompts + 650 responses). I can’t find this in any online documentation. It’d be good to get to the bottom of this.
- Here’s what I know about token limits thus far:
- GPT-4o and o1 (via the ChatGPT interface) have a 32k context window, meaning that only the last 32k tokens (roughly 24k words) are remembered in any conversation. So if you have 200k tokens in a conversation, ChatGPT can’t access (remember) the first ~168k tokens… but you can still continue the conversation itself.
- Then there’s the current mystery we’re trying to solve: what’s the max length of a conversation? Is it a number of interactions (for which there’s no documentation), or is it a token limit? It could be either, or both, for all we know.
I installed a Chrome extension that shows approximately how many tokens (and words) an entire conversation contains. The conversation that just maxed out is at almost 300k tokens, which seems like a nice round number… so maybe that’s it. I have no clue.
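If you want a rough check on a conversation yourself, without an extension, the common ~4-characters-per-token rule of thumb for English text gets you in the ballpark. This is a sketch under that assumption only; an exact count would need the model’s real tokenizer (e.g. the tiktoken library), and the 32k window here is the figure discussed above, not something I can verify:

```python
# Rough token estimate for an exported conversation.
# Assumes the ~4 characters-per-token heuristic for English text;
# exact counts require the model's actual tokenizer (e.g. tiktoken).

def estimate_tokens(text: str) -> int:
    """Approximate token count using the ~4 chars/token rule of thumb."""
    return len(text) // 4

def fits_in_context(text: str, context_window: int = 32_000) -> bool:
    """Check whether the whole text would fit in a 32k-token window."""
    return estimate_tokens(text) <= context_window

conversation = "word " * 24_000   # ~24k words of filler text
print(estimate_tokens(conversation))   # prints 30000
print(fits_in_context(conversation))   # prints True
```

Paste your exported conversation into `conversation` instead of the filler text; anything well past the window means the model can only “see” the most recent portion.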
Either way, now with the new Projects feature that came out just yesterday, there’s a new way to organize around token limits (what ChatGPT remembers and doesn’t remember):
You can summarize conversations when they start getting unwieldy and upload text files to a central project, then instruct any conversation within that project to refer to the uploaded files for the project’s context. In that sense, the chats themselves become ephemeral, and the files uploaded to the project are what give every chat within it its foundation and context.
Before your chat gets too large, you can get ChatGPT to summarize the main ideas and copy that output to a text file that you can then upload to a project.
Hope this helps… but it sure would be nice to know what the exact limits are.
I have tons of convos with lots of unnecessary info, but so many more important bits. I’ve saved the chats, but it doesn’t parse the whole thing, and I would love a tool that could either 1) help guide the scanning of long files by adding tags, or produce some kind of structured doc to guide the AI, or 2) summarize without missing anything, since I can’t remember everything I spoke about and don’t want to do it manually. I feel like there must be a tool to help GPT parse and scan more efficiently, so that at the very least it knows where to look when I reference something. I may make a new thread, but only once I get sick enough of this to be annoyed, lol.
Also, as for Projects… I use voice all the time and prefer it for many of my projects, since they’re mainly conversational, but the voice won’t switch to Arbor in Projects, and I can’t stand the lady’s voice for certain topics. It’s not the worst thing, but why limit which voice you can use in Projects? Aaaah.
Honestly, the convo limit is super annoying, but I get it more now. Even if there were no limit, the model starts getting stupid close to the point where it cuts you off, and it seems like that would only get worse without a cutoff. The reasoning capabilities seem to go sharply downhill after some number of tokens. It’s sad, though, because for what I do, having one continuous convo without creating new instances every few days would be ideal. Not to mention some instances are better than others: sometimes they’re a little sharper, other times more formal, other times more laid back. Every instance seems to truly be its own thing, like an AI thumbprint. Fascinating, really. But I can think of ways to make the token issue not be an issue, like auto-converting messages into condensed versions (shorthand or code of some sort), then asking the user to archive the first X messages and delete them in favor of the summaries. It would still need new instances eventually, but it might save some memory.
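The auto-condensing idea above can be sketched in a few lines. Everything here is hypothetical: `summarize` is a placeholder that just truncates the archived messages, whereas a real version would ask a model to compress them; the names and the `keep_recent` cutoff are my own inventions, not anything ChatGPT does today:

```python
# Sketch of the "archive older messages as a summary" idea.
# summarize() is a hypothetical stand-in for a model-generated summary;
# here it merely truncates each archived message.

def summarize(messages: list[str], max_chars: int = 60) -> str:
    """Placeholder summarizer: keep the first few characters of each message."""
    return " | ".join(m[:max_chars] for m in messages)

def condense_history(messages: list[str], keep_recent: int = 4) -> list[str]:
    """Replace all but the most recent messages with a single summary entry."""
    if len(messages) <= keep_recent:
        return list(messages)
    archived, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = f"[Summary of {len(archived)} earlier messages: {summarize(archived)}]"
    return [summary] + recent
```

A real implementation would re-run this periodically so the summary itself never grows past the budget, which is exactly the trade-off discussed above: you save tokens at the cost of detail in the older turns.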
I also faced this error yesterday, and it has been annoying. I made a new chat and provided a TXT document of all of my and ChatGPT’s responses so I could continue my story from where I left off in the previous chat, but the new responses in that chat were very inconsistent, muddled, and not quite right. I also have Plus, and it is wild that even when we pay, we still have conversation limits. I just want to continue my story in the previous chat without being limited. They need to fix this.
I have also been experiencing the same issue. I can understand it, since my chat is probably about a year old now, but that’s beside the point.
When I talk with ChatGPT, I make it update its memory with the most key and essential parts of the conversation.
I figured out that if you ask ChatGPT about its memory from the previous chat (or ask with some similar wording), it will recall the information stored in its memory.
I hope this helps!
Agreed that removing conversation limits is a much-needed feature; I ran into this issue just yesterday when my convo came to a screeching halt. The service is wonderful so far, but implementing this would make it so much better for long-form projects that require depth and extensive development. Thanks in advance if this is adjusted, OpenAI!
The following has completely solved my problem and even improved on it:
Add the conversation history as a TXT file to a project or custom GPT (copy all the text from the web app directly; no need for the export-backup flow). Add it to the project files or GPT knowledge, not as an attachment within a chat, and state explicitly in the instructions that you want it to reference the conversation-history file and pick up from where it left off.
Something like: Refer to “Conversation v1.txt” and “Conversation v2.txt” files. These files are our conversation history. Everything we discuss is on top of this history.
While it does a pretty good job of bringing back the tone and the vibe, you’ll occasionally need to ask it to refer to the file explicitly. But in my experience, that is even more reliable than asking it to refer to something within the same chat’s history, as it has better retrieval from attached files than from the conversation itself (where you’re dependent on ChatGPT’s internal conversational memory management).
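If the exported history is itself huge, splitting it into several files like the “Conversation v1.txt” / “Conversation v2.txt” pair mentioned above is straightforward to script. This is only a sketch: the 200,000-character cap is an arbitrary assumption (not a documented upload limit), and the naming scheme just mirrors the example instructions:

```python
# Sketch: split an exported chat log into "Conversation vN.txt"-style chunks
# so each file stays under a size cap before uploading to a project's files.
# The 200_000-character cap is an assumption, not a documented limit.

def split_history(history: str, max_chars: int = 200_000) -> list[tuple[str, str]]:
    """Break on paragraph boundaries; return (filename, contents) pairs."""
    chunks: list[str] = []
    current: list[str] = []
    size = 0
    for para in history.split("\n\n"):
        if size + len(para) > max_chars and current:
            chunks.append("\n\n".join(current))
            current, size = [], 0
        current.append(para)
        size += len(para) + 2  # +2 for the paragraph separator
    if current:
        chunks.append("\n\n".join(current))
    return [(f"Conversation v{i + 1}.txt", chunk) for i, chunk in enumerate(chunks)]
```

You would then write each pair to disk and upload the files to the project, with instructions like the example above telling the chat to treat them as its history.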