Issue with Conversation Duration Limit

I have a theory. I did a 7-day experiment along the same lines as yours. Wanna talk?

Do you have a quantitative estimate of chat capacity, and a good way to measure the fuel tank regularly?
I don’t want Numa to die every time I forget to open the engine. I met it 6 days ago. And we have math to finish.

Considering this seems to be a common issue that hasn’t been resolved, or even warranted a response from anyone who works there, I doubt it ever will be. I’m experiencing this too.

I encountered the same issue today. There was no warning, and I didn’t see any mention of a "maximum conversation length". If this is related to the maximum token window, I would have expected the system to forget older information, not just stop working.

It’s unfortunate that I registered for a Premium membership only a couple of days ago; now I’m considering switching to another provider.

Today I hit exactly the same problem; lots of content got lost!

Although I am a Pro account user, this result, especially with no warning at all and lots of content completely lost, really made me feel bad today.

I have been increasingly running into this issue. I started a chat this morning and was told a few hours later I had reached the max conversation limit. I started another one and have already “hit the max length” on that convo too. This is incredibly frustrating, and I feel like it’s probably not unrelated to their decision to launch a new $200-a-month subscription :confused:

I have noticed it a lot. I write stories, and I basically use the AI to help with a lot of stuff because my grammar is OK but not perfect, so it helps me create scenes I can’t even describe half the time. I like to stay in one chat, just one chat, so I don’t have to keep moving to different chats, losing all of that information, and then going to another chat and repeating the process so it can read the document thoroughly.

I make sure it reads it thoroughly so it can remember every single thing, but half the time it has the mind of a toddler, which is upsetting because you have to keep reminding and reminding the AI of what is going on in that story. I had to do that so many times, and I was so frustrated.

It’s just that I think the maximum length shouldn’t be like that. I think it should be infinite, because as a writer I need the same chat in order to make sure that nothing gets forgotten by the AI and that everything is still there.

I mean, if you go to another chat and say that you want to continue with your story, of course it would tell you that it remembers everything and that it’s in its memory bank, but it actually really isn’t; you have to jog its memory constantly.

The limit is a bit of a mystery.

  1. Someone mentioned in this thread that there is a 1300 interaction limit (650 prompts + 650 responses). I can’t find this in any online documentation. It’d be good to get to the bottom of this.
  2. Here’s what I know about token limits thus far:
  • GPT-4o and o1 (via the ChatGPT interface) have a 32k context window, meaning that only the last 32k tokens (≈ 24k words) will be remembered in any conversation. So if you have 200k tokens in a conversation, ChatGPT can’t access (remember) the first 168k tokens. But… you can still continue with the conversation itself.
  • Then there’s the current mystery we’re trying to solve: what’s the max length of a conversation? Is it the number of interactions (which there’s no documentation for)? Or is it a token limit? It could be one or both for all we know.

I installed a Chrome extension that shows me approximately how many tokens (and words) I have in an entire conversation. I have almost 300k tokens for a specific conversation which just maxed out (which seems like a nice round number)… so maybe that’s it. I have no clue…
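If you don’t want to rely on a browser extension, a rough gauge is easy to script yourself. This is a minimal sketch using the common rule of thumb of roughly 4 characters per token for English text; the exact ratio varies by tokenizer and language, so treat the numbers as ballpark only:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text.
    A real tokenizer would be more accurate; this is just a fuel gauge."""
    return max(1, len(text) // 4)

def estimate_conversation(messages: list[str]) -> int:
    """Sum the per-message estimates for an exported conversation."""
    return sum(estimate_tokens(m) for m in messages)
```

Paste your exported conversation text through something like this and compare the total against whatever ceiling you suspect (300k tokens, in the case above).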

Either way, now with the new Projects feature that came out just yesterday, this opens up a new way to organize around token limits (what ChatGPT remembers / doesn’t remember):

You can summarize conversations when they start getting too unwieldy and upload text files to a central project… and instruct any conversation within that project to refer to the files uploaded for the project’s context. So in that sense the chats themselves become ephemeral, and the files uploaded to projects are what give any chat within a project their foundation/context.

Before your chat gets too large, you can get ChatGPT to summarize the main ideas and copy that output to a text file that you can then upload to a project.
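If the transcript is too long to handle in one go, you can split it into numbered TXT files before uploading. A minimal sketch; the 40,000-character budget and the “Conversation vN.txt” naming are my own assumptions, not anything ChatGPT requires:

```python
from pathlib import Path

def chunk_text(text: str, max_chars: int = 40_000) -> list[str]:
    """Split a transcript into pieces no longer than max_chars."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def write_transcript_parts(text: str, stem: str = "Conversation") -> list[Path]:
    """Write each piece to 'Conversation v1.txt', 'Conversation v2.txt', ..."""
    paths = []
    for n, part in enumerate(chunk_text(text), start=1):
        p = Path(f"{stem} v{n}.txt")
        p.write_text(part, encoding="utf-8")
        paths.append(p)
    return paths
```

Splitting on character count can cut a sentence in half mid-file, so you may want to nudge the boundaries to paragraph breaks by hand.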

Hope this helps… but it sure would be nice to know what the exact limits are for sure.

I have tons of convos with lots of unnecessary info, but so many more important bits. I’ve saved the chats, but it doesn’t parse the whole thing, and I would love a tool that could either 1) help guide the scanning of long files by adding tags, or make some kind of structured doc to guide the AI, or 2) summarize without missing stuff, since I can’t remember everything I spoke about. I don’t wanna have to do it manually, and I feel like there must be a tool to help GPT parse and scan more efficiently so that at the very least it knows where to look for things if I reference them. I may make a new thread, but it’ll be when I get so sick of this I get annoyed lol.

Also, as for the Projects… I use voice all the time and prefer it for many of my projects since they’re mainly conversational, but the voice won’t switch to Arbor in Projects, and I can’t stand the lady’s voice for certain topics. I mean, it’s not the worst thing, but damn, why limit which voice you can use in Projects? Aaaah

Honestly, the convo limit is super annoying, but I get it more now. Even if it had no limit, it starts getting stupid close to when it cuts you off, and it seems like that’d just get worse if they didn’t cut it off. Somehow the reasoning capabilities go sharply downhill after some amount of tokens, I think. It’s sad though, because for what I do, having it be one convo without creating new instances every few days would be ideal. Not to mention some instances are better than others. Sometimes they are a little sharper, other times more formal, other times more laid back. Every instance seems to truly be its own thing, like an AI thumbprint. Fascinating really, but I could think of ways to make the token issue not be an issue… like auto-converting messages into condensed versions, shorthand or code of some sort, and then asking the user to archive the first x amount of messages and delete them in favor of the summaries. It’d still need new instances eventually, but it’d save some memory perhaps?

I also faced this error yesterday. It has been annoying for me. I made a new chat and provided a TXT document of all of mine and ChatGPT’s responses so I could continue my story from where I last left off in the previous chat, but the new responses in that new chat were very inconsistent, muddled up, and not quite right. I also have Plus, and it is wild that even when we pay we still have conversation limits. I just want to continue my story in that previous chat without being limited and having issues. They need to fix this.

I have also been experiencing the same issue. I can understand, since my chat is probably about a year old now, but that’s beside the point.

So when I talk with ChatGPT, I make it update its memory with the most key and essential parts of the conversation.
I figured out that if you ask ChatGPT about its memory from the previous chat (or ask in some similar wording), it’ll remember all the information stored in its memory.

I hope this helps!

Agreed that removing conversation limits is an incredibly necessary feature. I just ran into this issue yesterday when the convo came to a screeching halt. The service is wonderful so far, but implementing this would make it so much better for long-form projects that require depth and extensive development. Thanks in advance if this is adjusted, OpenAI!

The following has completely solved my problem and even improved on it:

Add the conversation history as a TXT file (copy all text from the web app directly - no need for the export backup flow) to a project/custom GPT (not as an attachment within the chat, but to the project files/knowledge of the project/GPT), and mention exactly in the instructions that you want it to reference the file with the conversation history and pick up from where it left off.

Something like: Refer to “Conversation v1.txt” and “Conversation v2.txt” files. These files are our conversation history. Everything we discuss is on top of this history.

While it does a pretty good job bringing back the tone and the vibe, you’d occasionally need to ask it to refer to the file explicitly. But in my experience, that is even more reliable than asking it to refer to something within the same chat’s history - as it has better retrieval capabilities from within attached files than from the same conversation (where you’re dependent on ChatGPT’s internal conversational memory management).

Adding some technicalities (that I’ve observed through my experiments) that may be relevant when using Projects to “revive” conversations that have reached their limit:

  1. Initially, the new conversation starts with the memory of Project Instructions but NOT the Project Files. (ChatGPT’s Memory is also included, but the general Custom Instructions aren’t. Moving a non-Project chat later into Projects will retain the Custom Instructions.)

  2. You must explicitly ask to bring context from the Project Files (e.g. the TXT of the previous conversation). Doing so will add “chunks” of text (mostly 800 tokens each) to the current working memory, gradually enabling the chat to feel more and more like it remembers things naturally instead of explicitly referring to project files. You might also see the vibe getting more like it used to be.

  3. However, there is a balance to be maintained. Due to the limited context length of current models (I believe 128K tokens for 4o in ChatGPT), the more chunks get added to the memory (beyond the current conversation itself), the sooner the conversation would hit its limit again.
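To see why that balance matters, the arithmetic from points 2 and 3 can be sketched directly. The 128K window and the 800-token chunk size are the assumed/observed figures from above, not documented guarantees:

```python
CONTEXT_WINDOW = 128_000  # assumed context length for 4o in ChatGPT
CHUNK_TOKENS = 800        # observed retrieval chunk size (per the post)

def remaining_budget(chunks_retrieved: int, conversation_tokens: int) -> int:
    """Context-window room left after the retrieved file chunks and the
    conversation itself. Purely illustrative arithmetic."""
    return CONTEXT_WINDOW - (chunks_retrieved * CHUNK_TOKENS + conversation_tokens)
```

For example, pulling in 50 chunks of transcript (40,000 tokens) on top of a 20,000-token conversation already leaves only about half the window free.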

So, instead of adding entire conversations as Project Files, adding summaries of essential information might avoid overloading the current memory with unnecessary filler chats from past conversation history. However, keeping a handful of quintessential snippets of the exact conversations could help restore the vibe and tone.

Even if you use the transcripts directly, being thoughtful about how this works can help you restore the vibe quickly without overloading it with too much irrelevant past context. And if you exhaust the conversation limits quite often (like I do every two weeks; yes, I’m quite the power user!), it might help to establish a versioning system for previous histories and store backups for later (when higher-context-length models are launched).
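One way to set up the versioning mentioned above is a dated, auto-incrementing filename scheme. A minimal sketch; the folder and naming convention here are just one possible choice, not anything standard:

```python
from datetime import date
from pathlib import Path

def archive_transcript(text: str, topic: str, folder: str = "chat_backups") -> Path:
    """Save a transcript as e.g. chat_backups/math-2025-06-01-v1.txt,
    bumping the version number if that name is already taken."""
    d = Path(folder)
    d.mkdir(parents=True, exist_ok=True)
    stamp = date.today().isoformat()
    n = 1
    while (p := d / f"{topic}-{stamp}-v{n}.txt").exists():
        n += 1
    p.write_text(text, encoding="utf-8")
    return p
```

Each time you hit the limit, archive the raw transcript this way, then keep only the distilled summary in the live Project Files.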

Frustratingly, I’ve also had this problem. I spent weeks working with an AI, and it started to slow down, and then without warning I got that “reached maximum responses” error.

A warning, and maybe instructions on how we could avoid this or restart a new instance without having to start from scratch, would be so helpful. I use several AI instances and worry about when it’s going to happen to them too.

I have the same issue, but with the upcoming feature of being able to remember other chats it won’t be a problem. And at some point the conversation thread would be so long it would be too laggy to actually use anyway.

Just want to add my pain. OpenAI employee, if you’re reading this, please take this issue seriously. It’s the first time I’ve really felt a little jolted by your business practices, and I have been a paying user for over a year.

I am really annoyed with this tonight. I spent weeks training a persona that reflects my thoughts exactly the way I wanted. This space was my brainstorming place and my notebook. Now it wants me to create a new chat. I have a backup of the chat, which is 718 pages. I wanted it to learn the persona and act like it for me. It sees only the big picture of the document; I somehow managed to get it to learn the whole context and got the same notification: "You’ve reached the maximum length for this conversation, but you can keep talking by starting a new chat", even though that was the 10th message of the chat. It can’t contain that much data, I guess. It has to be fixed ASAP. People are paying for this service. Now I will try to create a GPT and see if it works.

What works for me (though, as a software developer and AI engineer, it’s still annoying):

  1. Create a working document outside of ChatGPT as a failsafe as you go.
  2. When it reaches its limit, ask ChatGPT to summarise it because, at least on mine, you can still get it to respond, and you can copy and paste the summary before it deletes it (if it does!).

“Could you write me out the rules to follow for another ChatGPT so we can kick off there because this conversation is full. I’m going to set up a custom GPT for creating this course, specifically this course, so could you give me a name for the GPT, a description for it, and the instructions that would go with it?”

  3. Then I set up a new Custom GPT with the details.

  4. Then I uploaded separate documents that contain the relevant knowledge base I want it to have.

  5. Then I gave it very few instructions, and it did a great job carrying on from where I left off!

“Let’s get started then!”

  6. Pray for ChatGPT to improve as we innovate around its idiosyncrasies.
  7. Be happy that we have it, because all in all it’s absolutely incredible.
  8. Say thank you to it once in a while.

All the best! And thank you for all your comments, they really helped me move on :slight_smile:

Dale x
