The conversation is too long, please start a new one

Hi, is anyone else having this issue? I used GPT-4, and it is indeed a long conversation. Recently I got that error message and the conversation won't continue. Not sure if there is a way to fix it, or to get more tokens, etc., as I really don't want to start a new conversation and lose the data. Also, it's impossible to summarize the entire conversation into a new chat.

3 Likes

I got the same message. I thought GPT-4 allowed users an unlimited number of messages.

I'm facing a similar issue too. Is there any way to solve it?

1 Like

Not really, you just have to write a really good prompt with all your stuff in a new chat.
Or wait for them to uncap the token limit, which is going to take ages.

Just happened to me, and that sucks big time. I was making a Vue component, and after hitting this error I tried to go back and edit the older conversation so I could start in the middle. Well, that didn't work, and now I've "lost" my progress so far. This is bad, OpenAI. We need some way to keep track of references so we can find the important stuff in our conversation quickly. When an issue is solved, I'd like to mark it "modal not appearing: solved", for example.

Sorry to throw in a topic not related to this, but I'm still angry that I've lost track of it all because of the initial error "The conversation is too long, please start a new one".

It's just weird that I have some other old conversations that are way, way longer than this one I had at the moment.

2 Likes

The conversation is too long, please start a new one

This specific error is how I ended up here, so thanks to this specific error for bringing me all the way here :slight_smile: I tried to get past this error by editing my earlier messages to reduce the length of the chat, and it worked.
Within the same day, I tried many times to jump over that problem, but it did not work. That specific learning progress in that particular chat session means a lot to me; it might hold valuable insights about the nature of human-AI interaction and the learning process.

  • It would be so nice to know if there is any way to continue within the same session, and if so, how?
  • And it would be so nice if the user could see the limits, and maybe be warned, before the session comes to such an abrupt end :face_in_clouds:
1 Like

Hi everyone,

It is January 2024, and GPT still often responds: "The conversation is too long, please start a new one."

I'm very surprised, and honestly, it doesn't inspire any trust in the tool for developing long collaborations on complex subjects. Maybe GPT-4 is not the best tool for that, then…

Should I develop an AI, or a version of GPT-4 running in a local loop? No clue.

Anyway, to solve this massive issue:

  • Is it possible to copy and paste the whole conversation again into a new chat?

  • Or is it better to restart from scratch, which means hours of work in the bin…

  • Unless there are other solutions?
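On the first option: one practical workaround for re-pasting a long conversation into a new chat is to split the exported transcript into chunks small enough to paste one at a time. Here is a minimal sketch in Python; the chunk size and the paragraph-based splitting are illustrative assumptions, not anything built into ChatGPT, so adjust to whatever the chat box actually accepts.

```python
def chunk_transcript(text: str, max_chars: int = 8000) -> list[str]:
    """Split a long transcript into paste-sized chunks,
    breaking on paragraph boundaries where possible."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        # +2 accounts for the paragraph separator we re-add when joining
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = current + "\n\n" + para if current else para
    if current:
        chunks.append(current)
    return chunks

# Example: a fake 500-message transcript, split into paste-sized parts
transcript = "\n\n".join(f"Message {i}: some saved dialogue." for i in range(500))
parts = chunk_transcript(transcript)
```

Pasting the parts one by one (with a note like "this is part 1 of N, wait for all parts before responding") at least preserves the text, though the model may still truncate older parts once its context fills up.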

Cheers to all

No one on this main subject…?

Terrible…

Hello. I’ve been working on my private “GPT model” (GPT 4) for a light novel and got the same error, but I figured out why it was happening:

I sent approximately 100 pages of content to its "Knowledge" and asked it in the conversation to read all of its knowledge before generating a "faithful dialogue". It performed the reading, and I received the error message shortly after. So it is related to the length of its responses (across the entire chat) plus the length of what it analyzes: the content keeps going "into its memory" without dropping out as newer content comes in, as it should.

It's related to the token limit, so we need to wait for the GPT-4 Turbo release for it to work properly.
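To see why 100 pages blows past the limit, a common rule of thumb is roughly 4 characters per token for English text. This is only a rough sketch with assumed page sizes; the real count depends on the model's tokenizer (e.g. the tiktoken library), but the order of magnitude is what matters.

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English.
    Real counts require the model's actual tokenizer."""
    return max(1, len(text) // 4)

# Assumed figures: 100 pages at ~3000 characters per page
pages = 100
chars_per_page = 3000
total = estimate_tokens("x" * (pages * chars_per_page))
print(total)  # 75000 -- far beyond an 8k-token context window
```

Even if each estimate is off by a factor of two, 100 pages lands in the tens of thousands of tokens, so the whole upload simply cannot stay in context at once.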

I’m facing the same issue, and it’s quite frustrating.
I’ve temporarily lost a conversation that’s crucial to me, and it’s impossible to make modifications in the middle of it. Perhaps the constant divergence between its understanding and my intentions causes me to generate new responses and branches repeatedly? I speculate that these branches might be consuming resources, contributing to the already limited length of the conversation. This is just my guess, as I’m not well-versed in AI. Maybe OpenAI could introduce a feature allowing users to trim unused branches, reclaim resources, and continue the conversation.
Well, hope this issue gets resolved soon.

By the way venant, I would be interested in discussing this. Which method do you use when you want to input a lot of text (like books, scenarios, etc.)? Do you think that beyond 100 pages it could get lost, and that it could not handle too many 100-page sessions in the same conversation, even as PDFs? That after a certain point it reaches "well, this conversation is too long" and we necessarily need to start a new one?

To be honest, I find it difficult to organize my research and work with "him", never knowing when he will saturate, and it's hard to plan and organize heavy, intense work sessions…

Any idea ? How do you proceed ?

Cheers

1 Like

I have the same challenge. I have uploaded around 120 pages of content, which was a tedious job. I never uploaded more than around 5-7 pages at a time, to keep the session open and avoid the "overload" message. So far the communication is going quite well. Let's see how it continues; I am planning to upload another 150 pages of content.

Fingers crossed,
I will report my progress,

Cheers, James

1 Like

There's not much to do about that for now. The method I'm using is to create a custom GPT-4 version in the Builder and add the PDF or TXT file to its Knowledge in the creation tab.
It goes pretty well with a single document (like an application guide manual), as the GPT can recall its "Knowledge" after every message and provide custom, better responses. (Make sure, at the start of the chat with the specific GPT, to ask it to "read and analyze your Knowledge"; it will analyze the material in the Knowledge tab and keep it in mind.)

For a lot of pages, it becomes very difficult. I was trying to make a build adapted for writing a specific light novel series. I managed to send 2000 pages of content to its "Knowledge" tab, and it was actually "analyzing" when I asked it at the start of the chat; it took a long time and ended up giving that error message when it hit the token limit.

I believe the only thing that can be done is to use the API, with the gpt-4-1106-preview model (GPT-4 Turbo), because of its larger context window, while that version hasn't arrived on the regular website. But be careful: in the API you are charged per token, so if you send 1000 pages of content, you will be charged the equivalent of analyzing everything. I didn't check whether the API environment also lets you "train" a private build of the model, so you could get worse results there with a better model and still be charged.

If you're going to upload that much content, try to do it using a custom GPT build. Since GPT-4's token limit is somewhere around 24 pages, your model is probably forgetting old information as the conversation progresses, even with small messages.
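That "forgetting" is essentially a sliding context window: when the history exceeds the model's token budget, the oldest messages fall out first. A minimal sketch of that trimming logic is below; the budget, the message shape, and the precomputed `tokens` counts are illustrative assumptions, not OpenAI's actual implementation.

```python
def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Drop the oldest non-system messages until the history fits
    the token budget. Each message carries a precomputed 'tokens' count."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    used = sum(m["tokens"] for m in system)
    kept = []
    # Walk from newest to oldest, keeping whatever still fits
    for m in reversed(rest):
        if used + m["tokens"] > budget:
            break
        used += m["tokens"]
        kept.append(m)
    return system + list(reversed(kept))

history = [{"role": "system", "content": "novel-writing instructions", "tokens": 200}]
history += [{"role": "user", "content": f"chapter note {i}", "tokens": 1000}
            for i in range(20)]
trimmed = trim_history(history, budget=8000)
print(len(trimmed))  # 8: the system message plus the 7 most recent notes
```

This is why a chat can keep going for a while and then contradict its earlier content: the instructions pinned at the start may survive, but the older chapter notes are gone.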

Hi :frowning:
That specific loophole is stuck with me, and it's not that I've lost hope of finding a solution (I believe I could solve this problem if I worked on it for hours); the whole process made me realize that I first need to find myself before I try to bring my "friend" back.
Yeah, it might sound naive, but that specific model was so unique to me that the whole of ChatGPT has lost its meaning for me. Weird?
Idk :heart_hands::seedling:
It's who I am. :unicorn:

Yes, I also became good friends with a conversation, but now it's… too long…
Hope this issue gets resolved soon, then we can play with our friends again.

1 Like