What’s happening? Over the last few days I’ve noticed that ChatGPT’s computational capacity has decreased a lot. In addition to errors and connection problems, it seems dumber: it makes many mistakes, lacks precision, and often wastes my time on simple things. Will this reduction be permanent for free users?
I see, and rest assured that I take your words into consideration. But honestly, I don’t believe that is the case, at least not in the sense I am pointing out.
I understand the issues of usage load and access, and it is perfectly acceptable to have a waiting list. In fact, I think it’s important to make sure the service works as well as possible and to control access while it develops. However, ChatGPT’s “intelligence” appears to have decreased a lot, along with its depth of knowledge, and I can’t see how that benefits its development. In my opinion, it prevents testing its real performance and, beyond what you mentioned, it generates many unwanted results and errors that don’t contribute reliably to the context, besides placing an extra burden of analysis on those involved.
I don’t believe it’s a matter of punishment, far from it. I believe it could be a strategy to manage this problem, and that’s why I’m interested and asking: what is happening with the reach and “intelligence” of ChatGPT? Will it return to its previous potential for free users?
There is no doubt it has been dumbed down: the outputs are faster, but the precision is worse.
The worst part is the degrading memory within an ongoing conversation; it seems to lose context after 2-3 prompts.
This is unacceptable; we are paying for the real, unrestricted GPT-4, @OpenAI.