I totally agree with this sentiment. Even after trying to tune it and give very precise instructions, both directly and in chat, it still gives false answers or completely misses the context of a conversation that is well within its context limit.
It behaves like a compulsive liar. I wish it would just say something like, "I'm unable to provide an accurate response to that request," perhaps then prompting the user for more specificity. Or even just a warning: "My answer has been constructed from statistical probabilities, not verified information, and the logic of its deductions has not been validated." Instead it assures us that it will not do that again and will follow our instructions, then repeats the same issues.
Because no matter what I do, even when I tell it to do those things, or to fact-check before it replies, it still continues to be completely oblivious. This goes for both the ChatGPT chat system and my own assistant models via the API or Playground. I have resorted to using my own self-hosted local models, which have limitations of their own but are functionally superior. They aren't as general in breadth, requiring a particular model to be loaded for a given task, and running on a local system has its constraints as well.
But whatever has happened… I assume the attempt to satisfy the user by always giving a response and seeming to "work", modulated by adherence to an ever-growing list of interfering policies such as censorship, has played a role in this.
But it’s just acting like a compulsive narcissistic liar. I have no faith in it anymore. It’s a shame, really. Maybe that’s part of the reason one of the lead directors has left the company.
That’s true. From day to day it gets more stupid than before… I’m really starting to be scared of its answers when I give it more complicated questions and tasks. In the beginning I had great trust in it…
At the same time it started giving stupid answers, it started making many mistakes in simple calculations, analyses, and tasks. It’s just not helpful like before. I realized this will be my last paid subscription until they fix it.
This is what happens when they open the servers to non-paying customers and they get overloaded, so they produce stupider GPTs every day to be “faster”. Garbage.
Yeah, I felt it becoming dumber yesterday. I give it prompts that create tables of answers to investment-related questions using publicly available data. It either complains that I am asking for too much data and offers me a Python script (I don’t code, and I don’t want to copy/paste into a Jupyter notebook and debug), or offers to split the prompt into batches of 50, where it returns maybe 18-30 results, not 50. Maybe the progression of AI isn’t AI becoming superintelligent, but AI doing as humans do in bad customer service: learning to provide the least of the minimum to get by.

Still worth the $20/month to me, personally, as I use it to prompt investment research answers to questions found in the CFA curriculum. I am creating my own custom index and have already answered enough questions to concatenate entire corporate profiles, industry reports, and demand/supply analysis reports. So yeah, generating 700+ stock reports that describe a company in relative detail before my human intervention is worth way more than $20/month to me. Still, I can “feel” and see it become less and less detailed, requiring better prompts.

Remember folks, AI is just people writing code that writes code. It’s not yet “autonomous”; it’s still the product of biological intelligence, not 100% artificial, yet. So we may be dealing with OpenAI slowly programming it to chill. I have a theory: when a new tech this powerful is released to the public, the assumption is that the best and brightest will use it, limiting the pool of entrants. Yet people such as myself (Uber food delivery driver, working class) who can’t land higher-pay interviews or aren’t POC enough for DEI opportunities enter the pool of users, and the Tech Gods who hail from rich families and money become astonished that someone who delivers their food is using AI to outperform the S&P 500, and intimidated by it, which leads to dumbing down of the tech. Anyone here familiar with “Quantopia”?
They allowed automated stock trading before shutting it down, back when only RobinHood was commission-free. Now it’s become an API. In the future, the working class will be building junk spaceships to get to Mars too, and they’ll shut that down lol.
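The “batches of 50 that come back with 18-30 results” problem described above can at least be caught automatically by counting results client-side instead of trusting each batch. A minimal sketch, where `fetch_batch` is a hypothetical stand-in for whatever call produces a batch of answers (the fake one here deliberately drops results, mimicking the behavior described):

```python
def fetch_batch(queries):
    # Hypothetical placeholder for a model/API call; this fake version
    # returns only about two-thirds of the requested answers, mimicking
    # the short batches described in the post.
    return [f"answer to {q}" for q in queries[: len(queries) * 2 // 3]]

def fetch_all(queries, batch_size=50):
    """Fetch answers in batches and flag any batch that comes back short."""
    results = []
    for i in range(0, len(queries), batch_size):
        batch = queries[i : i + batch_size]
        answers = fetch_batch(batch)
        if len(answers) != len(batch):
            # A real workflow would retry or log here instead of
            # silently accepting the short batch.
            print(f"batch {i // batch_size}: got {len(answers)} of {len(batch)}")
        results.extend(answers)
    return results

out = fetch_all([f"q{n}" for n in range(120)])
```

The point of the sketch is only that the completeness check is mechanical: if a batch of 50 returns 30 rows, the caller can detect it immediately rather than discovering it by eyeballing a table.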
Yeah, I agree. I’m happy I’m not the only one, and after Googling this I see there are so many responses.
It used to be a good helper when writing code, but it seems to be getting more and more stupid as time goes on. It used to be able to pretty much code by itself; now it can’t even do a simple Prisma / SQL query. I’m shifting toward other AI models because, unless it’s a very basic task, GPT-4 seems to have lost all of its intelligence and become like GPT-3.
Edit: and that’s not even mentioning how stupidly verbose it has become, to the point where you have to think of every way to just make it shut up and reply with what’s actually needed. “Write a function using library X, I have it already installed, I know how to use it…” and still it starts by repeating the whole question, makes a bullet list, says “make sure you have the following packages installed”, etc.
I think it can be partially explained by rose-tinted memories.
For those who believe it was smarter in the past, I suggest looking back at old chat logs if you have them.
Because people have been training it: it’s only as good as the input it keeps getting. Add the various political restrictions and, even more disturbing, the political correctness imposed upon it, and there you go, you have your answer.
Same. It’s horrible. It was so good at first. So much money is being put into AI, and it’s gotten worse. Isn’t it supposed to be the other way around? Sad. I am going to cancel my premium membership because I’m not getting any value from it anymore and don’t trust it.
The ChatGPT chat is getting dumber not only because it has access to more and more layers of unverified knowledge.
It also lacks an effective method for classifying and verifying what it learns, but I feel that this is now the least of OpenAI’s problems.
The main reason for GPT’s stupor is misallocated computing power at runtime, which no one has control over due to poor optimization and network infrastructure problems.
Overloaded servers and daily crashes with a product you pay for are not the least of the problems; the worst problem is probably the lack of real plans to develop the infrastructure or solve these issues.
Degraded public availability, for sure. Those who believe this company will offer its best product to the public are delusional; they are for-profit, and competitive advantage is a thing.
I just made my own “game” and asked GPT to make a tech tree. I told GPT to fix the tech tree more than 10 times, but it never fixed anything and printed the same results over and over again. It keeps creating and mentioning imaginary things that never even existed in my game, and it never fixed the main problem of the tech tree: the math. A level 2 technology is supposed to have a level 1 technology as a prerequisite. At first, GPT nailed it, but it soon stopped solving problems and started adding them. GPT said it had fixed all the problems, but the issue persisted through more than 10 attempts. I had to stop ChatGPT from generating more of this waste. GPT can’t even remember ~20 messages back. It’s very infuriating.
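The prerequisite rule described above (each level-N technology should require a level N-1 technology) is mechanical enough to check without an LLM. A minimal sketch, with hypothetical tech names and levels:

```python
# Rule from the post: every technology above level 1 must have at least
# one prerequisite exactly one level below it.
# The tech names and levels below are invented examples.
techs = {
    "foraging":    {"level": 1, "prereqs": []},
    "agriculture": {"level": 2, "prereqs": ["foraging"]},
    "irrigation":  {"level": 3, "prereqs": ["foraging"]},  # bad: skips level 2
}

def check_tree(techs):
    """Return (tech, problem) pairs for every violation of the rule."""
    problems = []
    for name, info in techs.items():
        if info["level"] <= 1:
            continue  # level 1 techs need no prerequisites
        prereq_levels = [techs[p]["level"] for p in info["prereqs"]]
        if info["level"] - 1 not in prereq_levels:
            problems.append((name, f"no level {info['level'] - 1} prerequisite"))
    return problems

print(check_tree(techs))  # flags "irrigation"
```

A checker like this, run after each of the model’s attempts, at least makes “GPT said it fixed all problems” verifiable instead of a matter of trust.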
Update: ChatGPT cannot remember even 5 messages back… Why?! It creates imaginary things that never existed and calls them real. Here’s an example:
User: [Message 1]
GPT: [Response 1]
User: What is message 1?
GPT: It is: [Something that GPT made-up]
User: Wrong, it is [Message 1]
GPT: I apologize for the confusion. Message 1 is: [Another made-up thing]
I absolutely agree. It’s been getting so unbelievably terrible that I decided to look up whether this is a thing for anyone else. Until about 2 weeks ago, it used to respond in a strangely intelligent way. Now it feels like I’m back on the very first version of GPT. It has lost its nuanced responses, and I have to ask it the same thing in 3 different ways for it to actually answer my bloody question. Instead of saving me time, it’s become a big time waster where, for the first time in a long time, I’d rather sift through Google searches again. I absolutely do not have the patience to watch overly long videos with useless commentary, so it’s books and research papers for now. It’s a bit sad; I used to like not having to use bad writing, but here we are. It’s become useless. It feels like I’m getting absolutely nothing for my subscription! It used to be so good. What did they do?!
Oh boy, I think LLMs are a dead end. They make way too many mistakes; they are not smart at all. They are always just trying to guess things, and because their guesses are often right, people get the impression that they are smart. They are not. When they make mistakes, which is not infrequent, they can flip-flop if contradicted, which just shows how dumb they are. They stress me out a lot. Trying to use them to fix bugs is not good, since you’re adding noise to the debugging process. What if it gives you false leads and tips? They are good for refreshing your memory, though, about details you can’t remember.