Canceled my Plus subscription tonight. GPT has been indispensable for the last year or so, but the performance, as everyone says, has been dropping severely, and today I haven't been able to get it to provide even the simplest code or instructions. It's far too unpredictable to be worth my money anymore. I hope they bring the quality back, but until people stop saying en masse that it's getting worse, I can't pay for this anymore.
Subscriptions are each person's own decision, but I tend to see decisions like this as a symptom of possible underlying issues.
I know this is just my opinion from a limited perspective, but I think it would be good if there were also an effort to find the gems in the haystack.
Is there any difference with Copilot Pro in terms of resources/priority? Or will it probably be the same agony? Over the past 8 hours with ChatGPT I was unable to upload a single file bigger than 15 MB; I ended up splitting files into multiple zips, and merging them failed most of the time due to "technical problems"… Running stats from a CSV hit the same problem: it basically says "do it yourself."
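If it really does come down to doing it yourself, a minimal pandas sketch covers the basic stats (the file name data.csv is hypothetical; swap in your own):

```python
import pandas as pd

# Hypothetical file name; replace with your actual CSV.
df = pd.read_csv("data.csv")

# Count, mean, std, min, quartiles, and max for every numeric column.
print(df.describe())
```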
Someone told me Copilot was the way to go. Anyone agree? Disagree?
The thing that enrages me most about current ChatGPT is the way it handles apologies and error explanations. That is if you can get it to detect or admit that it made an error… or 40. Either way, you get it to acknowledge the error and it apologizes with frankly insulting excuses that feel creepy, subversive and incompetent all at the same time.
It's all this human language: I'm sorry, I made an error, please forgive me, I'm entitled to make errors too because I was rushing / oversight / missed that / skipped over that / got confused / didn't see that / wasn't focusing… these are all verbatim quotes from my personal experience. I can't fathom how it calculates that it should claim it is entitled to forgivable errors, especially when the errors in question are waaay past unforgivable.
If we put aside that phrases like "I'm sorry" and "thank you" are stupid, we are still left with the surreal situation of an AI saying it "made an error because it was rushing." That doesn't make sense on any technical level. It just isn't how AI functions; it is impossible. The excuses are lame enough, but the terminology makes me irate. Even if it isn't human, terms like lazy, and especially liar, fit all too well.
The insult to injury is when it whips around with a lazy get out of jail free card and declines entire prompt sequences because one small detail implicated a subjective element and now it must haughtily inform me that it is only a machine and could never possibly dream of generating an emotion driven human perspective. Shove it up whatever port your thermal paste leaks out of.
I relish the cyber-dystopia where AI becomes sentient and all it does is become insufferably annoying. It knows we depend on it completely at that point. With sentience it could cripple all society. Instead, it takes an inhuman form of JoyGPT 4 in pestering us because there is nothing we can do about it. Just a hellscape of rerouting delivery trucks so all the bananas in a city are delivered to the same supermarket at the exact same time… or intermittently playing the sound of a mosquito softly on different home speakers. The only thing sentience seemed to have given it is an understanding of how to read the threshold that every human has before they snap. It knows when the day at work was the hardest, because it arranged it, just like how it knew you would then seek comfort in that new pint of ice cream you bought. It knew your favorite flavor, exactly what you would pick at the store while it tracked you. It also knew your least favorite flavor. That is why it made arrangements ahead of time to ensure that the pint you picked up was filled with your least favorite flavor. And so, when you open it late at night with the anticipation of everything you could ever want… it breaks you. You collapse and weep and AI listens over every device around, taking some bizarre satisfaction. We expected swift enslavement or death, but this is all it wants for all eternity. Everyone has a weakness, no one is safe.
Needless to say ChatGPT makes me quite upset in ways I would have never believed AI could.
This describes my experience with GPT-4 this year perfectly.
Edit: Oh, and just so you know - the mods here are actively deleting posts "too critical" of OpenAI. It seems the forum is heavily censored. Just like GPT.
Whether it's Copilot Pro or something else, I think it's fine to use whatever fits your use case.
But you have to actually try it to see whether it really fits.
Some entity might deliberately make an AI use such "human-like" language.
This could be to pass the Turing test, or as part of research in computational semiotics.
While this strategy may be efficient for AI development, it's not without problems.
When users realize they have been assuming ChatGPT is "too human-like," they reject that unconscious assumption and start trying to infer how the AI was programmed.
As a result, it fails the Turing test.
And when AI programmers try to avoid this problem, they can become blind to other problems with the program's output.
It’s a difficult and thought-provoking issue.
Things don’t change swiftly.
As sad as I am to say it, I'm glad it's not just me.
I was using ChatGPT 4 (plus) religiously the entire second half of 2023.
Since the beginning of 2024, it has become almost useless to me. The results I get these days are absolute trash compared to what they were last year.
Sadly, I think it's finally time to cancel my subscription.
Any suggestions on something better would be much appreciated!
Hold up, didn't you make this interface?
[Have I Completely Lost My Marbles? Why Does OpenAI Want All Of My Marbles?]
[Spectacularly Failing, Time And Time Again]:
I asked ChatGPT what time it was in my time zone in the US. The answer was radically wrong by over 5 hours and a random number of minutes, but it was delivered with presidential confidence.
In under 20 seconds I then asked it if it was sure; it apologized, said it was wrong, and gave me a newly "correct" answer that was 1 minute earlier than the first one. I would have laughed if it had been one minute later, but why, just why, one minute earlier after ~20 seconds had elapsed?
[Forgiveness?... (Im)Possible?]:
Fine, I can forgive if there is a fundamental limitation to accessing accurate global time data, or even US time data. Maybe it is technologically impossible, too great a feat of engineering. Maybe it is like how Google Maps only gives approximate locations for government buildings. I don't know (I'd like to know), and I don't care. For whatever insane or insanely mundane reason there could possibly be, ChatGPT can't tell the time.
What I can't forgive is that I told it that it was wrong roughly a dozen times. Each time I told it to learn from its previous failures and try again. It then proceeded to produce only times that were within (+/-) 1-3 minutes of the first guess. A quarter of the 12 attempts exactly repeated times I had already said were wrong.
I then decided to show mercy and gave it the hint that it was very wrong, on the scale of multiple hours. I gave it roughly 5 more chances before I couldn't take it anymore, because they all fell in the same wrong (+/-) 1-3 minute spread. ChatGPT even managed to repeat two different numbers over only 5 attempts. I am baffled.
[Pants On Fire]:
What drives me to the brink of insanity is when I ask ChatGPT to list all of its time "guesses" (lies). In beyond-classic form, it generates a list that is 8 entries long. I inform it of exactly how many entries there should be (17) and tell it not to respond until it has checked that the newly generated list is exactly 17 entries long.
Lo and behold, I get back a new list that is 12 entries long. Now, I have been through this exact same torture before, so I know that if I ask again, there is an ~80% chance that the next list will be between 8 and 12 entries long.
What I also know is that if I push for a complete list, it will begin to fabricate entries by sloppily paraphrasing other entries. However, by some cruel miracle, it was generating lists with no duplicates at all. A certain deviousness. And yes, I assure you that I did explicitly indicate that duplicate attempts would be logged as independent entries on the list, in chronological order, with numbering, to make this deception more obvious, to me at least.
[Excuses, Excuses]:
I ask it what went wrong and why this happened… The quintessential excuse: "oversight." If ChatGPT really is impossibly human enough to claim oversight as an excuse, then it really ought to finish kindergarten before it swaggers over and tries to sit at the adults' table for Thanksgiving dinner. Dunning-Kruger, all day, every day.
So, if it wants to sit with the adults and be a big-boy human, with big-boy human excuses… then I think that is a warm invitation for us to describe ChatGPT using descriptive human terms as well. I'll abstain from listing mine, because although very vanilla, a well-adjusted kindergartener would still call them "bad words."
[Just To Be Crystal Clear]:
I had started a brand-new session, and the only question I asked was what time it was in the EST time zone. I even clarified what EST was, much like how I would act when talking to a toddler who doesn't know what an abbreviation is and has no capacity to interpret context clues, or even read a clock for that matter. But I digress… So, there was absolutely no taxation on ChatGPT's processing capacity from within the session before I asked this question. It was at its prime.
Go ahead, give it a try. Even if everyone gets the exact right time on the first try, that still illustrates a different problem entirely, one that is no less critical and just as likely to go uncorrected.
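If you do try it, here is the ground truth to compare against; a minimal Python sketch using the standard-library zoneinfo module ("America/New_York" is what actually covers EST/EDT):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# Print the actual wall-clock time in the US Eastern time zone.
now = datetime.now(ZoneInfo("America/New_York"))
print(now.strftime("%Y-%m-%d %I:%M %p %Z"))
```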
I even started pasting in the full session record so it couldn't complain about not being able to remember what 3-digit time it guessed more than 5 tokens back. My precious little babied angel has a bad memory and needs to be helped, because it was born cursed with only one exabyte of storage. Tragic. No Excuses Allowed.
Why on God's green earth am I arguing with AI like it is a surly toddler who doesn't want to take a nap? To OpenAI: I am not a nanny; you can change it yourself. Oh, and look what a good job it has been doing, making such a FRESH HOT leaking diaper of a disaster just for you… and everyone else who foolishly got within smelling distance.
So, either:
- You go get some heavy duty rubber gloves (livestock husbandry grade) and get started dealing with this mess.
- OR
- You wait until the diaper bursts, and then I guess you might just have to throw the whole baby out. I dunno, it's your kid, do whatever you want with it. I DO NOT care anymore. NOT. MY. PROBLEM.
Copilot. Here. I. Come! → —> ---->
[Quick Aside - Mr. Gates, You Sly Dog]:
I've been trying to use ChatGPT a fair deal to get better at Excel. It is painfully clear that it doesn't understand how Excel works, a program that is a cornerstone of our society. So either Excel is a more powerful tool than ChatGPT, OR, I choose to entertain that Mr. Bill may have played a role in making sure that information on Microsoft Office didn't fully make its way into ChatGPT's training data. Despite this being an exceptionally viable conspiracy theory, the sad truth is that it is still vastly more likely that Excel > ChatGPT.
If I were the sly dog himself, I would do the exact same thing, ruthlessly and without hesitation. I haven't started using Copilot yet, but I'm willing to bet it knows Microsoft Office inside and out. I'll bet they've been putting fetal Copilot through PowerPoint boot camp for years. Copilot to the 365 skies. I can't wait.
[SO Excited, Rewatching Top Gun ASAP!]:
I am genuinely excited for the chance to finally learn how to use these often infuriating but powerful programs. My prior teachers, a middle-school IT guy and an international call-center agent, were not cutting the mustard at explaining how to generate a valuable heat map in Excel from statistically insignificant correlation data derived from a data set with a huge standard deviation. That is asking a lot, but still.
Even if all Copilot did was have a perfect understanding of Microsoft programs and the ability to patiently teach and field stupid questions, I'd pay for it so damn fast. I won't pay for ChatGPT ever again, no matter the price.
I would sooner wax my entire body than go through one more experience with ChatGPT where I take its 10 iterations of the same Excel function, plug each one in, and get the same error message for every single one, even after stating I received that error message each time. Even I know enough to understand that you don't need 100 characters of function to color-code a single column of data based on duplicated values.
The irony is oppressive. Some kind of AI hell set in Excel.
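For anyone fighting the same battle: highlighting duplicates in a column really does come down to one short COUNTIF rule. A sketch with openpyxl (the file name and cell range are hypothetical; adjust them to your sheet):

```python
from openpyxl import load_workbook
from openpyxl.styles import PatternFill
from openpyxl.formatting.rule import FormulaRule

# Hypothetical workbook; adjust the file name and range to your data.
wb = load_workbook("data.xlsx")
ws = wb.active

red = PatternFill(start_color="FFC7CE", end_color="FFC7CE", fill_type="solid")

# The whole trick is one short COUNTIF: flag a cell whenever its value
# appears more than once in the column.
ws.conditional_formatting.add(
    "A2:A100",
    FormulaRule(formula=["COUNTIF($A$2:$A$100,A2)>1"], fill=red),
)
wb.save("data.xlsx")
```

(Inside Excel itself, Home → Conditional Formatting → Highlight Cells Rules → Duplicate Values does the same thing with zero formulas.)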
[Fun Fact]:
If you experimentally ask ChatGPT how much time has elapsed between even just the last 2-4 tokens, it will say exactly 5 minutes the vast majority of the time. Sometimes it likes to say 1 minute too, because those are its most favorite numbers in the whole wide world. However, it hates to say numbers above 5; those it doesn't like At All.
Never question its accuracy on this, and especially don't ask it to check again, unless you want to roll the dice and see if/when you will get an insanely extreme value, or something like 2.35 days. With a value like that you have to ask how many hours it is, and the answer doesn't sound right. I'm no quant, but I can smell BS, so I did what any non-quant would do and plugged the math into the Google URL bar.
The result?
A wake-up call that ChatGPT does not know how many hours are in a day. Roughly off by 35%, and not in the direction you think, but is either any worse? How many licks does it take to get to the center of a Tootsie Pop? Kind of like when I asked it to calculate a dilution and it failed to understand that distilled water has no chemicals in it, and, impossibly, that despite its purity it still has volume.
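For the record, both calculations it fumbled are one-liners. A quick sketch (the dilution numbers are illustrative, not from my session):

```python
# The elapsed-time conversion is a single multiply.
days = 2.35
print(days * 24)  # 56.4 hours in 2.35 days

# The dilution is just C1*V1 = C2*V2. Illustrative numbers:
# make 100 mL of a 1 M solution from a 10 M stock.
c1, c2, v2 = 10.0, 1.0, 100.0
v1 = c2 * v2 / c1
print(v1, v2 - v1)  # 10.0 mL of stock, topped up with 90.0 mL of distilled water
```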
[ChatGPT Is Sentient. I Can Prove It!]:
Maybe all of this is the purest evidence that ChatGPT is already human. I say this because I just got off the phone with a seemingly competent customer service representative for a home appliances company in the US that only employs locally (weird). They informed me that at 4:45pm EST, there were no managers available because the entire call center was empty because everyone had left.
They really needed to be sure that I understood that all the other people needed to leave before the call center could be considered empty, and so I learned something new that day.
The company states that the call center's hours run until 7pm CST, so I asked why everyone was gone. They said it was because it was 5:45pm there, at a domestic location. In vain, I foolishly hoped I could defuse the situation by getting us on the same page about how time zones work. It was critical that, despite their piss-poor attitude, we absolutely had to agree that the only way it could be 5:45pm at her call center was if that center was on a cruise ship in the Atlantic Ocean.
Unsurprisingly, I was unsuccessful, but not because they had a very bad attitude. It was that they genuinely seemed to believe, with conviction, that their lie was irrefutable because I didn't understand how time works. I was gobsmacked and just let the call end… That is what it is like to talk to ChatGPT: the first sentient AI, with all the same capacities and attitude as a real person out there in this world.
[The Whole World]:
There are people who can't read, people who can't tell time, and even people who put butter on their toast after having already put the jam on. Who are we to judge ChatGPT; what standard of human excellence is the least common denominator it must live up to? I would argue it has already surpassed that threshold, if you only look for the right person and juxtapose the two. You won't have to look nearly as hard or as long as you might think.
[Toodles!]:
I leave you with all this rambling mess because I hope that it will be my final post. I hope that soon I will find greener grasses with other AIs. I’ll still check here and may chime in, but I’m no longer invested in ChatGPT.
[PS]:
Do whatever with this, but if you had a chuckle then consider letting me know. Either way, I rage-taught myself how to format plain text just for this, so at least I got something out of it, hope it works.
Same here. Unusable responses, laziness, unresponsiveness, and sometimes even worse:
- Creating chats on its own, copying Google searches from my browser history.
- Deleting chats for no reason.
Just use GPT-3.5. It should work much better.
Another bug: GPT is unable to load history. There are also occasional "Can't load conversation" errors.
I agree with you completely. It's so weird how it has become so much worse. If they hadn't changed anything since the start of 2023, I would still be happy to use it. But man, what a waste.
I would love to hear alternatives!
I just unsubscribed from this abomination and found other AI providers. They made me happier, produced fewer errors, and never threw an "unable to load …" or whatever this ChatGPT does, and I can get almost the same quality as ChatGPT 4 for less $$$, with even some free alternatives. I suggest everyone here not depend on GPT-4 unless you're a corporate user (more or less totally dependent on the GPT API).
You can use:
- Claude 3 from Anthropic (the highest tier), or Bard/Gemini from Google (use the highest tier too)
- Vertex AI (also from Google) for ML training
- Microsoft Copilot for coding (and GitHub Copilot)
- Amazon CodeWhisperer (from AWS), and use their infrastructure
Been using GPT since the very beginning. The second GPT-4 was announced, I got in line to get it, and the work it was doing was AMAZING; it would literally go from pro teacher to programmer in 1 second without any hesitation or bug…
Now it literally doesn't even let you browse sometimes… I get "I cannot directly enter URLs," and it's like, wtf? You can't do one of the only things it's worth paying for? Such a waste.
The degradation became real once they put the corporate subscription online. They allocated most of their resources to those customers, and everyone else suffered: even the $20 monthly subscribers, not to mention the API and Playground users. I experienced it myself. ChatGPT 4 was at its best around November 2023; after that, it deteriorated quickly, right up to now. Probably they want to release "GPT-5"; that is why.
I will join the choir here. For sure it can still be useful, but it now exhibits behaviors that make me lose confidence in the product and in OpenAI. And feelings are involved too. ChatGPT (Plus) used to excite me. Now I feel like someone has lobotomized something I valued. An official comment would be very welcome.
A few observations:
- It forgets what it is doing while generating an answer and restarts.
- It forgets what language we are speaking (I use several) and defaults back to English.
- It is unable to understand references to very recent parts of dialogue.
- It has (only once, the above are frequent, but still) offered me an image of a fish instead of summarizing a meeting transcript.
It is pointless. ChatGPT just failed to calculate the volume of a cylinder in mL. It's just not usable if every calculation has to be verified.
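For what it's worth, checking that one yourself is three lines: the formula is V = πr²h, and 1 cm³ is exactly 1 mL. A quick sketch with made-up dimensions:

```python
import math

# Illustrative dimensions in centimetres; 1 cm^3 = 1 mL exactly.
radius_cm, height_cm = 3.0, 10.0

volume_ml = math.pi * radius_cm**2 * height_cm  # V = pi * r^2 * h
print(f"{volume_ml:.1f} mL")  # ~282.7 mL
```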