How to deal with "lazy" GPT 4

I’m an engineer and I’ve worked on Apple products since I owned an Apple IIc, a IIe, and eventually an Apple IIGS. My first computer was a Commodore VIC-20 and my first PC was a TRS-80.

There is one flaw in your point that not only you, but MANY people who “switched to Apple after getting tired of crappy… [insert PCs, phones, etc. here]” have made, and it’s a fatal flaw that I’m going to counter with an important point that not one single person who’s ever made that argument has been able to speak against.

If you buy a $300 laptop or PC and compare it to a $1600 laptop or PC, that says more about you than about the laptops, because those comparisons are completely irrelevant; the machines aren’t comparable.

If you buy a $100 Android phone and compare it to a $1400 iPhone, that says more about your consumer intelligence than it does about Apple’s products.

Spend the same money and compare the results!

I remember one day I was talking to a friend of mine who worked at Fry’s while I was looking at the new ROG laptop that had just come out. We were talking about it and he kept trying to show me this MacBook Pro… I think I lost every bit of respect I ever had for him that day.

The spec comparison was something like this (keep in mind this conversation was back in around 2007, but it’s still relevant to this day):
The ASUS ROG had a 17.3" 1080p screen (the best there was back then), two 640 GB hard drives configured in RAID 0, something like 8 GB of RAM back when that was a huge amount of RAM, and a Radeon 4870 GPU with a solid amount (I think 2 GB) of dedicated video RAM, back when that was the newest and best card on the planet, plus a fantastic full keyboard with a number pad; it was just a beautiful laptop. (I bought this laptop, used it for over a decade, then gave it to a family member’s kid who still has it.) It came with a 3-year warranty, and the first year even covered accidental breakage, like if you dropped it.
VS.
This 15" MacBook Pro, which I don’t remember whether it had a 1080p screen or not, had a 9400 GT video card, which was complete garbage, with 512 MB of dedicated video RAM, 4 GB of RAM, and something like a 500 GB hard drive. It had a generic 1-year warranty that didn’t cover much unless you paid for the AppleCare warranty.

The MacBook Pro was over $100 more!

For that $100 I could have walked over, bought a copy of (I think it was) Snow Leopard, downloaded the Hackintosh firmware (since at the time just about all Macs had ASUS boards), and dual-booted Mac OS and Windows on the same machine, giving me literally EVERYTHING you get from that Mac, but multiple times better, plus everything from Windows, on hardware so far out of the Mac’s league that you couldn’t compare them in any way.

He looks at me and says “Yeah, but you wouldn’t have this” and waved his hand across the Apple logo like he was Vanna White… I was never so ashamed in my entire life.

If you invest the same money into a quality PC that you’d spend on a Mac, the results are not remotely comparable. My current ASUS ROG Strix laptop would destroy any MacBook on Earth, and it didn’t cost me more than I would expect to spend on an even tolerable MacBook. You will never get anywhere near the cost-to-performance with a Mac that you can achieve with a PC.

So you can’t build a DIY kit where you got the cheap motherboard free with your CPU, or buy some Gateway- or eMachines-grade PC, and then say “I’m tired of these cheap PCs, so I’ll pay more for a quality Mac.” If you can afford a quality Mac, why didn’t you buy quality PCs? Because if I took a single PC in my home, whether I built it or bought it, and replaced it with a Mac at the same price point, my specs would fall through the floor and I would instantly feel robbed.

My 11-year-old is using my old old old laptop, my 16-year-old is using my old old laptop, and my old laptop is still sitting in my office right now acting as a server. There are 4 generations of amazing laptops in my home that served me well for years and have been passed down, giving me an excuse to upgrade so I always have a high-end laptop to work with. The laptop my son is using is close to his age and it still works better than any $300 laptop you’ll find at Walmart. I think his PC is my old 4th-gen i7, and it’s never had a single day of downtime since I gave it to him, nor did it when I used it. I still have 1st-gen i7 systems that work perfectly fine, and the only reason they’re sitting on a shelf in my office is that I have no use for them and haven’t found someone who needs them to give them to.

I’ve used iPhones, as a few companies I’ve worked for required them. I was an engineer at Comcast when they switched everyone over to iPhones, and all I can say is: holy crap, that was the most annoying downgrade ever. It screwed up my productivity in a company that already doesn’t believe in treating its employees like humans, so it drove me insane. I have never used an iPhone that was generationally on par with my Samsung Galaxy Note at the time. They’re more annoying to use in every way, so restrictive it’s infuriating, and they don’t hold up. I’m still getting updates to this day on my Samsung Galaxy Note 10 Plus with 1 TB of storage (512/512), which is pretty old now, and no generationally equivalent iPhone is anywhere near it in capability or quality, which is why I’m still using it; I’ve never felt the need to upgrade. I paid WAY less for my phone than I could have come close to getting an iPhone for, though at retail they are comparably priced. Feature-wise, they aren’t even close.

When you take all this into consideration and add in things like planned obsolescence, the inability to sideload apps, how much of a pain in the absolute @$$ they are to deal with as a company if you aren’t selling something they can take a cut of on their App Store, and how restrictive and limited their desktops are with software and gaming and how low-spec they are compared to PC equivalents, I’m sorry, but the only logical explanation I can come up with for buying an Apple product is that you were sold, whether by someone else or by yourself.

I have a Hackintosh I use solely for the purpose of keeping up with their software and learning so I’m not restricted with the clients I deal with, but hell would freeze over before I would shell out the money for one of their computers.

Back in like 2006, my girlfriend at the time bought me an engraved 30 GB iPod for around $300, which I mostly used as an MP3 player for my car and had no issues syncing with my Windows XP computer. She had a MacBook from before we started dating. She decided she wanted to get one of the iPods for herself too, and so she did. After spending $300 on the iPod, she found out that she needed a newer version of iTunes to sync with it, and in order to download that version of iTunes, she had to pay something like $200 for an OS upgrade… So my Windows XP machine, which was OLDER than her MacBook, had no problem downloading and running the latest iTunes, but her MacBook, made by the company that makes iTunes, needed a $200 ransom just to update enough to run the software required to use her $300 iPod?

I’m sorry, but there is no real logic or reasoning on earth behind buying this company’s products, unless you work for some corporation that has been sold into some sort of partnership with them and you have no other choice, or you somehow make money specifically by doing business through that ecosystem.

If you want to compare Macs to PCs, you have to do it fairly: dollar for dollar, spec for spec, and see what you get for your money. When I used to work as a graphic artist at a screen-printing company, the owner was into Macs too. When I started doing art for the embroidery machine and we got a new photo printer that could print multicolor directly onto a shirt, we ended up needing a new system that would work with the new software. He had a small business with maybe 5 computers on the network, and adding one more changed his CAL limit, so he had to pay Apple something like $10,000 for the change to his server license. If Microsoft tried to charge Apple prices for CALs, they’d have gone bankrupt on enterprise licensing.

That’s why when I see Apple working out deals with schools to use Macs and teach on Macs, I look at it like a drug dealer telling you the first one is free. Once they convince you that’s what you need, they’ll bleed you out for the rest of your life and somehow convince you that it’s a good deal.

If you want security, get Linux. If you want versatility, use Windows. If you want to pay many times more than Windows costs for what amounts to a polished KDE Linux, ask yourself why before you make that choice, because literally everything you can do with a Macintosh computer can be done better, for less money, with good consumerism; the only things you gain are looking trendy at Starbucks and literally embodying my original statement of “I have more money than brains.”

1 Like

I’ve read through as many comments in this thread as possible… and my interpretation of GPT4+ is not “lazy” but “broken”.

I’m using the web browser interface (usually Chrome) and it is NOT working properly. I can’t even make a Custom GPT now, as there is no SAVE button, and I have to have a battle of wits with the now badly working custom assistant, which is trying to help but isn’t.

There is definitely a major reduction in the ability of GPT-4 compared to last year. It’s not able to respond consistently even with very direct instructions… two replies later, it slips back into its previous verbosity or ignores session instructions. It’s just plain painful now.

The only insight I can add from my experience is that it did seem to go through various stages of better or worse output. I was starting to think that performance overall is down, but at peak times in the US, when the servers are probably flooded, my experience in a different time zone seems to be much worse than at other times. Could it be that the time-slicing they’re forcing on the servers is not allowing the LLM enough time to resolve a suitable answer? I suggest this partly because of the one-word-per-second output at times.

I do agree that OpenAI has opened up an ecosystem unlike anything offered by anyone else, but they are very restrictive with their best and newest products, which tend to go exclusively to wealthy clients, not their general community. As a small team leader, when I’ve inquired about access to non-public services, I’ve never gotten so much as a piss-off out of them. When my team applied for Enterprise, we didn’t even get the automated spam with some canned crap like I got from Claude; you just get nothing from them.

Nothing speaks quite so loudly about a company as when you offer to give them your money and their lack of reply reminds you that you don’t even have enough money for them to waste the time telling you they don’t want it.

As a whole, I haven’t seen anything better than ChatGPT/OpenAI, but I’d go wherever the best resources and opportunities for what I do exist. First and foremost is the service I’m getting; I don’t care enough about any company to care what name is on the product. It’s not like, if I go use some other LLM for a while and things switch around so I come back, ChatGPT is going to say, “You cheated on me with another model, so I’m not taking you back!”

I’ve run my own open-source LLM at home, though I’m too cheap to invest the money into the GPUs to make it viable. You get an idea of how much privilege we have with ChatGPT when you submit a prompt to a local model, go make a sandwich, and come back to see the response. One GPU good enough to make a strong open-source LLM responsive costs more than I’ll pay for ChatGPT over the next several years, and one GPU doesn’t really get you that much unless you’re springing for the top of the line, which you pay big bucks for.

The LLMs I’ve built, trained, and worked with from open source were only to get the experience. I can’t build anything that would touch ChatGPT-4 for a price I’d consider paying, even for personal use, and I don’t have the kind of money they lose to throw into something like that.

They’re fun to learn and experiment with, but you need serious cash if you want to do anything serious with them.
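
(For anyone curious what “running your own open-source LLM at home” looks like in practice, here is a minimal sketch. It assumes the Hugging Face transformers library and an open-weights model such as Mistral-7B-Instruct; the model, library, and settings are illustrative assumptions, not what the poster above actually used.)

```python
# Minimal local-inference sketch; illustrative only, not the poster's setup.
# Assumes the Hugging Face "transformers" and "accelerate" packages are installed.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-weights model
    device_map="auto",  # uses a GPU if one is available, otherwise falls back to CPU
)

prompt = "Explain RAID 0 in two sentences."
result = generator(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```

On CPU-only hardware, that single call can take minutes, which is exactly the make-a-sandwich experience described above; a GPU with enough VRAM to make it feel responsive is where the real cost comes in.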

I’d personally be happy just to be able to pay to use ChatGPT Enterprise without needing a rich corporate overlord. Even knowing I’d be paying a lot more, I’d be okay with it because I also know the difference in results.

Usage certainly impacts available resources, and I’m not sure how dynamic their controls really are, but I usually use it late at night. I’m a night owl/vampire whenever possible; it’s 7 am for me right now and I’m about to go to bed. I will say it does seem worse during peak times, which makes sense.

But clearly, after that outage a few days back, they made some changes, and those changes aren’t good. “Data analysis” my @$$; ChatGPT-4 can’t even debug an undeclared variable it created in 50 lines of code, so there’s ZERO doubt it’s being throttled into stupidity right now.

My typical MO is that when the service turns into such dog sh** that I feel compelled to be abusive to it and bully it into compliance, it’s time to get off ChatGPT and come here to complain, hoping that enough people are also complaining that someone at OpenAI will pretend to care and suggest they went too far.

I have ZERO doubt this is just the result of throttling, and that it’s based on their usage, current product distribution, and available resources. They’ll likely add some new hardware, or maybe just wait until enough people get pissed off at the quality degradation that there are fewer people using it and they feel safe to open it up again.

You’re right though, it’s broken. Not because they’ve updated to broken or defective code, but because they’re not allowing us enough resources for it to work well. For simple tasks that don’t require a lot of work, most people who use it for things like writing won’t usually notice, or it won’t get bad enough to complain about. If you use it for more demanding crunching, like code, it’s difficult to justify using it in its present condition. I’d rather complain on here and hope someone at OpenAI notices or cares (no matter how unlikely that is) than spend my time getting belligerent with ChatGPT about how stupid every output has become just to blow off steam from frustration.

Also, it could be that because I mostly use it at night, after normal business hours, I almost never see rate limits. I’ve sat and used ChatGPT-4 Plus for a solid 12 hours without hitting a rate limit. I doubt I see the rate-limit message more than once or twice a month on average, even with steady usage.

I agree with @me.mark. Perhaps if we make enough noise in the forum, someone from OAI will at least have some incentive to get up off their collective asses and do something to improve the user experience. I do understand that the users in this forum and the posters in this thread are devs, or at least what would typically be called “power users”, so we’re the ones who push at what the LLM can do, in all likelihood providing the best data for training and improving the model. Keeping us happy, or at the very least content enough to continue using their model, is really in their best interest. Come to think of it, I’d say that the training data users like us provide is far more valuable to OAI than the subscription fees we pay. Devs and power users are a small fraction of all OAI users, so while our financial leverage may be small, the benefits (to OAI) are great. In all likelihood, the vast majority of users are satisfied with what GPT-4 can spit out, some email or cover letter and such, but that type of data is not as valuable, even if it’s more voluminous, unlike the data and other kinds of input devs and power users can, and do, provide.

1 Like

I also unsubscribed. It was driving me crazy; it went from increasing my hourly rate to making me lose an incredible amount of time. I used it every day, and now I use Copilot.

Copilot was horrible at the beginning, with really bad ergonomics and user experience. But now the web version is not bad, and you can feel the progression over time, especially when you know the difficulties Microsoft has in providing UX/UI friendly interfaces.

As for ChatGPT Plus, I have felt a surprising regression since last November. I feel that GPT-4 is better at Microsoft now.

PS: At the beginning, I never would have believed that I would end up insulting GPT-4 in every possible way; I knew it was time to move on at that moment, haha. I never would have thought I’d use a Microsoft service either.

3 Likes

I wholly agree, that’s why I’m here. This last change is quite excessive.
Whatever has happened to convince them this is acceptable was wrong and I am sure they’ll lose many subscriptions over this.
I’m probably guilty of just being complacent, complaining instead of stopping using the model, but maybe I need to cancel as well. It’s not viable for me as it is; I’ll just end up pushing the moderation boundaries raging at it over the stupidity of the outputs.

1 Like

Seems to me they dialed back GPT-4 to GPT 3.75 over the last year.
When the new GPT-5 comes out, I expect it to be far better, but then get throttled back down to just better than the original-release GPT-4; they’ll probably introduce some native agents as the shiny object to keep people happy.

It was last October when it got so bad that I was posting about how much code it was abbreviating; my post was something like “Prompts to stop abbreviations in code”. I’d say by late November, though, all that was mostly gone. By January I was actually breaking barriers I’d never broken before, including once getting it to output around 400 lines of code, which I was extremely impressed by, though I had to click “continue generating”, then tell it to continue, then click “continue generating” again.

From sometime around January, between Data Analysis and Customization and whatever improvement they made after that, I can honestly say I didn’t have any complaints. In February, I started picking up some extra work because it was really helping my productivity, and I was riding the happy train well enough that I felt like I was wasting my time attempting to use Gemini Advanced when it launched, because I had work to do.

Then in this last week, whatever they did, the day the “continue generating” button was removed, the model turned into garbage. It got GPT-3, maybe 3.5, levels of stupid all of a sudden. It’s like it had instantly forgotten how to code.

By the end of that post I made, I largely felt the issue had resolved itself. There was an update and all seemed fairly well. Plus, by then some of the new tricks I was using in my prompting were working. One of my sort of copy-pasta prompt endings became:

YOUR TASK: Make the above requested MODIFICATIONS, then output the ENTIRE unredacted code with no omissions, abbreviations, or summarizations, and without removing ANY code or functionality unless you were specifically asked to do so, regardless of whether or not you believe I already have the requested code.

Above that I would have a section called MODIFICATIONS:, which was a numbered list of changes; above that, an unnumbered list called PROBLEMS:, describing what I wanted it to stop doing; and above that, the code.
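
Put together, the whole prompt looked roughly like this (a reconstruction from the description above; the placeholder code, problems, and modifications are illustrative, not the poster’s actual content):

```
[full, unabridged code being worked on]

PROBLEMS:
- The last output dropped two existing functions.
- Working code was replaced with comments like "// add logic here".

MODIFICATIONS:
1. Add input validation to the form handler.
2. Log failed submissions to the existing error log.

YOUR TASK: Make the above requested MODIFICATIONS, then output the ENTIRE unredacted code with no omissions, abbreviations, or summarizations, and without removing ANY code or functionality unless you were specifically asked to do so, regardless of whether or not you believe I already have the requested code.
```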

When they added prompt editing and I could refine my prompt based on the output I got, I was legit pretty excited.

This system worked well for months up until this last week…

Now this thing is so stupid it couldn’t cope with an undeclared variable it created. It started putting made-up code in the outputs and kept shrinking the output repeatedly.

When I made my post in October, I was struggling to get it to break the 180-200-line barrier in code before it would totally choke, and the context memory was awful. After GPT-4 Turbo (128k) launched and ChatGPT-4 Plus got a big context upgrade, I’ll admit I was happy. That context memory update was a major improvement.

It almost felt like, with all the trash talk about how Gemini Ultra would be this big ChatGPT killer, maybe OpenAI took the leash off a little to make sure ChatGPT was performing well. Then, once Gemini Advanced dropped and it was all talk and no delivery, AND Google got exposed for all their moderation racism, it seems like OpenAI stopped feeling so threatened. Though Claude 3 dropping, and people saying Opus actually outputs code better than GPT-4, should have made them wake up a little.

As of yesterday, I can’t do it; ChatGPT became, as my kids would say, “sped”.
Historically, I could look at my chat history with ChatGPT-4 as a graph and see the waves of ups and downs with GPT-4, measured by the number of swear words in my prompts. The more times I use the phrase “F*****G STUPID!” in a particular chat, the worse the model is behaving at the time. Because once it starts acting so stupid that there’s no chance it will help my productivity, I have no reason to care anymore and would rather insult it belligerently than pretend it is doing anything useful.

I wonder how many other people get like that? :thinking:

1 Like

Same issue. I can ask it something as simple as:
“Use these three names. Put them in this order and display them this way.”
It will miss a word nearly every time, which I thought it could not do.
I find it ignores the script, even when I define which lines and what conditions.
The problem is that wrong or poorly executed answers still count against your usage.
Which is annoying.

1 Like

Sorry to see this happen to you, buddy. Hold tight; OpenAI always corrects the ship. We can’t expect their phenomenal pace of evolution without the odd snag. GPT-4 will be back, even better than its old self, in no time at all.

1 Like

I can hardly get GPT-4 to generate complete code now. If the code is slightly too complex, it will deliberately skip it and use comments to tell you, “You can write the code you need to implement here.” Such an answer is meaningless, and I have considered canceling my subscription. :frowning_face:

3 Likes

Yes, I have noticed the odd and annoying sudden return of comments like “-- Add logic here” or “-- Add desired features here”. That’s the kind of stuff that gets under my skin.

The only thing I’ve found that helps when it starts doing that is to make your request smaller if possible: provide the relevant logic and just ask it to write one function, rather than asking it to output something larger.

That trigger is basically it telling you, “I don’t want to put that much work into addressing your prompt.”
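
For example (an illustrative scoped request; the function names are made up, not quoted from the thread): instead of pasting a whole module and asking for a rewrite, something like “Here is parse_config() and the settings dict it returns. Write only a validate_settings(settings) function that checks the three fields listed below; don’t modify or re-output anything else.” tends to avoid the placeholder-comment behavior.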

1 Like

Agreed. As another member of the esteemed “My First PC Was a TRS-80” Club, this has been my biggest gripe with GPT-4 too. I use the API primarily for text analysis (Q&A stuff), and it is still the best of all the models I’ve tried so far in that area.

However, with respect to ChatGPT as a code assistant: I began my AI journey with it, and it is still the first place I turn to solve difficult issues. But increasingly, rather than argue with it to get complete answers, I now flip over to Claude 3 or even Gemini Pro.

2 Likes

My friends, our concerns are not unfounded. I assure you, my presence here is a testament to the considerable effort I’ve exerted, the self-reflection I’ve embraced, and the exhaustive measures I’ve employed. Regrettably, the service has deteriorated in quality for my purposes, prompting my decision to discontinue my subscription.

Over the past half-year, I’ve meticulously refined prompts for tasks of notable complexity, tasks which GPT-4 in ChatGPT once executed with exceptional brilliance and to my complete satisfaction. It used to surprise and delight me consistently.

However, in recent months, achieving the same caliber of results has become a battle. Previously, a single, finely tuned prompt sufficed. Now, even coding requires painstaking precision. The sense of having a reliable, skilled assistant has faded, replaced by the feeling of guiding an intern who is still learning the ropes.

5 Likes

In a lot of cases I’ve found it unusable. Forget about browsing and summarizing; it’s awful!

2 Likes

:stuck_out_tongue_winking_eye: Oh dear :man_facepalming:… I don’t know why I didn’t see it or think of it before… is this just a classic “Microsoft” interference project? i.e., make the original ChatGPT worse, so that their version is automatically better?

Yes, yes… ask ChatGPT: “How do you spell conspiracy?” But MS has been there from the beginning (with OpenAI), working in the background… then suddenly in late November there was that weird corporate restructure (the “Deja Vu cats” episode) and Microsoft really stepped into the ring. And three months later, ChatGPT can rarely get out onto the Internet, or I’m told “error… repeatedly”, or “unusual activity error”, or the web interface freezes after any input, etc.

Just saying… :sweat_smile:

1 Like

They must be trying to support Claude, which I have switched to because it just works so much better. I didn’t even choose to outright; it was out of necessity, as my coding AI work with Claude is going so much better and ChatGPT is just falling apart more and more, as if they gave up when Claude arrived :stuck_out_tongue:

2 Likes

Sir,
I will not have you insulting Mr Altman.

Good luck with Grok

1 Like

Today is Sunday, the end of the week, and the rumour of GPT-4.5 being released has proven to be just that: a rumour.

1 Like