How to deal with "lazy" GPT-4

I’ve used the free versions of Claude and ChatGPT, and even at the free tier, I can concur.

Claude :green_circle: seems to “think” through the problem a little more, BUT only after you give detailed instructions… whereas starting out, chat :green_circle: had the lead. That matters less, though, because once you get into coding you want the bot to have “learned your style”. chat behaves like each prompt is a one-off prompt: it does not take that extra step.

After the details, Claude :green_circle: follows through on every scenario without being told to do so each time (e.g., if I want output masked a certain way, it starts to mask all similar outputs with the same masking :+1:t5:; chat requires me to say that explicitly).
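To make “masking” concrete, here’s a toy Python version of the kind of rule I mean (the account-number format is made up):

```python
import re

def mask_account(text: str) -> str:
    # Hypothetical masking rule: hide every digit except the last four,
    # e.g. "Account 1234567890" -> "Account ******7890".
    return re.sub(r"\d(?=\d{4})", "*", text)

print(mask_account("Account 1234567890"))  # Account ******7890
```

Claude keeps applying a rule like this to later outputs on its own; chat has to be reminded per output.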

Here’s how I see Free (chatGPT “chat” vs Claude.ai “Claude”). Seems like Paid is similar.

Needs less initial direction?
chat :green_circle:

Needs less direction overall?
Claude :green_circle:

Fewer issues overall?
The free version of chat truncates code a lot. :disappointed:
From this thread, it seems paid does the same.
Claude :green_circle:

Extra coding expertise?
Free Claude is a little better at automatically adding error handling, too. Just a little bit.
Claude :green_circle:, or
about the same: chat/Claude :heavy_minus_sign:

Bot that writes minimal code to solve the problem?
Looking over the two codebases, I kept finding extra lines of code in chat’s output.
Claude :green_circle:

Bot that has more thorough commenting?
chat :green_circle:

Solving their own errors?
About the same.
chat/Claude :heavy_minus_sign:

Cool features?
Paid chat has a seemingly very useful permanent setting of “always provide all code”, as a screenshot above showed. I probably shouldn’t score this section.
chat :green_circle:

I still use both because I’m testing the bots on how well they code. I use the free tiers because I’m just testing, and $40 a month :blush: is a bit steep a price for testing.

Hope things get better every release!

I just had something happen right now that I never in my freaking life thought I would see.

ChatGPT-4 just made the kind of coding mistake I could expect from ChatGPT-3, but never have I seen this before in ChatGPT-4. Like code I’ve done SOOO many times, and it just got stuck, it could NOT figure it out.

I provided the output, tried to explain the problem, it wouldn’t listen, it just kept finding new stupid ways to do it wrong…

So I thought, let’s see if I can get Gemini Advanced, the AI I’ve been talking crap about since Bard was in open beta, to explain the problem to ChatGPT-4…

Bard… I mean Gemini, instead of explaining it, just output the correct code, first time.

When Gemini is doing better than ChatGPT-4 on code, you know you’ve flushed your model down the toilet.

FYI, this wasn’t some complex problem either, this was basic syntax.
ChatGPT-4 is absolutely useless freaking garbage when it comes to coding right now!

I never expected to see the day Bard/Gemini would smash GPT-4 on code, especially when the code I’m working with is in a proprietary language whose owner has a training contract with OpenAI!

1 Like

I’ve noticed this too, especially the placeholders. It began to annoy me. My solution was to switch to Copilot, because at least it’s cheaper, integrated into VS Code, and more specialised.

I also noticed a developing flat-out refusal to engage in certain controversial conversations. It also seemed biased in answers related to contemporary politics, something it could actually identify itself when prompted to scan its answer for persuasive language.

I really hope OpenAI opens it up a bit more. Currently it makes no sense to pay for ChatGPT when the free version is on offer and the paid version is just as lazy, just as biased, and just as prone to refusing to engage.

I wish OpenAI all the best and I totally respect what they’ve built. I understand that they probably put up the ‘guardrails’ to protect people, but I don’t think the ‘guardrails’ are working. They’re simply counterproductive.

1 Like

As other posters here have said, ChatGPT-4 has been dumbed down and “abbreviated”: they removed the “continue generating” button, and the context placeholder became very much shorter.

It also sometimes produces errors, not to mention occasional “network generation error” and “the system has detected unusual activity” messages, forcing me to regenerate again and again while wasting messages from the 40-per-3-hours cap.

Not to mention that even with meticulous prompting, ChatGPT-4 sometimes reverts to its ‘dumbed-down’ or ‘lecture’ mode, or outright refuses to produce the expected results, making excuse after excuse.

Whatever. We should not let these AI “overlords” make excuses for defective products and get away with it in the name of “safety” (which is totally BS).

2 Likes

I’ve also noticed this. It regularly ends up in circular reasoning, akin to the old 3.5. It’s still clearly better, but I also “feel” it’s been regressing lately.

Hi all,

Has there been any update on possible fixes? I have been trying to have ChatGPT generate some basic presentations with specific content, only to have it respond with general outlines… it has been disappointing, to say the least.

It’s become garbage; it can’t even write basic code anymore.
We’ll never know when they decide to fix it, because they never admit or talk about anything they do unless it makes the news.

They won’t ever admit they made it suck, let alone tell us when they fix it.
Prompts can get around certain things, but they can’t fix model stupidity.

It’s so bad it can’t even understand basic code syntax; everything I feed it, it just breaks. I’m done with this garbage until it’s fixed. It has never been this useless since GPT-4 launched.

I’ve been using ChatGPT since it was still on a waitlist, and I had the GPT-3 and 3.5 APIs long before I could use 4, so I’ve had plenty of experience with different models, and I can tell you with zero doubt that GPT-4 right now is at least as bad at code as 3.5 Turbo was the last time I used it.

It’s so stupid and lazy that if I didn’t know better I’d swear it was a dimwitted teenage boy. Everything you tell it to do, it just tells you how to do it yourself, and when you make it do it, it won’t do it right.

This thing is absolutely intolerable at this point, so horrible I started using Gemini today until I get access to Claude figured out. I freaking hate Gemini, but at least Gemini can usually get basic freaking code syntax right. Earlier, I was taking the moronic ChatGPT-4 outputs, putting them into Gemini, then having Gemini explain to ChatGPT all the crap it did wrong, because the more I have to type it in myself, the more pissed off and belligerent I get. The thing was so stupid it couldn’t even figure out the syntax for wrapping the parameters inside a function.

3 Likes

It’s vindicating to read this thread, because I’ve been a ChatGPT plus user for more than a year and it’s only in the last couple of months I’ve given real thought to unsubscribing.

I understand products break and that the technology is not quite there yet. But when the product regresses at the pace ChatGPT is regressing, that’s when something is seriously going wrong. And it’s not like we can expect answers from ClosedAI anytime soon.

5 Likes

If they train more on public forums, and the public forums are full of people who say “why don’t you try this google search?” for answers, then, well, that’s what we get …

I inquired about Enterprise. I know how much it costs; it’s expensive, but I also know they don’t mess with it the way they mess with Plus. I expected them to ignore me, but instead they suggested I use Teams. So I took the opportunity to tell them exactly what I think of Plus, and that I would not pay for higher rate limits on the garbage I’m getting right now.

I see people complaining about broken rate limits, but I can’t even get that far. I would pay the Teams price to get the ChatGPT-4 I had last week, but this garbage isn’t even useful.

Recently, model-breaking problems have included things such as undeclared variables, failing to grasp the notion of creating a function to do what it was inventing variables for, passing a variable on to a module, and executing a module with the same key press that initiated the module. These are basic things, yet even when I manually type the correct code or provide a source reference for context, it cannot overcome them.

I wasted over 3 hours trying, unsuccessfully, to get it to help me resolve a problem it had caused. That forced me to go over the entire system line by line before I found out it had subtly changed the order of the arguments passed into a module, and even after looking over the module and the script that launched it, it continued to do so repeatedly. The arguments were similar, so visually I didn’t notice when it swapped them, but as an AI model it should have caught that. That’s such basic stuff: you can’t declare a module with one order of arguments and then change that order when you call the module; it crosses the data stream, so you end up with switched variables.
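My actual code is in a proprietary language, but the bug class is trivial to show in Python (all names here are made up):

```python
# The module declares its parameters in one order...
def move_player(x_offset, y_offset, speed):
    """Pretend module entry point."""
    return {"x": x_offset, "y": y_offset, "speed": speed}

x, y, speed = 10, 5, 2.0

# ...but the model's call site silently swaps two "close" arguments.
# Nothing errors out; you just get crossed variables downstream.
broken = move_player(y, x, speed)    # wrong: x and y are switched

# Passing by keyword makes this class of bug impossible:
correct = move_player(x_offset=x, y_offset=y, speed=speed)
```

That’s the kind of mistake it kept reintroducing no matter how many times I corrected it.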

I tried to simplify what I asked of it, only asking for help with smaller code blocks or one function at a time; it can’t even do that. It can’t get syntax right, and now all of a sudden it keeps trying to use deprecated code. Last night I told it, “you can’t load an animation that way, that’s deprecated and doesn’t work anymore, this is how you properly load an animation… [provided one of the lines that was wrong] then [gave example of how it should load]”…

ChatGPT-4 "You’re right, I apologize, here’s the corrected version without using any deprecated code…[proceeds to provide the exact same deprecated code I just told it was wrong].

I understand that for most people using ChatGPT-4 for writing, these kinds of changes have less impact, though I imagine many are still affected. When you use it for code, however, having it fluctuate between very useful one day and totally useless the next is infuriating.

One day it’s helping me handle a bigger workload; the next, using it slows me down so much I can’t keep up, because it has become a burden. So I’m stuck with a workload I wouldn’t have taken on had I known it would suddenly turn to trash.

I know the platform I develop on has a deal with OpenAI, and it’s not a small deal. That company has provided OpenAI with nearly all the reference data that exists to train with, including their knowledge base on things like which code is deprecated. There’s no good reason for ChatGPT-4 to be completely broken by these basic things, especially since they deprecate slowly: the stuff that’s actually broken and won’t work is usually years old, so it’s not a matter of the training data being too old. The stuff it was doing last night was deprecated before ChatGPT even existed.

What makes it worse is that everything I’m doing now, I’ve done with ChatGPT since 3.5 Turbo. Once 3.5 Turbo 16k came out, the API didn’t have these issues. It feels like the only solution is to update my chatbot and see if I can get the API to be more useful. I just need to add a feature to save conversations and update them with context; right now all I have is “copy chat” (so I can paste it back in as context) and “copy selection” (so I can pull code out without the whole chat). I haven’t even looked recently at what’s changed in the API to see what options there might be.
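A minimal sketch of that save/reload feature, assuming the current openai Python SDK (the model name and file path are placeholders):

```python
import json
from pathlib import Path

from openai import OpenAI  # openai SDK >= 1.0

HISTORY = Path("conversation.json")  # placeholder path
client = OpenAI()  # reads OPENAI_API_KEY from the environment


def load_history() -> list:
    # Reload the saved conversation so each request carries full context.
    return json.loads(HISTORY.read_text()) if HISTORY.exists() else []


def ask(prompt: str, model: str = "gpt-4") -> str:
    messages = load_history()
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model=model, messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    HISTORY.write_text(json.dumps(messages, indent=2))  # persist the thread
    return reply


print(ask("Pick up where we left off with the module refactor."))
```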

3 Likes

Training on public forums is not relevant to things like code, syntax, following instructions, summarizing outputs, or any of the other problems people in this thread are having.

There are things the model should do and improve on over time, and things it shouldn’t. The update that took away the “continue generating” button obviously changed the way the model works, and it clearly included changes unrelated to training data, changes that made it useless for a lot of things it previously handled well.

If you’re using AI for research, like that lawyer did, you should expect to have to verify the data. But when you use it for code, there’s no data it’s going to pick up on a forum somewhere that will set the model back to 3.5-level code interpretation or suddenly make it unable to recognize basic syntax problems.

The only thing that causes stuff like this is when they change the context allowance and how much processing time it’s allowed to devote to filling your request. That’s why some things are less impacted than others. Code generation is probably the most impacted by this kind of throttling.

1 Like

It’s undeniable that ChatGPT-4 isn’t what it once was. I’ve utilised it since the ChatGPT-3 days through to the latest version, ChatGPT-4, relying on it for research, summarizing documents, creating bullet points, drafting emails, and correcting grammatical errors. For these relatively straightforward tasks, it’s not performing well and often provides generic responses.
Some might argue it’s all about the prompt. However, when using the same prompts as before, I don’t receive the quality of responses I used to get. I got Pop.ai (the unlimited plan, at extra cost) to enhance my research capabilities.
If anyone in this community could recommend a better platform, I’d be grateful. My primary use of AI is for researching, reading documents, and summarising regulations, precedents, and research findings in different countries.

4 Likes

I echo the sentiments here. I’ve been a Plus subscriber since Dec 2023 and have been using ChatGPT since Nov 2023, and I have noticed a drop in quality in coding responses. I still find it useful, but on anything fairly complex it usually needs many prompts, often omits code, and changes variable names needlessly, so I often need to amend what it gives me.

I tried Claude 3.0 Opus, and if I hadn’t known I was using a different AI product I would have said it was GPT-4, as it didn’t give any better responses imo. I only tried it briefly though.

Edit: tbh, thinking about it, it could be at least partly down to me. My requests have become more complex as my app has developed, so I assume it’s harder for the AI to get them right. Also, with Claude and ChatGPT both often giving what I’d consider less-than-brilliant responses, maybe it’s down to my prompts.

Edit: Today I used the same coding prompt with both AI products, and the Claude response was better. Neither response worked, but Claude gave the full code solution.

3 Likes

I still use free ChatGPT with GPT-3.5, and it seems to have become lazier too.
I was thinking about going Plus, but from what I read here, it now seems like a waste of money until something changes. Any word on when the new GPT may roll out?

Yes, definitely don’t waste your money. I would suggest using Gemini. The rollout of GPT-4.5 is rumoured for the summer.

4 Likes

It is extremely frustrating that we have so much feedback on this topic here in the forum and that OpenAI will never reply or react to any of it… We are at the dawn of public LLM use, and already their monopoly has grown so big and strong that it is too big to fail, so they don’t give a s*** about customer feedback.

It seems this ship has already sailed, which is very disappointing. :pensive:
I really hope Anthropic / Claude will handle this better, because OpenAI has apparently already decided to switch to the dark side.

2 Likes

I think they are confident that as long as they release a superior model we will all “forgive” them.

1 Like

I don’t even have many words left for these companies. I keep jumping from one platform to another in search of a model that adheres to my instructions. My use case doesn’t even need GPT-4-level capability, so having jumped to alternative GPT-3.5-level models that solve my issues with CAI, I don’t bother much with what’s going on at this end, except that I keep checking from time to time for any hope of a stable Assistants API. Of course, I know alternative platforms are cooking up this feature as well, so the more CAI keeps prioritizing based on how much you spend with them, the more I (a negligible dev) will keep running to platforms that do less of that and still have the capabilities I’d otherwise miss here.

1 Like

Let’s face it: This is the truth, isn’t it?

But what they don’t realize is that very few of us are going to “forget”. And I’m thinking a day will come when the question isn’t so much “who has the best model?” as “which company do I actually want to work with?”

2 Likes

I think it’s the truth for users who mostly interact via the chat interface and have a relatively low switching cost. But if you are trying to build a business on their API and they screw you over while you pay a fortune, you lose trust.

4 Likes