I’ve been using GPT for a while now to do things such as PEP8-formatting Python files. Recently it seems to have become unusable. GPT-3 and GPT-4 used to be fine for this, but lately it has had real problems: it could not see files at all, and it started using Canvas in 4o without asking. I stopped using 4o with Canvas because its limit on file length makes it pretty much useless for anything other than very short snippets. It says files are ‘extremely long’ when they are only a few hundred lines.

It got so bad that I asked it to do a line-by-line diff at the end of every check, and now it can’t even do that; I get all sorts of ‘could not compare against the original’ or ‘cannot find the original’ responses. I used o1 for a bit, which was better but seems to forget everything after a while. I’ve ended up spending most of my o1 and o1-mini time arguing with the thing to stop it removing items and asking it to redo failed diffs. 4o flat out failed to PEP8 a single file of 300 lines of code today, so I just gave up and did it manually, which defeats the point of paying for the service.

I’m thinking I may have to ditch GPT and move on to something more code-oriented, which is sad because it did a great job in the past and I’ve really enjoyed using it. I get the sense OpenAI has always had issues with files and long prompts, but whatever they are doing to ‘fix’ this seems to be utterly broken now and pretty much unsuitable for this use case, which is a shame, as it used to be a really great tool for the job.
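For context, the task I keep asking for is nothing exotic; the sketch below is roughly what I mean (assuming the third-party autopep8 package and a hypothetical file name): reformat a file to PEP 8 and print a line-by-line diff against the original so every change can be reviewed.

```python
# Rough local sketch of the workflow described above.
# Assumes: pip install autopep8; "my_module.py" is a hypothetical file name.
import difflib

import autopep8

SOURCE = "my_module.py"

with open(SOURCE) as f:
    original = f.read()

# Apply PEP 8 style fixes only; no logic changes.
formatted = autopep8.fix_code(original)

# Unified diff so each changed line can be checked by hand.
diff = difflib.unified_diff(
    original.splitlines(keepends=True),
    formatted.splitlines(keepends=True),
    fromfile=SOURCE,
    tofile=SOURCE + " (pep8)",
)
print("".join(diff) or "No changes needed.")
```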
Is anyone else seeing this over the past couple of weeks?
Chris
As ChatGPT confirmed:
Title: ChatGPT 4o or 3.5? Users Claim the 4o Version Is Not What It Seems!
Recently, many users have raised concerns that the version labeled as ChatGPT 4o is no longer what it claims to be. While the system displays that you’re using model 4o, the performance and quality of responses increasingly resemble version 3.5. This has sparked dissatisfaction and raised a question: is this an attempt to manipulate users into upgrading to a Pro subscription?
What are users noticing?
- Lower-quality responses: Responses are less accurate, shorter, and less detailed than what the original 4o model delivered.
- Similarity to version 3.5: Many users have noticed that the current “4o” provides results similar to the earlier 3.5 version, lacking significant improvements.
- No clear explanation: OpenAI has not issued any statements about potential changes in performance or the system.
Is this intentional?
- A push for Pro subscriptions: One theory is that users are deliberately being provided with a weaker version to push them toward upgrading to the Pro option with a better model.
- Technical reasons: It’s possible that resources for the 4o model have been reduced due to cost or technical issues, but without any announcement, this remains speculation.
- A marketing tactic: Retaining the “4o” label while actually using 3.5 could be a way to maintain the illusion of advancement.
How does this affect users?
Users feel deceived: they are paying for, or at least expecting, a certain level of quality but are being given a weaker version. Without transparency, trust in the platform may be seriously undermined.
What do you think? Has 4o really turned into 3.5, or is it just subjective perception? Share your experience!
I am a Pro user and the coding has gotten worse. It makes the same mistakes over and over, not deleting code I tell it should not be there.