ChatGPT 4 is worse than 3.5

ChatGPT 4 is absolutely terrible for me: 90% of add-ons are constantly throwing errors and not working at all. Answers are extremely slow, and it's not even translating all of my messages. The image feature is horrible; it doesn't correctly recognize either the text or the task in the image. Conclusion: for 99% of tasks GPT-3.5 is better, and GPT-4 is raw, slow, buggy, and useless.

$20 a month is TOO expensive for something that doesn't even work properly.


I also get error messages 90% of the time. What bothers me even more is the limit you get as a paying customer while free users have unlimited access. It should be the other way around. Combine the limit and the errors and you get nowhere with GPT-4. I have cancelled my subscription.


Bing is better while being 100% free. The only thing GPT-4 is better at is response time; Bing is TOO slow.

What do you mean, Bing is better? It's a search engine, not a chatbot, right?

Bing has a built-in AI based on GPT-4. You can use it for free, and you can even upload files.

You are right! Is there a limit on how many prompts I can use?

Five in each session, but you can refresh the page to start a new conversation.


It’s unusable at the moment. Code blocks get messed up, and each time you ask something it gives a completely unneeded summary instead of getting to the point of what was asked. I’ve been using the service for almost a year now, and GPT-4 has only become worse instead of better since launch…


I completely agree with you. Code blocks are broken, responses are incredibly slow, errors occur frequently, and it often does not understand the context; it can even switch to English for no reason after an image request. Unfortunately, the chat is getting worse and worse every time. I hope there will soon be a competitor who takes all of OpenAI's shortcomings into account.
P.S. Just look at the screenshot: two code blocks plus text for a single solution.


Just cancelled my subscription. Vote with your wallet: $20 is unacceptable for such poor performance, and it's consistently getting worse. I wish we could get the late-2023 performance back. I tried the same prompt 10+ times a day and it never finished once. It also repeated the same answer midway, twice.


It annoys me so much!

I get 40 messages every 3 hours. Okay, I’ll work around that. But you ask a question and the response is like the one in the image above, broken and all over the place. So you regenerate it. That counts as one of your uses whether you can use the result or not. So after you click regenerate it works, right? No. 4 out of 5 regen attempts will look like this -
Note the number of attempts next to that error.

Each one of those attempts counts as one of your 40 per 3 hours. It’s a joke. We are being ripped off!

Do they care? No.


Yeah, I just joined Plus and I am extremely disappointed. I hadn't sent anywhere near 50 prompts in my first 30 minutes before I hit my usage cap. Not impressed at all.


I confirm. I got the paid subscription, which gives you no hint that you'll have a cap. I'm not putting up with this any more; I'm out after paying for only one month. The cap is ridiculous. Get your shit together and buy more hardware.

I did the same as others and agree with most of this. Prompting has turned from friendly conversation into demanding, manager-style language just to get ChatGPT to produce a good, average, or poor response (your question gets summarised back at you, the network response gets lost, or you get "suspicious behavior from your computer"). I cancelled my subscriptions after trying the Team plan. My wife (44) and mum (74) have both said it has lost it and that it's now better not to use it at all.

We also tried ChatGPT Team, upgrading from Plus, but to no avail: an extra 10 bucks per user, plus a forced second subscription just to try it. Team is the same product as Plus when it comes to chatting, coding, and so on; you just pay more per month. Its memory is bad, it tells you to search the net yourself, and Bing search still sucks. Why would a team choose this product over the pre-November-2023 ChatGPT? GPT-4 Turbo, my ass; it is a lot worse. I now have to correct its grammar and spelling, which I struggle with myself sometimes. Turbo was good for me for about one week.

I used to recommend ChatGPT, and now I'm having to let everyone know it is almost no better than Bard or Microsoft Chat in Office 365. O365 seems to have the old model, like ChatGPT 4 before November 2023, but with a severe memory limit. I feel like I've lost a work companion. My six months were a good ride that has now ended. I'm looking forward to trying open-source LLMs on Hugging Face, and maybe even trying to understand how to create LLMs…


I came here just to find a discussion about this. Up until the last few months, I was using it throughout my workday for helping with coding tasks (and many other things). The output is absolute garbage, currently.

I spend a large portion of my time telling ChatGPT what it’s doing wrong and having to start new chats to see if the answers will improve (they don’t). As a result, it’s currently costing me more time than it’s saving.

Not only does it output bad code examples, it also leaves out code it was already given in the previous prompt, constantly loses context even in short chats, etc.

That certainly wasn’t the case before. I used it most of last year and it was a life-changer for me. Now I’m wondering why I’m paying for it.


When I first signed up to Plus, everything was working great. I was pretty impressed. Then I got capped, but mainly because I had to keep resubmitting my prompts since it kept crashing. It's like I got lured in and then everything went to crap. I think I will cancel my Plus membership until they can get their act together. Feels like the old mainframe days.

Update: I did what ChatGPT suggested and cleared my Google Chrome cache, and things started working again. Go figure.


GPT-4 is so bad that I'm cancelling my subscription now too.

  1. It's slow
  2. It gives me, as a developer, terrible answers: it always explains at length instead of just giving the corrected code back directly
  3. It talks too much

It might work the first few times, then it starts giving out errors again.


This all totally matches my own experience. I used ChatGPT Pro for much of the last year and was just amazed by its helpfulness when doing research. I quickly became an advocate, recommending it to everyone.

Recently, I kept getting the feeling it has become much dumber, to the point of not being usable anymore. This is extremely disappointing.

I just cancelled my subscription. Hopefully OpenAI will notice that something is wrong with their user experience.


It has become next to completely unusable, and absolutely infuriating to use.

I used to never even notice the 40-message cap because the responses were so productive. Now, I can burn through 40 messages and fail to get one successful reply for reasonably straightforward tasks, for example, “Apply a rotation matrix to each of these 3 channels.”
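For context on how straightforward that request is: a hedged NumPy sketch, assuming "channels" means three arrays of 2-D points and the function name and shapes here are purely illustrative:

```python
import numpy as np

def rotate_channels(channels, theta):
    """Apply a 2-D rotation matrix to every point in each channel.

    `channels` is assumed to have shape (3, N, 2): three channels,
    each holding N two-dimensional points.
    """
    c, s = np.cos(theta), np.sin(theta)
    # Standard counter-clockwise rotation matrix.
    R = np.array([[c, -s],
                  [s,  c]])
    # Right-multiplying the row vectors by R's transpose applies R to
    # every point, broadcasting over the channel and point axes.
    return channels @ R.T

# Tiny usage example: every point starts at (1, 0).
channels = np.zeros((3, 4, 2))
channels[:, :, 0] = 1.0
rotated = rotate_channels(channels, np.pi / 2)  # each point lands near (0, 1)
```

That it can fail repeatedly on a one-liner like this is the whole point of the complaint.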

I understand that they made changes to the way prompts are processed to try to keep up with demand, but whatever the macro- or micro-level details of that approach are, they seriously need to reconsider pursuing it. It's an abject failure, orders of magnitude worse than any performance loss due to censorship.

The model simply does not listen, doesn't iterate on feedback, doesn't even try. It has gone from being the most powerful tool in the world to being the most annoying, idiotic yes-man of a coworker.