The good, the bad, the disasters

So I’m new here, and I’m hoping I’m posting this in the right place.

Also, pardon my spelling and grammar: I am Norwegian!

I should probably start by saying that I’m no longer using GPT for larger projects, as it’s become a source of problems rather than solutions.

I am a writer who genuinely enjoys working and co-writing with AI - really! But over my trials and paid subscriptions to GPT (as well as Gemini, Claude, etc.), I’ve come to the conclusion that despite GPT charging onward with all sorts of “new” functions, it more often than not leaves a disappointing taste. Mostly because I expected a little better than the constant battles we as users have to cope with in GPT, both old problems and new ones. I don’t expect perfection (I can handle having to correct GPT, remind it of things, and reshape the story or conversation), but I do expect at least some level of consistency, and that the information I get in the conversation matches what the help website says.

What really made me leave, with no intention of becoming a paid subscriber again, was the utter lack of response and help for a problem I’ve only encountered with GPT.

Things like GPT randomly deleting parts of the conversation - literally. When I refer to a point that was written less than an hour earlier (not counting the countless corrections), I find that parts of the conversation have been deleted. When I ask what happened, it simply says, “Don’t worry, I still have it.” Spoiler: it doesn’t. So not only does it delete whole sections, it then lies about it. When I ask for help, I get inconsistent answers. Like many others, I imagine, I rushed over to the help website to ask what had happened and what to do about it. The answer is what did it for me: apparently, the problem was on my end. With that, GPT is basically pushing the consequences, the flaws, and the burden of fixing what I read is a common problem onto me, the user. And that’s just one of several major issues off the top of my head!

I don’t know about you, but I would think that GPT, a data-collecting AI with a knack for gathering personal data, would be able to narrow a larger project down to a need-to-know basis and stay there, without being spoon-fed the manual every fifth entry or so. Apparently not.

It seems to me that GPT is heading into a nasty-looking spiral: making new functions while ignoring problems reported by users, redirecting the problem-solving onto the user, then making even more functions, and (from what other disappointed users have told me) being shocked when people stop paying for the service and even go to competitors. Gemini, at least, has addressed the issues and listened to its users - and I have yet to be told that my cache was the problem. Claude and Copilot, for all the flaws that make AI seem like a newborn calf on ice, don’t ask me to wipe my entire cache just to work. Only GPT does that, to date, and yet the users’ devices are the problem? Something stinks.

Again, I’m not expecting perfection. AI as a technology is new, and we have to put up with the growing pains that come with it. But if GPT can’t handle what competing businesses can, they should probably work on whatever made GPT the leader in the first place.
