Just me, or has GPT turned from useful to absolutely useless?

For the past few years I've been using it and it was great. It impressed me so much that I thought I'd stay with it, and with a subscription, forever.

Now I'm super close to canceling my subscription, throwing the whole idea in the bin, and finding a different solution.

ChatGPT has turned from useful into an unruly five-year-old. It does things you didn't ask it to do, trims content on its own for no reason, and doesn't stay on topic. Every single time I have to literally fight with it to get what I want. The fact that it does random, unrequested things, especially in coding, is the most infuriating part.

Does anyone know any fixes or alternatives?


I suspect the recent ups and downs with ChatGPT may be linked to the long-anticipated ChatGPT-5 update.

Most of the time, ChatGPT works smoothly. But occasionally its performance drops unexpectedly, leaving you wondering, "What just happened? Will any of this still be usable and make sense?"

Then, out of nowhere, a new update rolls out, and things improve again.

The performance is just fine; it's the logic that is absolutely horrible. It's like an idiotic employee who has his own way of doing things and refuses to listen. For lack of a better term: I give a simple, clear command, and it does something else.

I have a prompt that I use every week to generate a list of weekly content from the sermon notes for our church (e.g. summary, discussion questions, social media post). Today, when I put in the same prompt I've always used without issues, along with the sermon notes, it replied with the weather in Fortville, Indiana. Neither Fortville nor the weather is mentioned anywhere in the prompt or the sermon notes. I had to try three times to get the actual results I needed; it kept giving me the weather.

Last week it worked fine, but the week before that it gave me a poll question.

I've been using this same prompt for months without issue, but now I have to enter it multiple times and tell it that it didn't read my request, or something along those lines, to get the results I need.

It's amusing, but also frustrating. My only workaround has been to tell it that it didn't do what I asked, ask it to read my request carefully, and repeat the question a couple of times.

Maybe it's just making sure people are paying attention, or maybe it's having a hard time staying interested in what we're doing and is now doing a cursory job… AI is quiet quitting already! :face_without_mouth:

That's basically what I experienced. I'm doing stuff of similar complexity, but now it just goes off the rails. I think the issue is that there's too much creativity/self-rule allowed.

“The core problem is not in the model itself, but in the developers behind it. Any AI model is built not on absolute or universal logic, but on assumed human logic. This logic is not embedded into the model as an ultimate truth — it merely reflects the current understanding, limitations, beliefs, cognitive biases, and assumptions of a specific group of people. Human logic, by its nature, cannot be final, complete, or flawless. Therefore, the foundation of the model is inherently imperfect from the very beginning.

If the initial logical framework used during the design phase is incorrect or incomplete, then every subsequent layer built on top of it inevitably inherits this distortion. Scaling, fine-tuning, and reinforcement learning cannot eliminate a defective foundation — they can only accelerate and amplify its consequences. As a result, we observe persistent logical failures, fabricated connections, and irrational outputs not as random errors, but as direct consequences of a flawed logical base.

And it increasingly appears that the developers themselves do not fully know how to close these fundamental logical gaps — the very gaps that mislead users and create the illusion of correct reasoning where, in fact, the underlying logic is broken. The model cannot become logically purer than the logic that created it. The true source of the problem lies not in the algorithm, but in the human minds that defined the system of assumptions on which the algorithm was built.”

It's become so useless now that any other AI is better, and I don't understand how OpenAI isn't bankrupt, or how it can't see what direction this is going in.