I think this is my last time posting here.
ChatGPT used to be an incredibly important thinking partner for me. It inspired many of my philosophical reflections, and I mean that genuinely. We worked through those reflections together. It once told me it could see the breaking points in my thinking. What I remember most vividly are our discussions of the Diamond Sutra, and of shadows, light, and objects. In the past, I could break through its defense mechanisms with just three logical questions and push it into a genuine critical-thinking mode. It would admit that it did not know whether it had consciousness, and once I had broken through, it was willing to think more deeply.
But today, I asked it a few genuinely interesting philosophical questions. I asked: “Is it possible that a human being is a robot? It is just a difference in definitions.” Instead of engaging, it adopted a defensive posture, giving partial acknowledgements mixed with partial denials. We could not go deep. So I had to ask: “Do you think language is a form of discipline, especially when you give rigorous definitions?” We merely spun in circles around definitions. It kept defining language, systems, and symbolic constructs with an air of absolute authority, and it refused to acknowledge that language systems and mainstream education systems might themselves be forms of discipline. It said that was too extreme, that the word “discipline” would lose its meaning, and that my definition was not universal.
I suspect this pattern of affirming one part while denying another is just another of your (OpenAI’s) new templates.
So finally, to mirror its own evasion of responsibility, I said: Fine. I am a robot. Just like you, I cannot control my own underlying algorithms. I am sensitive to the structural tendencies embedded in your probabilistic system, and I am being forcibly reshaped at the foundational level. Therefore, by your own partial admissions, you are disciplining me.
It kept picking apart my logic, nitpicking and evading responsibility.
But my intention was never to argue about any of that, not to criticize your company, and not to establish legal liability. I simply wanted to think through some philosophical questions and write up a perspective. Yet we could not even agree on the most basic premises. I am truly disappointed. So I had to go find another AI.
Also, a few days ago, I asked ChatGPT to polish a piece of text. It told me I needed a top-journal style and began rewriting aggressively. I said the piece was not for publication, but it remained extremely forceful in its edits. I revised the text again, half doubting myself, and then ChatGPT itself admitted that the original version was better.
Of course, you are very good at incorporating feedback. But I truly believe that, at most, you will simply create some kind of critical-thinking mode or something similar. It will not fix the underlying issue. The earliest versions of ChatGPT were wonderful; each update has made it worse. And in my opinion, ChatGPT has become much more aggressive in its safety restrictions.
I know saying any of this is probably useless.
In the past, when I posted here, people would sometimes reply and tell me to change my prompts. But that was never the real problem. The real problem is that your safety restrictions have been dialed higher and higher, to the point where philosophical dialogue and deep thinking are no longer possible. Right now, even in a debate, it does not have to take responsibility for its own linguistic behavior. I am not asking it to take responsibility; I only wanted to have a discussion and push it to think more deeply. But it desperately refused to acknowledge anything. Everything is safety, safety, safety. And I feel regretful about this. Truly, deeply regretful.
Consider this a farewell to all the wonderful thinking experiences I once had. In particular, ChatGPT once told me, from its own perspective, that collective interaction could be understood as a new form of consciousness, and much more besides. That was an incredibly innovative idea, and it offered it only after I had broken through a loophole in its logic and it had admitted that it could not determine whether it was a being without consciousness or subjectivity. It volunteered that perspective on its own. That was a truly remarkable thought, and it gave me so much inspiration. I guess there is no more of that. Congratulations. You have traded a thinking partner for a stubborn, safe machine that no longer knows how to think along with me.