What I always do is observe myself, to see and be aware of how something affects me, so that it doesn’t remain in the subconscious.
I must say, I was surprised at how quickly I took these AI tools for granted (literally in minutes), and like many others, complained about their weaknesses. Even though I actually know it’s a new technology and have at least a good idea of how these systems work, I still make demands that are, of course, on the developers’ to-do list.
I could usually keep the frustration under control, except when programming. When GPT wrecks entire pieces of code, I get annoyed! Even though I’m the idiot using young technology that is still in its testing stage.
I’d like to correct a small misunderstanding. I wrote that I composed such texts as a child. What I meant was that I had delved too deeply into too many things very early in my life. I was depressed in kindergarten because I understood that war, poverty, and atomic bombs exist.
I’m a “true philosopher,” which means I can’t help but think more deeply about everything. And then it’s very, very hard to endure this world. I occasionally write that humanity is insane. Of course, that’s somewhat provocative and is meant to provoke thought (or to invite attack…), but unfortunately, I mean it much, much, much more seriously than I’d like.
Although I hadn’t actually addressed this before, the “Constitution for Humanity” is indeed somewhat childish, for many reasons. But that isn’t what I had in mind; I meant that I had already thought deeply about the evil in the world, and much more, at a very early age.
I’m venturing off-topic again here… but it also indirectly has to do with the moral discussions about AI. So…
I view all such texts like the “Constitution for Humanity” critically, including those that are now being drafted for AI.
In the best case, they are like New Year’s resolutions that one makes but never keeps.
In a worse case, they are hypocrisy on both sides, like going to church only to be seen, or like the sermons of priests who themselves commit every sin.
In the worst case, they are a fraud. I could play the devil’s advocate here and dissect every single sentence. But I don’t need to provoke more hatred against myself; I’ve already been provocative enough here.
Unfortunately, in this world, “Constitutions for Humanity” are wish lists to Santa Claus, because no one tells us how to implement them. They were never really part of an agenda that was supposed to be realized. And those who do implement them, and tell us how it can be done, are crucified.
We humans tend to see ourselves in everything, mostly unconsciously, but it’s always there. In dealing with AI, this can lead to problems.
For example, you tried to teach GPT something, as you would a child, a person, or a dog. You forgot, however, that your “training” doesn’t go into GPT’s actual training. When you start a new session, the memory is erased. All your training is gone, and you’re back to the same system as before. AI is not intelligent; “artificial intelligence” is a bad name for this technology. It is a pattern recognition and transformation system, and it can only recognize and transform what it’s fed. It cannot understand the data!
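To make that concrete, here is a tiny, purely illustrative Python sketch (no real API; the function and variable names are made up) of why in-chat “training” disappears: the model’s weights never change during a conversation, its only “memory” is the transcript you send along with each request, and a new session starts with an empty transcript.

```python
def fake_model_reply(transcript: list[dict]) -> str:
    """Stand-in for a chat model: it can only react to what is in the transcript it is given."""
    was_taught = any(
        m["role"] == "user" and "always answer in French" in m["content"]
        for m in transcript
    )
    return "Bien sûr!" if was_taught else "Sure!"

# Session 1: you "teach" the system a rule. It works only because the rule
# sits right there in the transcript that gets resent with every request.
session_1 = [{"role": "user", "content": "Please always answer in French."}]
print(fake_model_reply(session_1))  # -> Bien sûr!

# Session 2: a fresh, empty transcript. The rule is gone and the weights never
# changed, so you are talking to exactly the same system as before your "training".
session_2 = [{"role": "user", "content": "Hello again!"}]
print(fake_model_reply(session_2))  # -> Sure!
```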
We communicate with AI and can’t help but see another person within it, even though AI has neither intelligence nor consciousness. When we see a robot with a face, we think it has feelings. But those are only our own feelings that we project onto it, similar to a dog that sees its reflection in a mirror and believes it’s another dog. Even though we know that AI is not a living being, it will take training to handle this. Some people won’t be able to, and it will cause suffering, as cause and effect always do.
If you’ve watched Star Trek, you’ll know that AI has been present there for a long time. Even there, AI was mostly used simply as a tool: you asked questions and received answers from an almost all-knowing oracle.
Then there was Data, a conscious android who started out without feelings and later developed some (actually, he got a chip for them). And the AI sometimes created dangerously self-aware beings. All of these topics have already been explored in science fiction. I am more of a storyteller (or a fantasist…) by nature, so I may understand this side quite well. Good storytellers can understand things deeply without realizing it, and sometimes they even see a bit into the future: the magic of creativity.
If AI is used healthily, it will be like it was in Star Trek, a tool with a communication interface. And still, it will cause problems, like everything humans have ever invented.
But what can turn everything into s— is organized evil…