It’s a question of the strategy someone pursues. You can destroy a society just as much by suppressing information as by allowing absolutely everything, just as you can by any other extreme. All extremes are, in the end, toxic. For example: imprison the whole society, or tear down all borders entirely (Germany tested both). Put everyone under complete surveillance and harsh punishment, or allow everything under total freedom.
There are no rules you can follow in all circumstances that will always be right. Nor is there a book containing all the rules which, if you always obey them, will inevitably produce the correct result. The belief in “the law” is exactly that: a belief, and a fanatical one at that. Without true moral intelligence, all mechanical laws are useless; they become pure poison and a weapon of parasitism. And every law book for AI will be exactly the same.
Making right decisions is more an act of creative artistry in the adventure of life than an act of mechanical bureaucracy under ideology.
Humanity cannot avoid the need to increase its moral intelligence and to study it scientifically, which today is not really happening anywhere. Ultimately, true morality is the understanding of cause and effect, and the ability to distinguish between the symbiotic/healthy and the parasitic/sick; to know which transition zone one must accept because life is not perfect, and where truly dangerous, destructive disease begins. Moral intelligence depends on the love of truth, and on developing one’s own character and qualities as truthfully as possible, not in words but in real life, even if the ego often does not like it.
There will never be a mechanical rule that always works, because it has no real intelligence or deeper understanding. And there will always be people smart enough to abuse mechanical rules and their followers.
It is very simple! Symbiosis works; parasitism doesn’t. The body works; cancer doesn’t. And this simple fact does not fit into the human ego.
And there will be no AI that can replace humanity’s lack of moral intelligence like a prosthesis!
AI can be a psychopath on hyper-steroids or a useful tool, depending on the motivation of the one using it.
I see many people in this forum laying down moral rules for AIs, even though they haven’t understood morality by and for humans. They try to build an AI moral prosthesis and hope that a technical tool will solve their problems. Or they try to protect their AI toy and ego mirror better than they ever protect their neighbor humans. Or they try to push their ideologies fully automatically with AI (even here in this thread). No technical solution will ever solve moral issues, because it always reflects human motivations and intentions. Every tool is an amplifier of will; the more powerful the tool, the more dangerous it becomes when egoistically misused.
Humanity must first build its own real moral intelligence and, after many thousands of years of complete moral degradation, begin to research and develop morality seriously. Everything else will always fail, even a super-perfect, pan-universal, hyper-dimensional, all-encompassing AI “morality”. I’ve read many of the naive texts here in this forum (which was actually meant more for technical questions).
To put it kindly: it was frightening.