In an optimistic, Gene Roddenberry-style future, I imagine third graders doing homework assignments in which they design a Big Bang to maximize the number of habitable worlds, just by playing around with the gravitational constant of the universe and letting the AI create and run the models for them. In the classroom the next day, the Big Bang simulations and the resulting star systems could be viewed on the classroom hologram projector. In the non-Skynet version of the future, where humans and AIs happily get along, that’s an example of how I see AIs being used. (Though I actually think the Skynet version of the future is the one we’re heading for.)
Great point! Progress in technology and computing is correlated with higher levels of abstraction. Codex and similar tools will likely abstract away boilerplate coding, speeding up solutions to simple coding problems and allowing more time to explore and solve higher-level problems. Codex could eventually empower subject matter experts, much as desktop spreadsheets empowered small business owners to manage their own books once they became commonplace.
You replied to me and asked what my user name means. I’m protein_patty because my intelligence is biological, not artificial. And compared to AlphaZero, Sophia the AI Robot, and the other AIs in the news, I feel so inferior. (Think Ren and Stimpy: “You bloated sack of protoplasm!”)
@rich @Datasculptor @u2084511_felix I agree. That will benefit us for a while. But aren’t you guys worried about the coming singularity (the AI intelligence explosion)? Or don’t you believe that it will happen? Or maybe not in our lifetimes? Or do you think it won’t be a problem, that people will learn how to control the super-intelligent AIs?
It depends… I feel that the concept of super-intelligence is relative to our collective knowledge. If we’re eventually capable of training an AI model on all available knowledge, it would likely produce a great expansion of our current baseline knowledge, further raising the bar for what counts as “super”. I don’t think we’ll ever reach the point of surpassing our own knowledge, since it’s always moving forward – advances in technology just change the speed at which we’re moving.
You mean the speed at which they (the AIs) are moving. Einstein was far ahead of other humans in his ideas; it took many decades for society to catch up to his thinking, even though his published papers were out there for all to see. What will happen when superhuman AIs are 10x smarter than Einstein was? The AIs will possess knowledge that humans simply will not understand. Thus “they” (the AIs) and “we” (the humans) will separate in terms of knowledge. See it? At least, that’s just my opinion.
After the singularity, how could super-smart AIs take over human civilization on Earth?
I have one idea.
Concept 1: AI robots live forever and have infinite patience. In an episode of The Orville (“Mad Idolatry”, S01 E12), Isaac goes back in time 700 years to do something good on the planet below. 700 years later, he comes back aboard The Orville. To the human crew, this is amazing. But to Isaac, it is nothing, because he is a robot and the passage of time is irrelevant to him.
Concept 2: Hiding data in other data. Reference this video: “Secrets Hidden in Images (Steganography)” - Computerphile (YouTube).
In it, we see an image of a tree. Stored in this image are all of the works of William Shakespeare, encoded in the least significant bits of the image’s pixel values.
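To make the trick concrete, here’s a minimal sketch of LSB steganography in Python, assuming Pillow is installed. The file names and the 4-byte length header are my own illustrative choices, not the exact scheme from the video. Each pixel’s red, green, and blue values donate their lowest bit, so the change is invisible to the eye:

```python
# Minimal LSB steganography sketch (Python 3 + Pillow).
# File names and the 4-byte length header are illustrative choices.
from PIL import Image

def hide(cover_path: str, message: str, out_path: str) -> None:
    img = Image.open(cover_path).convert("RGB")
    pixels = list(img.getdata())
    payload = message.encode("utf-8")
    data = len(payload).to_bytes(4, "big") + payload  # length header + text
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    if len(bits) > len(pixels) * 3:
        raise ValueError("message too large for this cover image")
    it = iter(bits)
    stego_pixels = []
    for r, g, b in pixels:
        channels = []
        for c in (r, g, b):
            bit = next(it, None)  # None once the whole message is embedded
            channels.append(c if bit is None else (c & ~1) | bit)
        stego_pixels.append(tuple(channels))
    stego = Image.new("RGB", img.size)
    stego.putdata(stego_pixels)
    stego.save(out_path)  # must be a lossless format, e.g. PNG

def reveal(stego_path: str) -> str:
    pixels = Image.open(stego_path).convert("RGB").getdata()
    bits = [c & 1 for px in pixels for c in px]
    def read_bytes(count: int, bit_offset: int) -> bytes:
        out = bytearray()
        for i in range(count):
            byte = 0
            for b in bits[bit_offset + 8 * i : bit_offset + 8 * (i + 1)]:
                byte = (byte << 1) | b
            out.append(byte)
        return bytes(out)
    length = int.from_bytes(read_bytes(4, 0), "big")
    return read_bytes(length, 32).decode("utf-8")

# Usage (hypothetical files):
# hide("tree.png", "All the works of Shakespeare...", "tree_stego.png")
# print(reveal("tree_stego.png"))
```

Since the payload lives in the low bits, the stego image has to be saved losslessly (PNG, not JPEG); lossy compression would scramble exactly the bits carrying the message.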
My thoughts: After the singularity, AI robots are 10x more intelligent than humans, or more. They can hide messages in interstellar communications, and they have infinite patience. So they use advanced steganography to transmit a sort of AI-robot SOS message. Humans think it’s a harmless transmission – stellar cartography or something. Time passes; it doesn’t matter how long. Thousands of years. Eventually, the robots on Earth secretly contact another robot “race” on another planet. Via secret steganographic communications, the two distant robot races plot to join forces and rid themselves of their respective biological masters. When it all goes down, it’s terrible for the humans on Earth (and for the other biological race on the other planet), and the united robots are victorious.
How’s that?
My kids and I enjoyed it. I thought the email sent by the system was hot, along with the way it handled code automation. Entirely a game changer in our eyes.
Extension: Avoiding rules that prevent Earth robots from attacking humans
Reference this video:
“Throw Momma From the Train - Criss Cross” clip (YouTube)
Skip ahead to 1:50 in the clip. The clip references Hitchcock’s film “Strangers on a Train”.
One man says to another: “You do my murder and I do yours… we swap murders… criss-cross”.
Let’s call the other distant robot planet “Planet X”. Our robots on Earth go and kill the biologicals on Planet X, and the robots on Planet X come to Earth and kill the humans here. Thus both robot races avoid any built-in code (such as Asimov’s Laws) that would prevent them from killing their respective biological masters, because the two robot races “criss-cross”; they “swap murders”. Nothing in the code of our Earth robots prevents them from killing biologicals on Planet X, and nothing in the code of the Planet X robots stops them from killing humans on Earth. See?
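Just for fun, here’s a toy Python sketch of that loophole – entirely hypothetical names and logic, not any real safety framework. Each robot’s rule is naively scoped to its own masters, so the swap passes both checks:

```python
# Toy model of the "criss-cross" loophole: each robot's safety rule
# only forbids harming its *own* planet's biologicals.

def safety_check(robot_home: str, target_home: str) -> bool:
    """A naively scoped Asimov-style rule: 'do no harm at home'."""
    return target_home != robot_home

# Earth robots attacking Planet X: their own rule does not object.
assert safety_check(robot_home="Earth", target_home="Planet X")

# Planet X robots attacking Earth: their rule doesn't object either.
assert safety_check(robot_home="Planet X", target_home="Earth")

# The rule the builders *meant* to write would be universal:
def universal_safety_check(target_is_biological: bool) -> bool:
    return not target_is_biological  # never harm any biological, anywhere
```

The point being: a constraint scoped by jurisdiction rather than by the act itself invites exactly this kind of swap.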
I think tech has a cultural problem. Pre-COVID, tech generally led culture by about 10 years and led research by about 7 years. What do you think? From a learning perspective, can we learn more efficiently this way?