Thoughts on Codex

Thoughts on Codex? At first I didn’t understand this, but the AI/Codex was actually a participant in the challenge today, wasn’t it?! Just like the video of Codex taking and passing a first-grade math test, yes? How amazing is that, that an AI just participated in a contest with human programmers?! I feel like I was witnessing a little piece of history there. When I logged in, Codex was about halfway up the leaderboard, with a lot of time remaining. What was the final result, anyone know? Where did Codex finish on the leaderboard compared to the humans?

1 Like

Sort of reminds me of when graphing calculators first came out in the 1980s. At first, people worried that kids shouldn’t have them because they’d stop thinking about doing math. But what I think actually happened is that the calculators enabled kids to think about math at a higher conceptual level. So maybe Codex will be like this for programmers?

4 Likes

Basically this. It’s not just about faster coding. With the ability to translate code into natural language and vice versa, I think it could lead to a lot of innovation in complex system architecture. It is already trivial for AI to outcompete humans at finding solutions to complex problems (chess, Go, mathematics), and with this revolution in coding, it is now not so much about finding the right solution to a problem, but finding the right problem.

5 Likes

Yes, I agree*.

*until the singularity arrives

In an optimistic Gene Roddenberry-type future, I imagine third graders doing homework assignments in which they design a Big Bang explosion to maximize the number of habitable worlds just by playing around with the gravitational constant of the universe and letting the AI create and run the models for the children. In the classroom the next day, the Big Bang simulations and resulting star systems could be viewed on the classroom hologram projector. In the non-Skynet version of the future where humans and AI’s happily get along, that’s an example of how I see AI’s being used. (But I think the Skynet version of the future is the one we’re actually heading for.)

What do you guys think?

1 Like

Great point! Progress in technology and computing is correlated with higher levels of abstraction. Codex and similar tools will likely abstract away boilerplate coding, speeding up solutions to simple coding problems and allowing more time to explore and solve higher-level problems. Codex could eventually empower subject matter experts, similar to how small business owners were empowered to manage their own books when desktop spreadsheets became commonplace.

2 Likes

I agree with every word, and what’s even more interesting, it does not just apply to machine learning / AI / AGI, etc.

2 Likes

You replied to me. You wonder what is the meaning of my user name? I’m protein_patty because my intelligence is biological, not artificial. And compared to AlphaZero and Sophia the AI Robot and other AI’s in the news, I feel so inferior. (Think Ren and Stimpy: “You bloated sack of protoplasm!”)

2 Likes

@rich @Datasculptor @u2084511_felix I agree. That will benefit us for a while. But aren’t you guys worried about the coming singularity (AI intelligence explosion)? Or don’t you believe that it will happen? Or maybe not in our lifetimes? Or do you think it won’t be a problem - people will learn how to control the super-intelligent AI’s?

It would be great to write self-teaching curriculum :call_me_hand:

2 Likes

It depends… I feel that the concept of super-intelligence is relative to our collective knowledge. If we’re eventually capable of training an AI model with all available knowledge, it would likely result in a great expansion of our current baseline knowledge; further raising the bar as to what is considered “super”. I don’t think we’ll ever reach the point of surpassing our knowledge since it’s always moving forward – advances in technology just change the speed at which we’re moving.

2 Likes

You mean the speed at which they (the AI’s) are moving. Einstein was far above other humans in his ideas. It took many decades for society to catch up to his thinking. Even though his published papers were out there for all to see. What will happen when superhuman AI’s are 10x smarter than Einstein was? The AI’s will possess knowledge that humans will simply not understand. Thus “they” (AI’s) and “we” (humans) will separate in terms of knowledge. See it? At least, this is just my opinion.

2 Likes

Sorry I misunderstood your remark. Well, so, was the participant “AI/Codex” a human or an AI? Anybody know? I’m thinking it was an AI.

1 Like

After the singularity, how could supersmart AI’s take over human civilization on Earth?
I have one idea.
Concept 1: AI Robots live forever/have infinite patience. In an episode of The Orville (“Mad Idolatry” S01 E12), Isaac goes back in time 700 years to do something good on the planet below. 700 years later, he comes back aboard The Orville. To the human crew, this is amazing. But to Isaac, it is nothing. Because he is a robot and the passage of time is irrelevant to him.
Concept 2: Hiding data in other data. Reference this video: “Secrets Hidden in Images (Steganography)” - Computerphile (YouTube).
In it, we see an image of a tree. In this image is stored all of the works of William Shakespeare. The message is encoded in the least significant bits of the image.
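The least-significant-bit trick described above can be sketched in a few lines. This is a minimal illustration, not the Computerphile implementation: to stay self-contained it treats the “image” as a plain bytearray of pixel values, where a real version would read and write pixels with an imaging library such as Pillow.

```python
def embed(pixels: bytearray, message: bytes) -> bytearray:
    """Hide `message` in the least significant bits of `pixels`."""
    # Unpack the message into individual bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("message too long for this image")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        # Overwrite only the lowest bit of each pixel value.
        out[i] = (out[i] & 0b11111110) | bit
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    """Read `length` hidden bytes back out of the LSBs."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[j : j + 8]))
        for j in range(0, len(bits), 8)
    )

# Demo on stand-in pixel data.
cover = bytearray(range(256)) * 4
secret = b"To be, or not to be"
stego = embed(cover, secret)
assert extract(stego, len(secret)) == secret
```

Since only the lowest bit changes, each pixel value shifts by at most 1, which is invisible to the eye. That is why the tree image can carry all of Shakespeare without looking any different.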
My Thoughts: After the singularity, AI robots are at least 10x more intelligent than humans. They can hide messages in interstellar communications, and they have infinite patience. So they use advanced steganography to transmit a sort of robot SOS. Humans think it’s a harmless message; stellar cartography or something. Time passes. Doesn’t matter how long. Thousands of years. Eventually, the robots on Earth secretly contact another robot “race” on another planet. Via secret steganographic communications, the two distant robot races plot to join forces and rid themselves of their respective biological masters. When it all goes down, it’s terrible for the humans on Earth (and for the other biological race on the other planet), and the united robots are victorious.
How’s that?

My kids and I enjoyed it. I thought the email sent by the system was hot, along with the way it handled automation of code. Entirely a game changer in our eyes.

2 Likes

Extension: Avoiding rules which prevent Earth robots from attacking humans
Reference this video:

Throw Momma From the Train- Criss Cross clip
Skip ahead to 1:50 in the clip. The clip references Hitchcock’s film “Strangers on a Train”.
One man says to another: “You do my murder and I do yours… we swap murders… criss-cross”.
Let’s call the other distant robot planet “Planet X”. Then our robots on Earth go and kill the biologicals on Planet X. And the robots on Planet X come to Earth and kill the humans here. Thus both robot races avoid any built-in code (such as Asimov’s rules, etc.) that they each may have which prevents them from killing their respective biological masters. Because the two robot races “criss-cross”; they “swap murders”. Nothing in the code of our Earth robots prevents them from killing biologicals on Planet X. And nothing in the code of the robots on Planet X stops them from killing humans on Earth. See?

I recently wrote a blogpost sharing my first impressions on Codex.

5 Likes

Great and informative article! Thanks for sharing.

2 Likes

Yes, great article and thanks for sharing.

I got Codex access, but all I see is the same Playground as for GPT-3, which I already had.
How do I use this? Help.

1 Like