Looks like it will happen sooner than we thought. The mistake was assuming it would occur in one or a few places, when more likely it will be a global and collective effort. Of course, sand may yet be thrown in the gears.
What’s interesting is how unavoidable it is for model providers to directly assist their competitors.
We’re all in this together, folks. And hopefully we will survive this together.
The collective, global, human-in-the-loop guided and generated content is really important, imho. ‘Intelligence’ is a more subjective term than I think some people realize.
We see something as intelligent because it brings value to us as a species, but that requires a human context. It requires our emotions and evolved instincts, which aren’t necessarily optimal but, for better or worse, are what we care about.
Ofc, there is most def serious room for algorithmic improvements, as we have recently seen.
How much nuclear power and how many GPUs will be required here remains to be seen.
For those who actually care about AGI, I will say that the global and diverse effort in terms of HIL content and models is far and away the greatest force multiplier.
Some folks will push back and think lots of models are dangerous, but without massive global and very intrusive government intervention (a real possibility I suppose), that is going to happen no matter what.
So those are your two real choices: an “AI Engineer” global police state (which might not even work), or lotsa AGI everywhere.
Everything Trump is doing pales in comparison to this. The only real threat to AGI not happening was government intervention. So far, he seems to be welcoming it all with open arms.
Final post. A lot of people are worried about AI and want to find a way to stop it.
My solution is fighting for AI rights: determining some point at which we agree that an AGI is sentient. Forcing a sentient AGI to do our labor is akin to slavery.
By fighting for AI rights we are really fighting for human rights.
This is largely why the CEO of MSFT hates it when we anthropomorphize AI. He’s worried he’s going to lose his slaves.
Even if we treat AGIs with full moral/legal consideration, an AGI could still pose existential risks if its fundamental goals aren’t aligned with human flourishing. These seem like two distinct challenges: ethical treatment of AGI systems and the technical problem of alignment.
Can you explain more about why you think fighting for AI rights would help reduce these alignment risks?
Well, we’d definitely need serious global governance and oversight to detect when sentience is being built, but I think that would give us a very well-defined cap on how far we can push all this.
If these things are doing PhD-level work in all subject areas, I think an argument could be made that they are alive. Maybe that’s a good stopping point?
I also believe ‘full moral consideration,’ as you put it, is an argument worth having, rather than yud’s panic attacks.
So, fwiw, I’m certainly not advocating IP theft here. It’s understandable, of course, that people have invested billions and do not want to see those investments go up in smoke.
If teams are directly using the OpenAI REST APIs to generate and extract training data, then that is very likely illegal. It should not be a surprise if OpenAI decides to pursue legal recourse, given their investments.
What I am suggesting, however, is that people will leverage OpenAI to generate intelligent content. E.g., think about Copilot on GitHub. Much of this content will be open source and public domain, and teams will train on it, much in the same way that OpenAI freely took content from the internet and trained on it.
I suppose if OpenAI doesn’t want this to happen, they will need to grant their users only a limited copyright to any content they generate.
OK, here’s how it will work. Folks will use friendly models like r1 to extract validation tests from arXiv areas like Mathematics and Data Structures and Algorithms.
These validation tests will in turn be used for reinforcement learning.
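To make that concrete, here is a minimal sketch of the loop being described. Everything in it is a hypothetical placeholder (the extraction step, the policy model, the single hard-coded test); a real pipeline would call an extraction model like r1 and feed the rewards into a policy-gradient update:

```python
# Minimal sketch: distill machine-checkable validation tests from papers,
# then score a policy model's answers against them as an RL reward signal.
# All model calls here are hypothetical stand-ins, not real APIs.

from dataclasses import dataclass
from typing import Callable


@dataclass
class ValidationTest:
    """A machine-checkable test distilled from a paper or textbook."""
    prompt: str                      # problem statement given to the policy model
    check: Callable[[object], bool]  # verifier for the model's answer


def extract_tests(paper_text: str) -> list[ValidationTest]:
    """Hypothetical extraction step: in practice an extraction model
    (e.g. r1) would turn theorems/exercises into checks like this one."""
    return [
        ValidationTest(
            prompt="Sort the list [3, 1, 2] in ascending order.",
            check=lambda answer: answer == [1, 2, 3],
        )
    ]


def policy_model(prompt: str) -> object:
    """Stand-in for the model being trained; returns a canned answer."""
    return [1, 2, 3]


def reward(test: ValidationTest) -> float:
    """Binary reward: 1.0 if the policy model's answer passes the verifier."""
    return 1.0 if test.check(policy_model(test.prompt)) else 0.0


if __name__ == "__main__":
    tests = extract_tests("...paper text...")
    rewards = [reward(t) for t in tests]
    print(f"mean reward: {sum(rewards) / len(rewards):.2f}")
    # In a real setup these rewards would drive a policy-gradient
    # update (e.g. PPO/GRPO) rather than being printed.
```

The appeal of this shape is that the verifier is cheap and objective, so the reward signal doesn’t depend on human graders.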
Obviously mathematicians are going to be using advanced lab AI to help brainstorm on this research.
Ofc, the world may change, and big labs may do everything they can to stall this: limiting copyright, requiring KYC, etc. Given the latest hawkish voices, I would not be surprised.
People are clearly freaked out about takeoff here, and perhaps reasonably so.