OpenAlignment, a company/nonprofit OpenAI needs to create

Since I haven't been accepted to the Forum.OpenAI.com forum yet, I'm posting this here, as I feel this is important, is something OpenAI needs to create for the world's benefit (and its own), and is something OpenAI is already participating in, whether it knows it or not.

OpenAI needs to create a nonprofit called OpenAlignment, both to help introduce AI technology and safeguard against its harmful effects, and to help develop technologies that counter the negative effects of newly created AI systems.

This nonprofit would not only aim to protect humanity from the negative effects of AI technology, but also to introduce it in a beneficial manner.

Let me explain.

With the release of Sora, and more specifically Voice Engine, I've noticed a concerted effort by OpenAI not only to showcase but also to withhold transformative technology, knowing that it's a double-edged sword, and allowing the world to come to grips with the new tools at its disposal.

Sora demonstrated to the world, and more importantly to people outside of the development curve, that AI is something they need to pay attention to. Many were shocked, and they expressed that shock in anger and rejection, but much like the printing press, this technology is not going away.

OpenAI, probably more surprised by that reaction than it expected to be, stated that it would not be making Sora "broadly available" soon, as it wants to engage policymakers, educators, and artists before releasing it publicly.

On social media, the overall feeling expressed by OpenAI was that the world needed to get used to the technology before it would be released. THIS IS THE KEY POINT OF MY POST.

With the release of Voice Engine, a similar sentiment has been expressed. A Wired story noted that the technology was "not particularly new," but quoted OpenAI as saying that "in line with our approach to AI safety and our voluntary commitments, we are choosing to preview but not widely release this technology at this time."

I want to point out (anecdotally, maybe, but consistently anecdotally) that in my experience, most people outside of the AI space (those who don't make a habit of following AI news) are aware mainly, and often only, of OpenAI's work in the AI space.

This is a result of the media constantly trumpeting the company's name any time AI is mentioned; when AI is discussed on 24-hour news channels, the OpenAI logo is quite often flashed between talking points.

We are then presented with two facts. First, OpenAI has shown that, whether for altruistic reasons or simply out of a desire for business continuity, it wishes to release its technology gradually upon the world, to avoid consequences that could not only negatively affect the world but also, by association, negatively affect the company, whether or not there is culpability.

And second, that the general public at this point in time is only aware of AI advancements when OpenAI makes them. (There are other actors vying for the same attention, but OpenAI, at this moment, commands the world's attention and respect.)

This places OpenAI in a unique and advantageous position.

This is the point where a nonprofit that would share part of OpenAI's name, "OpenAlignment," should be created: it would pursue the creation of AI technology to counter the negative effects of AI, and would also make it a priority to explore policies that allow the safe introduction of AI into society.

This company could also showcase various products being created by OpenAI (and other companies) and, in doing so, command the world's attention toward any safety concerns about that new technology.

By creating this nonprofit, OpenAI would help defuse the criticism that will continue to grow against not only AI but OpenAI itself, and show that it has a genuine desire to develop this technology safely.

The goal would be to have OpenAI, Google, and other large corporations donate to fund this organization (but not have influence over its operations). It would be chaired by pro-alignment leaders such as Ilya Sutskever, whose expertise would be needed to create technologies that counter the negative effects of AI, along with other thought leaders in the alignment space.

From the OpenAI Voice Engine blog page we can see many of the policies and strategies that would be required when showcasing or highlighting various new technologies.

(See the end of this post for the screenshot referenced by the following bullet points.)

  • The first bullet point demonstrates the safety precautions that OpenAI would like businesses to take to safeguard against misuse of its latest technology. This is something that should be done with every release of AI technology, much like the Safety Data Sheets that accompany hazardous materials.

  • The second bullet point expresses the need to investigate how this particular technology impacts individuals, since it makes use of qualities derived specifically from people.

  • The third bullet point is much broader and describes the need to educate the public about the possible dangers of the technology.

  • And the fourth bullet point describes the need for the development of accompanying technology.

Thanks for reading. This was put together in one sitting; I imagine there's a lot more that could be said and incorporated into this idea, but I wanted to get something into words to start the conversation, in case anyone else feels similarly.


Hello?

Thanks for posting this, as it brings an important facet to the discussion.

Though your “hello” feels a bit condescending in the context of a forum.

This “blog post” was written in JULY of 2023.

I’m sure you’re aware that four months later, OpenAI fired Sam, with a myriad of rumors surrounding that dismissal, some of which hinted at AI safety.

Ok, but say AI safety had nothing to do with it. Where's Ilya now? The entire internet is curious about his role at OpenAI now.

So the ground has shifted dramatically since that blog post. Sora and Voice Engine were showcased and shook the world. The blog post you linked describes a broad approach to AI alignment, but we can now see that the various products that will continue to come to fruition are where individual efforts are going to be needed.

And all we have is a random blog post about alignment. That's 0.0001% of what the public is going to want to see once jobs begin to shift and the media and technical landscapes begin to change, dramatically.

So instead of an internal department at OpenAI (does that even exist anymore, anyway?), start a separate company that retains the initial spirit of OpenAI but has the autonomy and transparency to really dig into these issues.

It could even be a place that leads recruitment for like-minded people who want to help with alignment efforts.

Microsoft and Google will eventually start to feel pushback from AI development as AI begins to filter into the world and begins making sometimes painful changes.

If Google and Microsoft (along with others) donate funds to OpenAlignment, they can point to that entity and say, "Look, we know alignment is a serious issue, and that's why we're funding a company whose singular goal is alignment, a company separate from ourselves."

And there's a lot more to this than simply developing alignment software. There's going to be a need to work with Congress to develop safety protocols. There's going to be a need to educate the public, counter misinformation about AI, and bring to light how some AI tech is being misused.

OpenAI is going to be too busy shipping products and developing AGI to devote enough time to this. A new company needs to be created. Why shouldn't OpenAI innovate and create that company first?

For post-shakeup news, one can just pull up some Google results:


Aschenbrenner and two other members of the Superalignment team who spoke to WIRED, Collin Burns and Pavel Izmailov, say they are encouraged by what they see as an important first step toward taming potential superhuman AIs. “Even though a sixth grader knows less math than a college math major, they can still convey what they want to achieve to the college student,” Izmailov says. “That’s kind of what we’re trying to achieve here.”

The Superalignment group is co-led by Ilya Sutskever, an OpenAI cofounder, chief scientist, and one of the board members who last month voted to fire CEO Sam Altman before recanting and threatening to quit if he wasn’t reinstated. Sutskever is a coauthor on the paper released today, but OpenAI declined to make him available to discuss the project.

After Altman returned to OpenAI last month in an agreement that saw most of the board step down, Sutskever’s future at the company seemed uncertain.

“We’re very grateful to Ilya,” Aschenbrenner says. “He’s been a huge motivation and driving force,” on the project.

OpenAI’s researchers aren’t the first to attempt to use AI technology of today to test techniques that could help tame the AI systems of tomorrow…

Yes, this key takeaway right here. Since then there's been little to no mention of the alignment group or Ilya.

But even if nothing had changed, there had been no dismissal, and this alignment group were still in its original form, its scope isn't broad enough.

Again, I appreciate you responding, but I feel that you skimmed my post and your takeaway was "person wants OpenAI to work on alignment issues," and sure, that's true, but that's just one aspect of what I'm trying to establish.

There needs to be an external alignment organization that's as prominent as OpenAI itself, and it needs to address the issues AI poses to society, not just the alignment of its most powerful LLM for safety… (Though every AI company should have a Superalignment team.)

  • PRESCIENCE: Alerting the world to new AI technology, like Sora. OpenAI is withholding Sora to let the world get used to the idea that the technology exists. Great. But there's a lot more out there that the "world at large" needs to be educated about and on the lookout for.

  • RISK MANAGEMENT AND MITIGATION: The organization needs to begin the dialogue for policy discussion around various technologies. I don't know if you've noticed, but the US government at the legislative level is somewhat polarized and slow to react. Get in front of the issue and suggest possible legislation to avoid negative consequences. (Example: every manufacturer should be creating an SDS for their application, listing any possible negative effects from it.)

  • COLLABORATION: The organization should be a kind of technological hub where multiple companies and policymakers work together to address these issues. Google, Microsoft, and Anthropic should want to collaborate on ways to keep the public safe from the products they're creating. If they only decide to do this once something irreversible happens, they will lose the trust of the public. They need to get ahead of this; there will be a time when fingers begin to point, looking for an entity to blame for any negative changes happening. That's going to happen.

  • TECHNOLOGICAL OPTIMISM: And aside from all the negatives, OpenAlignment should also work to counter misinformation about AI and help show the world the benefits through actual use cases. And just like OpenAI is doing with Sora, it should be an entity that invites members of the public to use and experiment with new technology, in order to bridge the gap between the creators of the technology and the world they wish to introduce it to.

  • And of course, continue the work of the original “Superalignment group”, which will also have its hands full. But what I’m proposing is so much bigger than the scope of that group.

We need an organization that will be working to align AI technology with society, not just align the inner workings of the individual models. And all the biggest players need to contribute to it, not just with money, but with human capital as well.

Edit: Ironically, though obviously completely unrelated, shortly after this discussion Ilya left OpenAI to form Safe Superintelligence Inc.
