Ukraine Ban (revisited)

I’m not sure that’s true. If OpenAI were a typical company, it would be, but they’ve intentionally shouldered a lot of responsibility for safe and ethical AI, along with the idea that AI can act as a public good for society at large. It’s built into their very corporate structure: there’s a cap on profit-taking for shareholders, and profits can be funneled into other activities that benefit society. Given that, they have decided to steer away from particular topics, and those may well be sound decisions in light of their goals and mandate. However, because of the larger responsibilities they’ve intentionally taken on, I think those decisions are also open to public discussion. In fact, this is probably the single best time for nations, institutions, and others to engage in dialogue about AI, where it’s going, and how to nudge it along safely so that it may benefit everyone.