Introducing Sora, our text-to-video model

We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction.

Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.

Today, Sora is becoming available to red teamers to assess critical areas for potential harms and risks. We are also granting access to a number of visual artists, designers, and filmmakers to gather feedback on how to advance the model to be most helpful for creative professionals.

We’re sharing our research progress early to start working with and getting feedback from people outside of OpenAI and to give the public a sense of what AI capabilities are on the horizon.


  1. Is Sora available right now to me?
    A: No. Sora is not yet publicly available.

  2. Is there a waitlist or API access?
    A: Not as of February 16th. Stay tuned!

  3. How can I get access to Sora?
    A: We have not yet rolled out public access to Sora. Stay tuned!