Sora default prompting is already available

When GPT was asked to help:
“Hi GPT! Please teach me how to prompt the video AI Sora.”

Here is the answer GPT shared:
“Prompting Sora, or any AI designed to generate or manipulate video content, involves crafting a clear, detailed description of what you aim to achieve with your request. Here are some general guidelines and a template to help you create effective prompts for Sora or similar video AI tools:

1. Define Your Objective

Be clear about what you want the AI to do. Are you looking to generate a new video, edit an existing one, create animations, or something else? Your objective will shape the structure of your prompt.

2. Provide Context

If your request is about generating or editing specific content, describe the context. This includes the setting, characters, and any narrative elements you want to include or modify.

3. Specify Visual Details

Include descriptions of any visual elements such as colors, lighting, camera angles, and styles. The more detail you provide, the closer the output will be to your vision.

4. Mention Desired Length and Format

If you have a specific length (in seconds or minutes) or format (aspect ratio, resolution) in mind, mention it. This is crucial for the AI to produce content that fits your requirements.

5. Outline Audio Preferences

If your video requires specific audio elements like background music, voiceovers, or sound effects, describe these in detail. Specify if you want the AI to generate these elements or if you will provide them.

6. Consider Ethical and Copyright Guidelines

Make sure your prompt complies with ethical standards and copyright laws. Avoid requesting content that infringes on copyrights or involves sensitive topics without appropriate context.

Template for a Video AI Prompt

Objective: [Briefly describe what you want the video AI to do.]

Context: [Provide the setting, characters, and narrative for your video. Mention any relevant themes or messages.]

Visual Details: [Describe the visual style, colors, lighting, and any specific elements like objects or effects.]

Length and Format: [Specify the desired length and format of the video.]

Audio Preferences: [Detail any music, voiceovers, or sound effects you want included.]

Additional Instructions: [Any other requirements or preferences not covered above.]
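If you want to reuse the template for many prompts, the fields above can be assembled programmatically. This is a minimal, illustrative Python sketch — the `build_prompt` helper and its field names are hypothetical conveniences, not part of any Sora API; the result is just a labeled text block you would paste into the tool:

```python
# Minimal sketch: join the labeled template fields into one prompt string.
# This helper is illustrative only -- Sora exposes no such API; it simply
# concatenates "Label: value" sections, skipping any field left empty.

def build_prompt(**fields) -> str:
    """Assemble template fields into a single prompt, in a fixed order."""
    order = ["Objective", "Context", "Visual Details",
             "Length and Format", "Audio Preferences",
             "Additional Instructions"]
    parts = []
    for label in order:
        # Keyword names mirror the labels: "Visual Details" -> visual_details
        value = fields.get(label.lower().replace(" ", "_"), "").strip()
        if value:
            parts.append(f"{label}: {value}")
    return "\n\n".join(parts)

prompt = build_prompt(
    objective="Generate a 1-minute video showcasing a futuristic cityscape.",
    context="A fly-through of a vibrant, futuristic city at dusk.",
    visual_details="Neon palette, towering skyscrapers, flying cars.",
    length_and_format="1 minute, 16:9 aspect ratio, 1080p.",
    audio_preferences="Ambient electronic music with futuristic effects.",
)
print(prompt)
```

Each call produces one ready-to-paste prompt, and leaving a field out simply omits that section.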

Example Prompt for Sora

Objective: Generate a 1-minute video showcasing a futuristic cityscape.

Context: The video is a fly-through of a vibrant, futuristic city at dusk, highlighting advanced technology and architecture.

Visual Details: The city should have a neon color palette with towering skyscrapers, flying cars, and bustling streets. The camera moves smoothly through the city, with close-ups on interesting technological elements.

Length and Format: 1 minute, 16:9 aspect ratio, 1080p resolution.

Audio Preferences: Ambient, electronic background music with futuristic sound effects for the city environment and technology.

Additional Instructions: Ensure the video has a positive, awe-inspiring tone, focusing on the marvels of future urban life.

Adjust the template and example according to your specific needs and the capabilities of the video AI tool you’re using. Remember, the quality of the output largely depends on how well you communicate your vision through the prompt.”

Even though Sora is not yet available to the public, this information can be very useful for anyone who wants to prepare some prompts and send them to Sora when the time comes. :slight_smile:


That’s wonderful sir. So it includes audio prompting as well?


Made-up junk from a chatbot is of no use to Sora (whose actual prompts are shown in the blog) or to this forum.


Hi @_j. I thought it was an official post?

Nope, try asking ChatGPT for the proper method to prompt the new video AI ‘Sora Ultimate’.
You will likely get a similar reply.

It is in fact a bit of bad luck that you didn’t get a reply like “due to my training cut-off date I am not aware of this new AI but you can follow the usual best practices for starters…”

The model ‘knows’ nothing that has not been part of its training data. And even then, it will only guess what you want to hear.