I just thought I’d make some points for future potential users and the VFX/Animation industry. Personally, I don’t see Sora replacing the VFX and animation industry. I’ve been using computers since the 486 came out.
I’ve been in closed betas of animation and character creation software, getting in by proof of work: showing examples of how to use programs in ways they weren’t designed for.
In my mind, Sora should be used as a plugin with character creation and animation tools, and I’ll break down why that makes sense, starting with the animation side.
- Animation programs usually have labels that could be used to identify props, characters, positions, scale, static vs non-static, etc. Animators should be able to use primitives and their labels to direct Sora faster than typing out angles, scale and so on, especially with existing and older projects (see the label-export sketch after this list).
- Sora could practically predict the story from the contents of the project, but the animation program, or the plugin itself, would need a text field per project where the story order is described, for non-destructive editing.
- Sora should generate and give users a seed/generator value, or a Sora description for the style of terrain, trees or houses, in case the project labels weren’t descriptive enough.
- If a user erases a tree, the ground area is regenerated only to match and fill in the space, like region rendering (see the region-mask sketch after this list).
- You could tell the AI to exaggerate the arms and legs at a particular keyframe.
- Render stills for parts of a scene to test, until you’re ready to pay the cost of generating the full scene.
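To make the label idea from the first bullet concrete, here is a minimal sketch of what a plugin-side exporter could look like. It is only an illustration of the workflow I’m describing: the `SceneProp` structure, its fields and the idea that Sora would accept a prompt built this way are all my own assumptions, not any real Sora or animation-package API.

```python
# Hypothetical sketch: turning an animation project's existing labels into a
# text prompt, so the animator doesn't have to type positions/scale by hand.
from dataclasses import dataclass

@dataclass
class SceneProp:
    label: str                    # the label the animation package already stores
    position: tuple               # (x, y, z) in scene units
    scale: float
    static: bool
    style_seed: int | None = None # optional seed returned by a previous generation

def props_to_prompt(props: list[SceneProp]) -> str:
    """Build a prompt fragment from labeled primitives instead of free-form typing."""
    lines = []
    for p in props:
        motion = "static" if p.static else "animated"
        seed = f", style seed {p.style_seed}" if p.style_seed is not None else ""
        lines.append(f"{p.label}: {motion}, at {p.position}, scale {p.scale}{seed}")
    return "Scene contents:\n" + "\n".join(lines)

# Example: two labeled primitives stand in for a tree and a character.
scene = [
    SceneProp("oak_tree_01", (12.0, 0.0, -4.5), 1.2, static=True, style_seed=884213),
    SceneProp("hero_character", (0.0, 0.0, 0.0), 1.0, static=False),
]
print(props_to_prompt(scene))
```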
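Similarly, a rough sketch of the "erase a tree, refill only that patch" bullet: a soft-edged mask over the erased region, so only those pixels would be regenerated and blended back, like region rendering or inpainting. The bounding box, the feathering math and the use of NumPy here are just assumptions to show the idea.

```python
# Hypothetical sketch: build a soft-edged mask around an erased object so only
# that region is regenerated and blended back, like region rendering/inpainting.
import numpy as np

def region_mask(height: int, width: int, box: tuple, falloff: int) -> np.ndarray:
    """Return a float mask (1.0 = regenerate, 0.0 = keep) with a feathered edge."""
    y0, x0, y1, x1 = box                       # bounding box of the erased tree
    mask = np.zeros((height, width), dtype=np.float32)
    mask[y0:y1, x0:x1] = 1.0
    if falloff > 0:
        # cheap separable blur to feather the edge so the fill blends with the ground
        kernel = np.ones(falloff, dtype=np.float32) / falloff
        mask = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, mask)
        mask = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, mask)
    return np.clip(mask, 0.0, 1.0)

# Only the pixels where the mask is > 0 would be re-generated and composited.
m = region_mask(1080, 1920, box=(400, 900, 700, 1100), falloff=40)
```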
Now for Character building
- An AI tool could look at 3D characters from all different sides, take snapshots of the reimagined characters and their fidelity, and save them for reference with each project.
- Clothing items and their labels will help the AI with consistency, and may give it better references for changing materials and material states, i.e. new, damaged, static, non-static, translucent with animated volumetric emissive details, etc. Because all character props are identified, Sora could receive better text input to swap out materials, and this in turn could become a seed value added to a prop or clothing item’s description.
- With primitives, animators can direct the AI. With Gizmo Zone tools, labels and strength values, they would be able to isolate areas/zones where water crashes need intensity, or none at all, with falloff values (see the gizmo-zone sketch after this list).
- Gizmo Zone tools could also help primitives be identified better, such as a fracture weakness, and the zone and falloff of how far a collapse spreads.
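To show what I mean by Gizmo Zone tools, here is a rough sketch of the data a zone gizmo could carry and how a strength value with falloff might be sampled at a point in the scene. The class name, its fields and the linear falloff are assumptions on my part, purely to make the idea concrete.

```python
# Hypothetical sketch: a "gizmo zone" the animator drops into the scene to tell
# the AI where an effect (e.g. crashing water) should be intense, weak or absent.
from dataclasses import dataclass
import math

@dataclass
class GizmoZone:
    label: str        # e.g. "water_crash_zone"
    center: tuple     # (x, y, z) world position of the gizmo
    radius: float     # inside this radius the effect is at full strength
    falloff: float    # distance over which strength fades to zero past the radius
    strength: float   # 0.0 (suppress the effect) .. 1.0 (maximum intensity)

    def influence_at(self, point: tuple) -> float:
        """Strength of the effect at a world-space point, with linear falloff."""
        d = math.dist(self.center, point)
        if d <= self.radius:
            return self.strength
        if d >= self.radius + self.falloff:
            return 0.0
        # linear fade between the radius edge and the end of the falloff band
        t = (d - self.radius) / self.falloff
        return self.strength * (1.0 - t)

# Example: full-intensity water crash near the rocks, fading out 10 units later.
zone = GizmoZone("water_crash_zone", center=(50.0, 0.0, 20.0), radius=8.0,
                 falloff=10.0, strength=1.0)
print(zone.influence_at((55.0, 0.0, 20.0)))   # inside the radius -> 1.0
print(zone.influence_at((63.0, 0.0, 20.0)))   # in the falloff band -> 0.5
```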
Typing all of this out for the AI for a massive scene would be extremely slow, and users would have to constantly go back and regenerate with new prompts to make corrections, eating up valuable AI generation time, studio costs and higher fees paid towards Sora. It doesn’t make sense for studios to rely on creating their vision with text alone; the time it could take to generate the right scene could be the same as, or greater than, paying an animator to do it.
Maybe the devs have already thought about this? I just don’t want to wait another 1-10 years before customers start demanding the extra control they need. Character and animation tools would function as more of a focal point, an Art Director for Sora, and the two would complement each other.