I have found that when generating a video from an image, the quality is better if you include the original prompt that produced the image. Why is there no way to carry that prompt into the new generation automatically?
Why is there no way to keep context like with ChatGPT? Why does Sora lose context, and thus quality, in remixed generations?