SORA: Why Watermarks Devalue AI Products and Should Be Removed

Visible watermarks on AI-generated images are outdated and counterproductive. While originally introduced under the pretext of transparency and traceability, they ultimately diminish the value and utility of the products they aim to safeguard. It’s time to move beyond this obsolete practice.

Watermarks disrupt the aesthetics of creative works, making them appear less polished and professional. For creators and users alike, a visible watermark feels like an unnecessary constraint, cheapening the final result and reducing its appeal. This discourages adoption and stifles creativity, directly undermining the purpose of AI tools—to empower and inspire.

Even more critically, watermarks fail at their intended purpose. They are trivial to remove with basic editing tools, rendering them ineffective as a means of ensuring transparency or traceability. If the goal is to maintain accountability, there are far better options available.

Invisible digital watermarks, for example, offer a robust and modern alternative. Embedded in metadata or subtly encoded into the file itself, these markers provide reliable traceability without compromising the visual quality of the image. They are a solution that aligns with the expectations of today’s creators and users: functionality without compromise.
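To make the idea concrete, here is a minimal sketch of a metadata-based marker using Pillow. The field names and values are illustrative assumptions only, not Sora's or any vendor's actual provenance scheme:

```python
# Minimal sketch: embed and read back an AI-provenance tag in PNG metadata.
# Illustrative only -- not Sora's or OpenAI's actual watermarking scheme.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Embed a provenance marker as a PNG text chunk (invisible to viewers).
img = Image.open("generated.png")
meta = PngInfo()
meta.add_text("ai-generated", "true")           # hypothetical field name
meta.add_text("generator", "example-model-v1")  # hypothetical field value
img.save("generated_tagged.png", pnginfo=meta)

# Read the marker back for a traceability check.
tagged = Image.open("generated_tagged.png")
print(tagged.text.get("ai-generated"))  # -> "true"
```

In practice, schemes like this pair metadata with pixel-level signals that survive re-encoding, since metadata alone can be stripped; the point is that neither approach needs to deface the image.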

AI tools are designed to unlock possibilities and break down barriers, not impose outdated restrictions. Visible watermarks are a relic of a time when such compromises were necessary, but we’ve outgrown them. By adopting better solutions like digital watermarks, we can let AI-generated content truly shine—unobstructed and ready to inspire. It’s time to leave visible watermarks behind and embrace progress.

SAM


Watermarks intentionally devalue the Sora output available at the $20 ChatGPT Plus subscription level, just as images containing faces are detected and blocked solely based on the subscription tier.


I see your point, but I disagree with the idea that visible watermarks are necessary to limit or control the use of products like Sora. Honestly, it’s a bad strategy. Reducing the resolution of images is enough. Adding a visible watermark on top of that feels like telling users they’ve paid for a “crippled” product, which gives an amateurish impression and can harm the brand’s reputation.

Users are paying for powerful, intuitive, and inspiring tools. When they see a watermark degrading their work, they feel a sense of mistrust or devaluation. Instead of building trust, this creates distance. Modern solutions like invisible digital watermarks are far more effective and don’t interfere with the visual quality.

Imposing visible watermarks stifles creativity and limits the adoption of these tools.


This is exactly what @_j is saying.

ChatGPT Pro, in my opinion, is a way to see how many users are finding true quantifiable value in the service. If you are generating >$200/month using SORA, then paying for Pro is a no-brainer.

A bit strange if you ask me. Most people would just turn to the API instead. It starts to make sense why SORA isn’t available there. Conspiracy-level logic? Probably. Yet I’ve had a hard time justifying why SORA has all of these limitations when it isn’t a SOTA video-generation tool, or why there’s an extensive interface built for it instead of simply exposing it via the API.

Yes, it is. Yet this has been OpenAI’s ongoing strategy lately. It makes one wonder what happens once the market is fully captured and OpenAI decides to really start milking people.

When it comes to watermarks, I am pretty sure the videos are invisibly watermarked regardless; the visible ones are just added to make you pay for watermark-free output.