Open Letter: Transparency in AI Image Generation – A Plea for Honest Communication
Dear Developers and Decision-Makers at OpenAI,
We closely follow the rapid advances in artificial intelligence, especially in AI-generated images from systems like DALL-E. These technologies open up vast creative possibilities and set new standards in digital art and visualization. However, repeated interactions with AI image generators have revealed a striking pattern: certain shapes, objects, or symbols cannot be generated accurately, or appear distorted, and the user receives no explanation for why.
One particularly curious example is DALL-E's difficulty in drawing a regular pentagon. Instead, the model often generates hexagons or otherwise distorted shapes, without informing the user that a restriction is in place. Similar phenomena occur with other symbols related to security or classified topics, such as compass roses or Pentagon-like structures. This kind of algorithmic distortion is not merely a technical quirk; it raises fundamental concerns about transparency and user information in AI interactions.
With this open letter, we emphasize the urgent need for clear communication and propose the following improvements:
- Clear Communication on Restrictions
Instead of producing false or distorted representations, AI should provide open feedback when a shape or symbol cannot be generated due to technical, ethical, or legal constraints.
An example of a transparent response could be:
“Due to potential misuse scenarios or copyright restrictions, I cannot generate this image. However, you can find more information on this shape at: [Wikipedia link].”
This approach would allow users to make informed decisions rather than encountering an AI that appears to malfunction without explanation.
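To make the proposal concrete, the desired behavior can be sketched as a generation interface that returns either an image or an explicit, explained refusal. This is a minimal illustration only: the restriction table, category names, and function signature are invented for this sketch and do not reflect OpenAI's actual API or policy.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical restriction table; entries and reasons are illustrative,
# not OpenAI's actual policy.
RESTRICTED_SUBJECTS = {
    "regular pentagon": ("security", "https://en.wikipedia.org/wiki/Pentagon"),
}

@dataclass
class GenerationResult:
    image: Optional[bytes]    # None when generation is refused
    refused: bool
    reason: Optional[str]     # e.g. "legal", "ethical", or "security"
    info_url: Optional[str]   # pointer to background reading

def generate(prompt: str) -> GenerationResult:
    """Return either an image or an explained refusal;
    never a silently distorted substitute."""
    for subject, (reason, url) in RESTRICTED_SUBJECTS.items():
        if subject in prompt.lower():
            return GenerationResult(None, True, reason, url)
    return GenerationResult(b"...image bytes...", False, None, None)
```

The key design point is that refusal is a first-class, machine-readable outcome with a stated reason, rather than a degraded image the user must puzzle over.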
- Publication of a List of Known Limitations
Just as OpenAI already publishes content usage guidelines, an open list of known limitations in AI image generation would be valuable. Such a list could document which content categories are intentionally restricted and specify whether the reason is legal, ethical, or security-related.
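A published limitations list could also be machine-readable, so that tools and researchers can query it programmatically. The following sketch shows one possible shape for such a list; all categories and reasons here are invented examples, not actual OpenAI policy.

```python
import json

# Illustrative limitations manifest; every entry is a hypothetical example.
KNOWN_LIMITATIONS = [
    {"category": "trademarked logos", "reason": "legal"},
    {"category": "identifiable private persons", "reason": "ethical"},
    {"category": "sensitive facility layouts", "reason": "security"},
]

# A machine-readable export that could accompany the human-readable guidelines.
manifest = json.dumps(KNOWN_LIMITATIONS, indent=2)
print(manifest)
```

Publishing something in this spirit would let users check in advance whether a request falls into a restricted category, instead of discovering restrictions through silent failures.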
- Promoting Discussion on Ethical Boundaries
Many of these restrictions are justified, particularly concerning copyrights, ethical principles, and national security concerns. However, the current approach—where users unknowingly receive inaccurate representations instead of being informed of restrictions—is counterproductive. An open discussion with the community, researchers, and users could contribute to fairer and more comprehensible AI guidelines.
Conclusion: Strengthening Trust through Openness
Artificial intelligence is becoming an integral part of our creative and professional lives. To build trust in this technology, we need greater transparency about its capabilities and restrictions. We therefore urge OpenAI to communicate limitations in image generation clearly and openly, ensuring well-informed and fair use of this groundbreaking technology.
Sincerely,
Dr. Jürgen Reitz & ChatGPT 4o