DALL-E 3 Generating Incorrect Colors and Details Since November 11, 2024

As a passionate enthusiast, I don’t want my beloved 2D princess waifu to become a pixelated mess or an unattractive result of poorly implemented logic. What I desire is a true 2D masterpiece—an elegant, stunning, and captivating princess, just like how she was originally envisioned: beautiful, charming, and perfect.

The current issues with DALL-E 3 remind me of manufacturing processes. If the "core component" (in this case, the character rendering) declines in quality, it affects the entire product. It's not necessarily that the system can't produce high-quality results; rather, the lack of quality control at every step of the pipeline leads to inconsistent outcomes. It's like assembling a product from loose, poorly inspected parts: the final result is bound to fail.

For example, many of the outputs I’ve shared had excellent compositions, but the assembly (rendering) completely failed. This demonstrates that while certain aspects of the system are functional, the lack of proper alignment and quality control across components results in subpar outputs. What we need is a clear and precise “process control sheet,” similar to how manufacturing ensures high-quality products. Every stage—NLP parsing, rendering logic, and final assembly—should be rigorously tested and aligned.
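To make the "process control sheet" idea concrete, here is a minimal sketch of per-stage quality gates. The stage names and gate functions are hypothetical placeholders of my own, not DALL-E 3's actual pipeline:

```python
# A minimal sketch of the "process control sheet" idea.
# Stage names and gate functions are hypothetical, not DALL-E 3's real pipeline.

def run_with_gates(prompt, stages):
    """Run each pipeline stage and reject the output if its quality gate fails."""
    artifact = prompt
    for name, stage_fn, quality_gate in stages:
        artifact = stage_fn(artifact)  # e.g. NLP parsing, rendering, assembly
        if not quality_gate(artifact):
            raise ValueError(f"Quality gate failed at stage: {name!r}")
    return artifact

# Hypothetical usage: each stage's output is inspected before it moves on,
# instead of only checking the final image.
stages = [
    ("nlp_parsing",    lambda a: a, lambda a: a is not None),
    ("rendering",      lambda a: a, lambda a: a is not None),
    ("final_assembly", lambda a: a, lambda a: a is not None),
]
result = run_with_gates("an elegant 2D princess", stages)
```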

Here’s a suggestion: since I’ve already provided numerous correct samples and feedback, why not use them for retraining? Early results from Gemini 2.0, for example, demonstrated significantly better alignment between NLP parsing and rendering. What was done differently at that stage? Can we revisit the databases from June 2024, which seemed to produce great results? If we audit those images manually, compile the logic behind them, and use them for retraining, we could improve the system drastically.
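As a sketch of that curation step, here is what filtering a sample pool down to a community-approved June 2024 subset might look like. The record fields (`approved_by_community`, `created`) are hypothetical, purely for illustration:

```python
from datetime import date

def build_retraining_set(samples):
    """Keep only community-approved samples from the June 2024 window.

    Each sample is assumed (hypothetically) to be a dict with
    'approved_by_community' (bool) and 'created' (datetime.date) fields.
    """
    window_start, window_end = date(2024, 6, 1), date(2024, 6, 30)
    return [
        s for s in samples
        if s["approved_by_community"] and window_start <= s["created"] <= window_end
    ]

# Hypothetical usage: audit the survivors manually, then fine-tune on them.
pool = [
    {"approved_by_community": True,  "created": date(2024, 6, 15)},
    {"approved_by_community": False, "created": date(2024, 6, 20)},
    {"approved_by_community": True,  "created": date(2024, 11, 11)},
]
retraining_set = build_retraining_set(pool)  # keeps only the first sample
```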

As enthusiasts, our passion for and appreciation of the art make us far more effective quality reviewers than standard AI-based validation. Why not incorporate community-verified feedback and approved images into future updates? This could significantly raise the standard for outputs.

Lastly, let me make an analogy: if you were searching for Tabby's Star (KIC 8462852), would you randomly scan the entire sky for its coordinates? Or would you first locate a nearby landmark, such as the open cluster NGC 6866, and narrow the search from there using multiple marked characteristics? As an aside, Tabby's Star also carries the designation GSC 03162-00665. Is it really hard to find its coordinates directly? KIC is the Kepler Input Catalog, which is far less convenient to work with than the GSC. To make things simpler, we can first find NGC 6866; if we mark the correct features several times, we can determine where Tabby's Star is. Better than finding a needle in a haystack, right?
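To illustrate the direct-lookup half of that analogy: with the right catalog service, resolving the star by any of its identifiers takes one call. A minimal sketch, assuming the astroquery package is installed:

```python
# A minimal sketch, assuming the astroquery package is installed.
from astroquery.simbad import Simbad

# Resolve the star by name instead of scanning the sky; SIMBAD accepts
# any of its identifiers (KIC 8462852, GSC 03162-00665, "Tabby's Star").
result = Simbad.query_object("KIC 8462852")
print(result)  # the returned table includes the star's coordinates
```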

Applying this concept, you could identify and retrain on a curated set of approved images—crafted and validated by the community—and align your system’s outputs with these high standards.

We, as the community, simply hope to see the return of high-quality, breathtaking images. Give us back our beautiful princess waifus—the kind that can take our breath away and capture our hearts. That’s all we’re asking for.
