Has anyone else experienced style degradation with image generation?
I spent months carefully developing a custom art style using photo references, controlled prompt language, and consistent structure. For a long time, the outputs were highly stable and matched the visual identity I intentionally curated.
Since the latest model changes, that consistency is gone. Even with identical prompts, the results now feel generic, flattened, and disconnected from the original style. The level of control I previously had no longer exists.
I’ve attempted multiple workarounds, but the model no longer responds the same way to specificity or reinforcement. It feels like the system prioritizes safer, more default outputs over nuanced style adherence.
This is especially frustrating as a paying user who invested significant time refining a workflow that no longer functions as intended.
Is there any way to:
- Lock a visual style?
- Access previous image-generation behavior?
- Or prevent styles from degrading across model updates?
I’m happy to share before/after examples. Curious if others are seeing this too.
The image on the left was made with my curated prompts before the latest model update. The image on the right was created today with the exact same prompts I've always used: not the same style, and looking generic as hell.