Months of Curated Art Styles Broken by Recent Image Model Changes

Has anyone else experienced style degradation with image generation?

I spent months carefully developing a custom art style using photo references, controlled prompt language, and consistent structure. For a long time, the outputs were highly stable and matched the visual identity I intentionally curated.

Since the latest model changes, that consistency is gone. Even with identical prompts, the results now feel generic, flattened, and disconnected from the original style. The level of control I previously had no longer exists.

I’ve attempted multiple workarounds, but the model no longer responds the same way to specificity or reinforcement. It feels like the system prioritizes safer, more default outputs over nuanced style adherence.

This is especially frustrating as a paying user who invested significant time refining a workflow that no longer functions as intended.

Is there any way to:

  • Lock a visual style?

  • Access previous image-generation behavior?

  • Or prevent styles from degrading across model updates?

I’m happy to share before/after examples. Curious if others are seeing this too.

The image on the left was made with my curated prompts before the latest model update. The image on the right was created today with the exact same prompts, and it's not the same style at all, just generic as hell.

Have you tried uploading your preferred style image to the new model and having it extrapolate from there? Prompts don't translate directly, since the updated model may have different weights.


Thanks for the reply. I never deleted that conversation, so I went back and copied and pasted everything, and it still just gives me these generic-looking images. So much work to get where I wanted, and now it's all lost :sob::sob: