I’m experimenting with text-to-image generators (Midjourney, DALL·E, and Stable Diffusion via Automatic1111). I type the identical prompt each time, but the model spits out a totally different picture on every run.
What I’d like is a repeatable workflow where I can:
- Re-generate the exact same image later, bit for bit, for version control and client sign-offs (see the sketch after this list for the kind of determinism I mean).
- Make minor, controlled tweaks (e.g., change a color or add an accessory) without the entire composition shifting.
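To make concrete what I mean by "re-generate the exact same image", here is a minimal sketch using the Hugging Face diffusers library rather than my actual Automatic1111 setup; the model ID, prompt, and parameter values are placeholders, and I'm assuming that pinning the seed through a torch.Generator is the right lever:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: a fixed seed fed through a torch.Generator is what makes runs repeatable.
MODEL_ID = "runwayml/stable-diffusion-v1-5"  # placeholder; my local checkpoint may differ
SEED = 12345

pipe = StableDiffusionPipeline.from_pretrained(MODEL_ID, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

generator = torch.Generator(device="cuda").manual_seed(SEED)

image = pipe(
    prompt="a red vintage bicycle leaning against a brick wall",
    num_inference_steps=30,
    guidance_scale=7.5,
    generator=generator,
).images[0]

image.save("bicycle_seed12345.png")
```

If I run this twice on the same machine with the same library versions, should the two PNGs come out byte-identical, or is there more to lock down than the seed?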
My specific questions:
- Which settings and metadata do I need to lock (seed, sampler, steps, model hash, etc.) to guarantee pixel-perfect reproducibility?
- Midjourney and DALL·E don't seem to expose the seed reliably: are there workarounds, or should I switch to a self-hosted Stable Diffusion setup?
- If I do pin the seed, why do small changes to guidance scale or step count still warp the layout so much?
- Any best-practice scripts or templates for logging all parameters automatically? (I've included a rough sketch of what I have in mind after this list.)
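For that last point, this is roughly the kind of logging wrapper I'm imagining. It's only a sketch: log_generation, the field list, and the JSON-lines format are all my own placeholders, not taken from any existing tool.

```python
import hashlib
import json
import time
from pathlib import Path

LOG_FILE = Path("generation_log.jsonl")  # one JSON object per generation run

def log_generation(prompt, seed, steps, guidance_scale, sampler, model_name, image_path):
    """Append every parameter I think affects the output to a JSON-lines log.
    (Hypothetical helper; the field list is my guess at what needs locking.)"""
    image_bytes = Path(image_path).read_bytes()
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "prompt": prompt,
        "seed": seed,
        "steps": steps,
        "guidance_scale": guidance_scale,
        "sampler": sampler,
        "model_name": model_name,
        # Hash of the output so a later re-run can be checked for pixel-identical results.
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "image_path": str(image_path),
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example call after a generation run:
# log_generation("a red vintage bicycle...", 12345, 30, 7.5, "DPM++ 2M Karras",
#                "sd-v1-5", "bicycle_seed12345.png")
```

Is something like this enough, or am I missing parameters that also affect the output?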