Consistent Variability Using Seeding with DALL·E-3

I’ve been diving deeper into our favorite generative art tool, DALL·E-3, and I’m especially intrigued by its “seeding” capability. It’s fascinating how this one parameter can steer the consistency or diversity of the artwork. And here’s a pro tip for those using ChatGPT with DALL·E-3: just specify the seed you want, and ChatGPT ensures it’s applied in the DALL·E-3 render.

Experiment Setup:

  1. Prompt: “An older man with a distinctive hairstyle, peering cross-eyed at a daisy perched on the tip of his nose.” Seed: 4567
  2. Prompt: “An older man with a distinctive hairstyle, peering cross-eyed at a tulip perched on the tip of his nose.” Seed: 4567
  3. Prompt: “An older man with a distinctive hairstyle, peering cross-eyed at a tulip perched on the tip of his nose.” Seed: 9876342598
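
To make the workflow concrete, here’s a minimal Python sketch of the same three-run experiment. It only assembles the instructions you would paste into a ChatGPT + DALL·E-3 session; since the seed is requested in natural language rather than as an API parameter, nothing here calls the Images API, and the exact phrasing of the seed instruction is my own assumption.

```python
# Minimal sketch of the three-run seeding experiment above. The seed is
# stated inside the message itself, because in ChatGPT you ask for a seed
# in plain language; the exact wording of that request is an assumption.

BASE_PROMPT = (
    "An older man with a distinctive hairstyle, peering cross-eyed "
    "at a {flower} perched on the tip of his nose."
)

runs = [
    {"flower": "daisy", "seed": 4567},        # run 1: baseline
    {"flower": "tulip", "seed": 4567},        # run 2: same seed, one word changed
    {"flower": "tulip", "seed": 9876342598},  # run 3: same prompt as run 2, new seed
]

for i, run in enumerate(runs, start=1):
    prompt = BASE_PROMPT.format(flower=run["flower"])
    message = f'Please render this with DALL·E-3 using seed {run["seed"]}: "{prompt}"'
    print(f"Run {i}: {message}")
```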

Observations:

  • Using an identical seed with the same prompt ensures a consistent image output, invaluable for retaining a particular style.
  • A minor prompt change while retaining the seed results in an image that maintains the overall aesthetic but varies in specific details.
  • Using a different seed, even with a slight prompt adjustment, can produce a significantly different image, highlighting the seed’s influence on the final output.

The capability to dictate the degree of consistency or variability in images, simply by tweaking the seed, offers immense potential for content creators and artists. I’m keen to explore further and would love to hear from others who’ve experimented in this domain.

2 Likes

Image consistency is one of the main challenges with DALL·E-3, since you don’t have the same toolset as Stable Diffusion, specifically ControlNet. I tried creating consistent characters and style across scenes, tuning the prompt as I progressed. After selecting a particular image that captured the style and character set, I used that image as the reference/seed. I then used the elements below in the prompt to maintain consistency across scenes, giving GPT-4 feedback on whether each image kept the consistency or not (a sketch of how these elements combine follows the list). The success rate was fairly good, but you will see from the images that the characters weren’t wholly consistent. I’m not sure whether you can do img2img; that’s what I’ll look at next.

  1. Reiteration of Visuals: Consistently emphasize specific visual characteristics of the characters from the initial image in every prompt. This includes their posture, facial expressions, attire, and other noticeable features.
  2. Seed Value: The seed value plays a critical role in guiding the generation process. Using the same seed value consistently can help produce images that are visually similar.
  3. Prompt Length: Longer, more detailed prompts can sometimes lead to better results, as they provide DALL·E with more context. By elaborating on the desired outcome, we can guide the generation towards more consistent results.
  4. Graphic Style: By explicitly stating the desired graphic style in the prompt, we can ensure the images match the style of the initial image.
  5. Character Descriptions: By providing explicit and detailed descriptions of Benny and Remy based on the first image, we can attempt to get more consistent character designs in subsequent images.
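
To show how these five elements might combine, here is a hypothetical prompt-assembly sketch. The helper name, the character-sheet details, and the style string are all placeholders of mine, not values from the actual prompts; only the element structure comes from the list above.

```python
# Hypothetical helper that folds the five consistency elements above into a
# single scene prompt. All concrete descriptions below are placeholders.

CHARACTER_SHEET = (  # element 5: explicit character descriptions (invented details)
    "Benny is short and round with a red scarf and a warm smile; "
    "Remy is tall and lanky in a grey coat with expressive eyebrows."
)
GRAPHIC_STYLE = "soft watercolor storybook illustration"  # element 4 (placeholder)
SEED = 4567  # element 2: reuse one seed value across every scene

def build_scene_prompt(scene: str) -> str:
    """Combine elements 1-5 into one long, detailed prompt (element 3)."""
    return (
        f"{scene} "
        f"Characters: {CHARACTER_SHEET} "  # elements 1 and 5: reiterate visuals
        f"Style: {GRAPHIC_STYLE}. "        # element 4: explicit graphic style
        f"Use seed {SEED}."                # element 2: fixed seed
    )

print(build_scene_prompt("Benny and Remy share an umbrella in the rain."))
```

In practice, you would feed each generated image back to GPT-4 and adjust the character sheet wherever consistency slipped, as described above.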

1 Like

Wonderful, but with the new ChatGPT/DALL·E 3 this method no longer works :sob:

1 Like

Why isn’t this possible with the API yet, though… clearly it’s already built…

1 Like

I would really like this to work in the API as well. It’s making me consider Midjourney, which has this feature. All of our clients’ corporate use cases need multiple images, and all of them want similar styles. Is there any update on this?