The "new" Dall-E, uses seeds without seeds

OpenAI is giving us features and killing them the same day.

When we wanted to generate an image “based on an existing (already generated) image”, we could use seeds.

In the beginning, the seed was always fixed at 5000.

But for a couple of weeks we were able to set the seed ourselves, resulting in much more freedom of creation.

Seeds are gone

In the most recent update, the seed option is removed.

They are still used under the hood, and you can retrieve them, but you can’t set them yourself.

Instead of the seed, we now have the generation ID, which is a different thing: somehow a container for all the parameters of an existing image (including the seed, but not limited to it).

The trick

You can, however, still use “seeds” without being able to set them.

  1. Generate an image
  2. Ask for the generation ID (gen_id)
  3. Create another image and refer to this gen_id by the new parameter referenced_image_ids
  4. Use the exact same prompt as the original image
  5. Clearly state “don’t change anything”

This will create an exact copy of the previous image, although there can be some minor changes (numeric noise, the digital equivalent of analogue grain / noise).

Important: generating by gen_id only works in the same session, since this ID is not stored between sessions.
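Put together, the five steps boil down to ChatGPT handing its image tool a payload along these lines. This is just a sketch: the field names (`size`, `n`, `prompt`, `referenced_image_ids`) are taken from what the tool call looks like in this thread, and the gen_id is an example value.

```javascript
// Sketch (assumption): the payload ChatGPT sends to the image tool
// when you apply the trick above. Field names as seen in this thread.
const payload = {
  size: '1024x1024',
  n: 1,
  prompt: 'The exact same prompt as the original image. Don\'t change anything.',
  referenced_image_ids: ['YL7K6llhpo1tQGO5'], // the gen_id from step 2
};

console.log(JSON.stringify(payload, null, 2));
```

Checking this payload before generation is also how you verify that ChatGPT didn’t silently rewrite your prompt.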

  1. If you want to use a seed for an existing image, you can add grammatical noise at the end of the original prompt
  2. This “noise” is just a random string of letters and digits
  3. The longer the string, the more the new image will differ from the original image
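The “noise” really is nothing more than a random string; a minimal sketch in plain JavaScript (the character set here is an illustrative choice, not something DALL-E prescribes):

```javascript
// Generate a random "noise" string of the given length.
// Character set is an illustrative assumption; any letters/digits work.
function makeNoise(length) {
  const chars = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
  let noise = '';
  for (let i = 0; i < length; i++) {
    noise += chars[Math.floor(Math.random() * chars.length)];
  }
  return noise;
}

console.log(makeNoise(8));  // short noise: small deviation from the original
console.log(makeNoise(32)); // long noise: bigger deviation
```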


This is the original image with ID YL7K6llhpo1tQGO5.

Refer to YL7K6llhpo1tQGO5 and add color to it… you see the image is quite different, but that’s because adding color is a major change, not a subtle one. The ID of this image is fnvkQGRhpkrIcJf4.

As a test, I created the same image by referring to its ID fnvkQGRhpkrIcJf4. They look almost the same, but there are small differences (numeric noise, see the end of this post).

This is the exact same prompt, also referring to fnvkQGRhpkrIcJf4, but with 8 useless characters at the end of the prompt: B4FYho2W. The image has differences, but is close to the original.

This is the exact same prompt, also referring to fnvkQGRhpkrIcJf4, but with 16 useless characters at the end of the prompt: xsltYKtzghO7OAds. The image has even more differences, but is still somewhat close to the original.

This is the exact same prompt, also referring to fnvkQGRhpkrIcJf4, but with 32 useless characters at the end of the prompt: Lx4Re5EjXCSxYUuPYZum6hZHxMZkRNMJ. Now the image really starts to change; the look and feel become different (but it is in some way still the same image).


So the longer the end-string, the more differences are introduced.

If you want to create those random strings, you can head over to Strong Password Generator and set your length / difficulty.

As a bonus, I referred once again to fnvkQGRhpkrIcJf4 and altered the JSON to be landscape, not square; it’s the same image, but the style has changed a bit.

And this is “the same image”, created by referring to its ID. There are differences (numeric noise), but they’re minor. E.g. the blue sleeve in the original has stripes and in the re-generation it’s solid. Also the cat’s tail in the middle is somehow different, as are some leaves. But it’s 97.5% the same.


Very detailed and useful information. Thank you very much.

In short: the complexity (length, numbers, digits, special characters, etc.) determines the strength of the seed.

The more complex the string, the more distance the seed will create between the original image and the new one.

I don’t know the exact mathematical threshold at which a difference kicks in, but in short, that’s how it works.

  1. Create an image of a laughing clown, partying rock steady.
  2. Create an image of a laughing clown, partying rock steady. FJ33
  3. Create an image of a laughing clown, partying rock steady. FJ33AVJ38
  4. Create an image of a laughing clown, partying rock steady. FJ33-AVJ_38!
  5. Create an image of a laughing clown, partying rock steady. FJ33-AVJ_38!3B49~0

Always refer to image 1, use the exact same prompt and add some “noise” at the end. Every step will differ more from the starting point.


I tried the method once again and it still works after 4 hours (which is amazing, since OpenAI is changing things per minute now…).

But the wokeness of the system drives me nuts.

Don’t smoke

I took this image as my starting point;

I wanted to add some scrambled code at the end to create a “seed”, but ChatGPT changed my prompt every single time (it kept the prompt, but removed the scrambled part).

After asking 5 times he made clear why he did it;

I am not allowed to create pictures of people smoking.


Don’t copy

So I decided to create the same image as a starting point, but now with lollipops:

Again, I added a “seed” to the prompt and asked to regenerate the image with some scrambled text at the end… it refused… because…

I am not allowed to create images that look like existing styles, e.g. “American ’50s commercial poster”.


Final try

The image above is my starting image; let’s say its ID is ABC123.

I referenced ABC123 and added a string of 16 characters at the end; you can see small differences in the hand of the lady.

Then I referenced ABC123 once again and added a string of 32 characters at the end; not only is the hand changing once again, but the whole style becomes slightly different (clearer, flatter). The ID of this image is DEF456.

Finally, I referenced DEF456 (the last image, which was referencing ABC123) and changed the color of the lady’s hair and dress.


It does work, but be sure to check the JSON data sent BEFORE image creation and AFTER, because the system changes things without saying so, due to “content policies”.


For those needing seeds, I created a simple script that generates some in different lengths / strengths.


<?php

function app__seed( $app__seed_limit ) {

  $app__seed_chrs = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ~!@#$%^&*()_+-=';

  $app__seed = '';

  for ( $app__seed_index = 0; $app__seed_index < $app__seed_limit; $app__seed_index++ ) {

    $app__chr = trim( $app__seed_chrs[ rand( 0, strlen( $app__seed_chrs ) - 1 ) ] );

    // Note: empty( '0' ) is true in PHP, so a picked '0' is simply restored here
    if ( empty( $app__chr ) ) {

      $app__chr = '0';

    }

    $app__seed .= $app__chr;

  }

  echo '<li><span>' . sprintf( '%03d', strlen( $app__seed ) ) . '</span>' . ' <input type="text" value="' . $app__seed . '" autocorrect="false" readonly></li>';

}


?><!DOCTYPE html>
<html lang="en">
  <meta charset="UTF-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <link rel="shortcut icon" href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAIAAAACACAYAAADDPmHLAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyZpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDcuMi1jMDAwIDc5LjU2NmViYzViNCwgMjAyMi8wNS8wOS0wODoyNTo1NSAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIDIzLjQgKFdpbmRvd3MpIiB4bXBNTTpJbnN0YW5jZUlEPSJ4bXAuaWlkOjcyQkFFRkQzN0I2NDExRUVCNDUyOTE1OUVFRkMzMzEyIiB4bXBNTTpEb2N1bWVudElEPSJ4bXAuZGlkOjcyQkFFRkQ0N0I2NDExRUVCNDUyOTE1OUVFRkMzMzEyIj4gPHhtcE1NOkRlcml2ZWRGcm9tIHN0UmVmOmluc3RhbmNlSUQ9InhtcC5paWQ6NzJCQUVGRDE3QjY0MTFFRUI0NTI5MTU5RUVGQzMzMTIiIHN0UmVmOmRvY3VtZW50SUQ9InhtcC5kaWQ6NzJCQUVGRDI3QjY0MTFFRUI0NTI5MTU5RUVGQzMzMTIiLz4gPC9yZGY6RGVzY3JpcHRpb24+IDwvcmRmOlJERj4gPC94OnhtcG1ldGE+IDw/eHBhY2tldCBlbmQ9InIiPz7dy8RhAAAIKklEQVR42uxdC5COVRh+hS13lkQ1tamkKZUuSo1CNV10oYskTWOiMqhmaiomkppBpntpupAR00VSjVS6DpkpbUUiSoRiWDJhd4W1vU/f2ea3du3/f3vO953L+8w886/1f9+e77zP957znst76pSXl5MgXBwkVSACEIgABCIAgQhAIAIQiAAEIgCBCEAgAhCIAAQiAIEIQCACEIgABD6iXm1vMGbMmNDqLI/ZinkM8zBmAbMFsy3zUGZjZjNmI+YhinnqZavL3MvczSxTLGXuUCxh/sXcyNzK3MBcyyxirmZuqVyYUaNGpSsAz3EE81RmR2Z75vHqs2UKdQehrGeuYi5lTmYuSd0DeNYcdmJ2UTyN2cGiZrIB81jFi5lNmANFALUDKrMHszuzK/NIh8re1oo+gIM4gXkVsxfzXIefo50IIHvkM69j9mNe4MkzFahmYLsIoHp0Zg5i9mE29ezZEF0cpTqEIoBKuJI5THWWfI9SRAAZ6M281/G2PdGOoC8CwJv+IPP8wDq0bUIXAMK4caqDFyJa1/YGLs8FjGCuCNj4FdFNcB7gDOYLzLNIkB+aB7iPWSjG/x/NQvEAeNBpzCvE5vugUQgCwNs+S8W8gn3R0PcmAEO3C8X4YQrgAeZ0sbFZD25rEzCBohE9QYACeI45RGybFer71gQ866nxsQ5wl5cuRCPGMoc6aFwYFos31zA3q58rPpcxm1M0V3ETaRi4qYQyXwRwp+r02Y5tzB+Z31I0DYuh
6F8pWsVbFbCu8GaKViDlG/IszgugJ/Npi42OMHQu80tl+G1ZXDNEibq94bI57wEwmzfbQqMvYM5gzlFveLYYwBxO0fLxJFDqugA+t8jofzOnMqcwv8/x2suZDzPPTLjMJS4L4C2K1rSljQ0q+niRol05uXqw8cxrUyq7sx4AQ7zXW9Che4z5RMyKRKf1UYq2e6UFJz0AesNpD/FOUm11UYxrMTk1MQV3XxW21vYGaQwEvZ1iha1QMfnAmMYfraICG4xPMZqs1D0Alm91T6mysIpoaMzYub3qHHaxLFopckkAaCtfTamiMBgzLea1CO1eIjvnTTa5JAD0lhsnXEF/qBAt7jZqGH4Q2Yv1rggA69fvSbhyfmBeGLOjVMCcyTyd7MafrgjgqYQrZh6zGzPOcSiXKOM3IvuxzoUoAG/TDQlWynyKdgDHMT46iR85YvzNOjxAEgIYn2ClLKb428Mep2hE0BX8Rg5MBmHzYp8Ee8Tnxbx2Brm3w2iljpuYFsDwBCsExi+OEZp+qvoLruFn2wWQl2AI1TvGG4GQFNO+p5CbWGq7AG6hKIuFaTzDfDfHa5Dm7WvmceQuFtsugGEJtYN35XgNkjx+R3ZMRdcm/l+t40amooCTKEquaBq57hXM98D4QKGuG5kSwICEwssVObb5PhifVPNltQD6Ga4AzILlsoq4rqq0AvIDC2wWQGfSlMXyAMg1RepXqlnyAcW2ewDTS70WMd/P4ftYdXwO+QMMde+2WQC9DFfA4By+i0UgPckvfKzzZroF0M5wbF2Yg/u7n3kH+YcPbBbAZYYfPtuOH7ZijfPQ+L9QbhtVEhdAd8MP/1kW30M28PfIT8zUfUPdAuhq8OEnZPEdjGzOI3/xus0CwMhfa0MPjo0bU7P43ocGy5A24PqX2CyAsw0rv6YECzi96iKP3/7JJm6qUwCdDT78yzX8fzfmSPIbk2wXQCdDD76uhtCvoe7QyEKgaSuyWQCY9z/R0MPXtJVsNmnIl2c5xpq6sS4BYJzd1EraA4U+d1N6W82SAhZ/zrddAB0MlQ+bOqqb+SpgPkn+4yGTN9clAFMpUeYe4P/mBGB8rP2f7oIATCVDqm7kb6TBPodNMJ45TZcACgyVr6q272gV8/uOTaZCPxMCMHHkKnL3LK/i97MoDCSSMVWHAHBk+uEGyrawit/dbnC8wSb8RAllUtEhABjfRKKkyqnaEGZODOTt75/UH9LlAUy9BZl4hdw+5SxbIIvKYpcE0MZQ2TL3viEpU98AjI+DoG9L8g/qEEArQxWxKuPfrwXi+rGgdo9rAmhhoFwY/vxH/XwjmRtptAlTSPOCz6QEYCINembmi+cDMD7OGhiQxh/WIYCmBspVkX9/hCEPYxtSW8iiQwAmpmIrxgAeCcD4yKCy0mUBmMj9hzGAawII+5BsekaaBdCRH6CB5jIh8RGyfA323PiY5Ut9GVs9S+6Rib0BtP3o7fe3oSA6XGx9zWWq77nxsW/hUlsKY6MAfMYXqnkjnwQgyA5YwdTDtkLpEEC52LZGYFOHldvUxQOYx2jmrbYWTkcPvkxsXG000zftOD8JD7BLbL0fcFZBR9uNr0sAu8Xe+wCZS3HQxDIXCltPPIA2YAq74rwBZ6DDAxSL7f87a6CDa8bX5QF2BGx4DOli88YiVx9APEA84Bj5qyka0l3k8oPo8ADbAzM8zhp+w5cH0iGArQEY/hOKlqZ5l31MBFA9cC4vjrjH/rxCX5UtAtgfSMfyJvOdEJo3HQLY4ngd7KRoGzoSUCMfwe8h9WZ1CKDIwefGtrMFyvBYoLEx1DhWhwA2O9CWY1gWp4V8o3ryK0mg1QMgiyUSN+SlbGiUA2nlsK9weQZLxNTmBIC5gJOZzSnaKNpSsbX6xO+xgxjr/LCHoAlFm0nw88EULSnD9vI6FE0slSnuUYYrVtypjLxRfaLzuVZ5oDVKiKVi0txQp7xcFvSEDFkRJAIQiAAEIgCBCEAgAhCIAAQiAIEI
QCACEIgABCIAgQhAIAIQiAAEXuJfAQYAGFldADihnscAAAAASUVORK5CYII=">

  <style>

    input {
      font-family: monospace, 'Courier';
    }

    body {
      background-color: #222;
      margin: 4rem 1rem;
    }

    span {
      opacity: .5;
      font-size: 75%;
      display: inline-block;
      width: 4rem;
    }

    li {
      color: #8b8b8b;
      font-size: 2rem;
      font-weight: bold;
    }

    input {
      background-color: #333;
      border-radius: .5rem;
      border: none;
      width: calc( 100% - 8rem );
      padding: .5rem 1rem;
      outline: none;
      cursor: pointer;
    }

    input.copied {
      background-color: #363;
      color: #163716;
    }

    ol {
      list-style: none;
      padding: 0;
    }

    li {
      margin: 1rem 0;
    }

    div {
      position: absolute;
      top: 0;
      left: 0;
      width: 0;
      height: .75rem;
      padding: .5rem 0 .75rem 0;
      font-weight: bold;
      background-color: #111;
      color: #444;
      transition: width 10s ease-out;
      text-align: center;
    }

    div.fired {
      width: 100%;
    }

  </style>

  <body>

    <div></div>

    <ol>
<?php

  app__seed( 8 );
  app__seed( 16 );
  app__seed( 32 );
  app__seed( 64 );
  app__seed( 128 );

?>
    </ol>

    <script>

      // Progress bar: animates to full width, then the page reloads for fresh seeds
      let $app__progress = document.getElementsByTagName( 'div' )[ 0 ];
      let $app__progress_delay = parseFloat( getComputedStyle( $app__progress )[ 'transitionDuration' ] );

      $app__progress.innerHTML = $app__progress_delay + 'S';

      $app__progress.addEventListener( 'transitionend', function() {

        location.href = location.href;

      } );

      setTimeout( function() {

        $app__progress.classList.add( 'fired' );

      }, 10 );

      // Click / tap a seed to copy it to the clipboard
      document.body.addEventListener( 'click', function( $app__seed_event ) {

        let $app__seed_next = $;
        let $app__seed_prev = document.getElementsByClassName( 'copied' );

        if ( $app__seed_next.nodeName.toLowerCase() === 'input' ) {

          navigator.clipboard.writeText( $app__seed_next.value ).then( function() {

            if ( $app__seed_prev.length ) {

              $app__seed_prev[ 0 ].classList.remove( 'copied' );

            }

            $app__seed_next.classList.add( 'copied' );

          }, function( $app__seed_error ) {

            alert( 'Copy not succeeded.' );

          } );

        }

      } );

    </script>

  </body>
</html>

You can copy / paste the code (which is a simple mixture of PHP / JS / CSS and HTML) and save the file on a webserver (that supports PHP).

Then open it in your browser; it will automatically refresh every 10 seconds (so new seeds are generated).

A single click / touch on a seed will copy the value to your local device’s clipboard (so you can paste it straight into ChatGPT).

It’s plain text, nothing encrypted (but I can’t give support for it; just take it or delete it).

And the “long encrypted (hashed) line of text” is nothing more than a base64 encoded image of a… seed (the icon for the page);


It’s an all in one file, so no separate assets needed.

This is gold, thank you very much.

Do you know if this can also be used with Dall-e API?

This will create an exact copy of the previous image, although there can be some minor changes (numeric noise, the digital equivalent of analogue grain / noise).

It appears this no longer works. When I asked DALL-E 3 why it was not working as you said, it basically said that seeds were required to replicate the same image, and that having a referenced_image_ids only gave it an image to base it off of. It gave me this example:

Imagine you’re telling a story about a magical forest.

  1. Just using the same prompt: It’s like asking a friend to draw the magical forest based only on your story. Each time you ask, your friend might draw a different looking forest because you only gave the general idea.
  2. Using the same prompt and an image reference: Now, imagine showing your friend a specific picture of a magical forest and then asking them to draw it again based on your story and that picture. The new drawing will likely be more similar to the picture you showed, but it still won’t be exactly the same, because your friend might remember some details differently or add their own touches.

In short, using an image reference helps guide DALL·E to create an image closer to the reference, but it still might have some variations each time.

I don’t know, I don’t have access to the API.

That’s Dall-E 2, I guess?

But I think ChatGPT is just a front-end for the API and that the endpoints on the server are the same.

So when you state clearly you want to add some random stuff (AND DO NOT TOUCH IT!), it must work (with Dall-E 3 at least, which doesn’t have an API right now…).

Important notice

ChatGPT acts smart, but is stupid.

It keeps removing the scrambled text, even when I say DO NOT ALTER IT!, because he says “it's useless”.

So before creating an image, be sure to check what he is going to send to Dall-E.

Also, with longer strings, he thinks it’s rocket science:

I must clarify that the last part of the text you’ve provided seems to be a string of characters that do not form coherent or relevant content to the image description. It resembles an encryption key or a random string, which doesn’t influence the image creation process.

If it’s intended to be part of the image, please advise on its purpose. If it’s not essential to the content of the image, I will proceed without it to ensure the integrity and quality of the image produced. Please confirm how you would like to proceed.

I told him “I just wanted to see how things worked out with that string”, and then he used it.

Up to +/- 32 characters is no problem, but with 64 characters or more he starts complaining (not sure if this depends on the total length of your string, or whether 64 is a fixed limit).

It works both perfectly and flawlessly.

I just reopened an older session (four days old) and asked to recreate an image by its ID.

It was not a 100% copy (but I don’t need that, since I have the original I am referring to).

There was numeric noise present (the digital equivalent of analogue grain), but the look / feel / composition / etc. were spot on.


The image above is the original image I created days ago.

This image (above) is a recreation, based on the ID of the original. There are differences (check the right foot of the penguin in the middle, for instance), but I think the atmosphere / design / characters / etc. are quite the same.

This is again a recreation, but with 32 random characters added at the end (the shoes are gone).

And here I added 64 characters, just to see how much of a difference it would make (the chicken turned into fries and shrimps… I don’t know why, but hey… that’s what scrambling does).


Once again, this is the original and it is serving chicken and pasta.

I referenced the image and told him to serve pasta only.

Final test… I don’t like pasta, give me chicken only.


I just tried via the API, connecting to Dall-E 2. I couldn’t find any string of characters that resembles the gen_id, which is supposed to be 16 chars long.

I checked the EXIF data and the URL of the generated image.

And according to what I asked ChatGPT, they don’t provide this data explicitly.

Did anyone have any success with this, IF it’s actually possible?

It’s Dall-E 3 exclusive.

They introduced it about a week ago.

The gen_id is sort of the single point of entry to fetch all (meta-)data for an image.

It contains the prompt, size (dimensions), timestamp, references, etc…

As long as you have access to the session, you can retrieve this data (and iterate any image based upon it).

Raw export

Can you give me the generation ID of the last image and tell me if there were any referenced image IDs used?

The generation ID for the last image is 7DXoYydUrrdyEgBh. Yes, there was a referenced image ID used, which is IxaLAlUsK5rhpyt1.

What is the size of 7DXoYydUrrdyEgBh ?

The size of the image with the generation ID 7DXoYydUrrdyEgBh is 1024x1024 pixels.



Ok I just figured it out. I read what you were saying in another thread, that it has to be in the same session for it to work. So thanks for that.

I hope they bring back being able to change seeds; I was making some really cool images with those.

Thanks a lot for the detailed analysis. I did an earlier analysis of the seed (in the ancient times when the seed was clamped at 5000…), got very happy when the seed was allowed to change, but that was ephemeral.

I am frankly still a bit unclear about how gen_id exactly works under the hood, and it is annoying that it doesn’t work across different sessions, but well, better than nothing.

Grammatical noise is what we used in the ancient times of seed=5000, but unfortunately it does not quite enforce the same degree of variability that actually changing the seed would have. But again, better than nothing.

It worked perfectly for me, thank you!

// sudoLang implementation
Dalle {
  // State to store image requests and generated images
  imageRequests: []
  generatedImages: []
  referencedImageIds: [] // New field to store image IDs for referencing

  // Method to create an image based on a description using JSON format
  text2imJSON(description, size = "1024x1024", n = 2, referenced_image_ids = []) {
    // Convert the description and other parameters to a JSON format
    jsonPrompt = {
      "size": size,
      "n": n,
      "prompt": description,
      "referenced_image_ids": referenced_image_ids
    }

    // Create an image based on the JSON formatted prompt
    text2im(jsonPrompt) |> add to generatedImages
    // Store the referenced image IDs
    add referenced_image_ids to referencedImageIds
  }

  // Method to replicate an image using its generation ID
  replicateByGenId(gen_id, grammatical_noise = "") {
    // Use the same prompt and gen_id to replicate the image
    // 'grammatical_noise' can be added to the prompt for variations
    originalPrompt = findPromptByGenId(gen_id)
    newPrompt = originalPrompt + " " + grammatical_noise
    text2imJSON(newPrompt, "1024x1024", 1, [gen_id])
  }

  // Helper method to find the original prompt by gen_id
  findPromptByGenId(gen_id) {
    return imageRequests.find(request => request.gen_id === gen_id).prompt
  }

  // Method to get the list of generated images along with metadata
  getGeneratedImagesWithMetadata() {
    // Return the list of generated images with metadata such as gen_id and seed
    return => {
      return {
        image: image,
        gen_id: image.gen_id,
        seed: image.seed
      }
    })
  }
}
Yeah, the freedom of seeds was the best thing ever.

I think you can see the gen_id as a kind of “wrapper” for all the information regarding an existing image.

So when you have a gen_id and use it as a reference for a new image in referenced_image_ids, that single gen_id contains things like:

  • Original dimension
  • Prompt
  • Seed (which is still used, but can’t be set)
  • Metadata
  • Context
  • More things we don’t even know of…

Instead of repeating all those separate parameters, you can “just” refer to the ID, and all the things that “made that image the image it became” are encapsulated in it… (I think).
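To make the “wrapper” idea concrete, here is a purely hypothetical sketch of what a single gen_id might resolve to internally. Every field name and value here is an assumption based on the bullet list above, not documented behavior:

```javascript
// Hypothetical (assumed, not documented): what referring to a gen_id
// might conceptually bundle. IDs reused from the examples in this thread.
const generations = {
  fnvkQGRhpkrIcJf4: {
    prompt: 'the original prompt used to create the image',
    size: '1024x1024',                          // original dimensions
    seed: null,                                 // still used internally, but hidden
    referenced_image_ids: ['YL7K6llhpo1tQGO5'], // what this image itself referenced
  },
};

// Referring to the ID pulls all of that in at once, instead of
// repeating every parameter by hand.
const ref = generations['fnvkQGRhpkrIcJf4'];
console.log(ref.size);
```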


Was anyone able to retrieve the gen_id via dall-e 3 API?

I don’t use the API, but at least when the API was launched, the gen_id was not available (by API).

@Foo-Bar Thanks for putting this detailed explanation together.

I am a little confused with the process/utility of the noise creation aspect of this as opposed to focused iterative prompting.

Below are some examples of character (I like to call him Alabaster Cowley) images I created without focusing on adding noise and instead used iterative prompting. The noise seems to only increase the level of randomness (as varying the seed used to) within the image.

I am not typically looking for randomness in the majority of my image generations. Is the noise meant to give you ideas for adjustments or am I missing something? Thanks for the clarification!


Well, I don’t know…

No seeds

The thing is, we used to have access to seeds.

A seed gave you something like “the same image, but with a touch”, every time you used another seed.

So basically the approach (from Dall-E) for that specific image was the same (as long as you kept the same prompt).

Characters, lighting, composition, material design, colors, etc. were always kind of the same, and the “seed” only changed those things a little bit, giving you more variations on the same subject.

Every image (same prompt, different seed) was recognizable as an image “made in the same session, with the same prompt, and the same look and feel”… but different as well, because of the seed used.

Seeds 'r gone

Now seeds are gone; we have no seeds left (they are still used, and you can ask for them after a creation), but we cannot set them ourselves.

The “scrambled text” at the end of a prompt is “kind of a seed thing, but not really a seed thing”.

It alters small things (especially when you also use the same “reference ID” for an existing image), but isn’t as powerful as a seed.

So it’s try-and-see what the “scrambled text” does this time, and whether it’s a good or a bad thing for your image.

The logic behind it is that the more complex / longer the scrambled text is, the more different the image will be compared to the original one without any scrambled text.

You can alter small things this way, like hands, style, composition, etc… but you never know what comes out…


Redraw the cartoon image with ID gBbLqfGBAFyGcBm3 in the ‘Ligne claire’ style, featuring Tintin and Snowy. The scene shows them exploring an ancient temple in a dense jungle, with the temple adorned with intricate carvings and overgrown with vines. The ‘Ligne claire’ style is characterized by clear, strong lines and bright, uniform coloring, reminiscent of Hergé’s classic Tintin comics. The composition should be vibrant, with rich details to maintain the adventurous atmosphere of the original scene.

This is the original image (the image above, I mean)

I asked for the generation ID of the first image and told him “create it again, with reference ID xxxxx” (it’s 99% the same).

I repeated the exact same prompt (as before, so with the reference ID) and added 16 scrambled characters: T@qOb$EWzfVJXi~d. You can see the dog is different, as are the people in front of the temple, but the image itself is still the same (look and feel).

Same prompt, reference ID, etc., but with 32 characters: JZsNKAXGUFoaDgiyntbCoeinsxtVWBLv. Interestingly, it is now full widescreen (which I always wanted, but it never did before).

Finally, 64 characters added; 4KYtgPi+S0wW6a$6jm6K5=_!fGI8@zk3p2KNjsKzZ)wvY=h_FFb$lH)*D=aPjFSG. Tintin decided to walk away, lol…

For those interested, I can post the “create scrambled text” script here once again (I did optimize some things, but it’s still PHP / JS / CSS in the end).

You can’t predict the outcome, so it isn’t as powerful as the seed (but it’s all we have at the moment).

Yes, I understand the seed versus no-seed dilemma, as that is how we used to get focused iteration in images. However, the addition of noise to the gen_id (which, based on our current understanding, wraps all metadata) creates randomness in the image, which is typically seen as undesirable.

In the interest of trying to nail down how these things all relate to each other, I have noticed that the creation of randomness in the image (via gen_id noise) acts in the same way that changing the seed slightly, while using an identical prompt, used to in the past. So, would it be reasonable to assume that the inclusion of additional noise on the back end of the gen_id is altering the seed in the same way we used to do it manually or am I way off now?

As a test of this, can you use your original prompt and gen_id (unadulterated) for the very first test image above and simply add the following to the end of the prompt? Let us know how it goes:

Tintin is walking away.