Yeah, I am the only person in the world having this issue, lol.

The image you created is indeed the original image I started with before the update of the system.

But when I use your code (which is “my” code) I never get the same seed (and image) in return.

It simply says it can’t use that seed.

BTW, ChatGPT seems unwilling to generate the image in one reply.

In order to reproduce your issue, I tried 3 times. Fortunately, it eventually generated that image in one reply.

Lucky you. I got all sorts of images, but not the original.

I just wanted the original image, with the “shorts” replaced by “jeans”.

After all, I have been testing for 24 hours now and I never got that first image back… it keeps changing the seeds.

I think we are onto something.

dall-e.md >> this update is 4 days old and shows the seed parameter (which is not working anymore).

all_tools.md >> this was updated 4 hours ago and, in its most recent iteration, adds referenced_image_ids as a valid parameter.

Both at the bottom of the page.

Update: they are really changing things (see the .git repos in the previous thread), and also in the formatted JSON data.

n = 2 is no longer being used, and the multidimensional array now has separate key/value pairs per image.

[
  {
    "gen_id": "YbZ7vyPfPtAN3lp1",
    "prompt": "(some text) The image was taken with an old analog camera, making it slightly out of focus and grainy. Wide, landscape mode.",
    "seed": 70155904
  },
  {
    "gen_id": "gjsDYTIGlEvATWrQ",
    "prompt": "(some text) The image was taken with an old analog camera, making it slightly out of focus and grainy. Wide, landscape mode.",
    "seed": 1409843189
  }
]

Okay, things are clearing up.

Imagine this image as our starting point:

If you want this image “but with a twist”, simply refer to its gen_id and change the seed.

But if you want the style / atmosphere, etc. with a bigger change, add that change to your prompt and refer to the gen_id as the base image.

E.g. “show two women with flip-flops”;

As you can see, the woman in the first picture (red dress) is also (kind of) present in the second image. And the second lady is swapped with the first man (same glasses / white clothes).

But the look and feel, park, stones, background, etc… are the same (atmosphere / style) because of the referred image.

You can now take the second picture as a starting point and use that unique gen_id for further iterations.

E.g. “swap the stones on the ground with grass”;

This is, somehow, the same image, but now situated on a grass field.

{
  "size": "1792x1024", // dimension of the image
  "prompt": "(text)", // text that describes the scene
  "n": 1, // number of wanted images
  "referenced_image_ids": ["Mrwfxmlt4MheddKh"], // the #ID of the original image
  "seeds": [3091385326] // the seed of this particular image
}

Choice matrix

  1. Want a complete different image?
    Write a complete new prompt.

  2. Want the same image style / concept, but with a twist?
    Refer to the gen_id and force another seed.

  3. Want the same image, but with some other details?
    Refer to the gen_id and write the changes in the original prompt.
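The choice matrix can be sketched as a small request builder. This is a hypothetical helper; the field names (referenced_image_ids, seeds) are the ones observed in this thread, not an official DALL-E schema.

```python
# Hypothetical helper mirroring the choice matrix above.
# Field names follow this thread's observations, not official docs.
def build_request(prompt, size="1024x1024", gen_id=None, seed=None):
    payload = {"size": size, "prompt": prompt, "n": 1}
    if gen_id is not None:
        # Cases 2 and 3: keep the style of the referenced image.
        payload["referenced_image_ids"] = [gen_id]
    if seed is not None:
        # Case 2: force another seed for "the same image with a twist".
        payload["seeds"] = [seed]
    return payload
```

Case 1 (a completely different image) is just `build_request("new prompt")` with neither a gen_id nor a seed.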


Tip

When you refer to a gen_id, the same approach is used as for that particular image, along with the same parameters (prompt, dimensions and also seed).

So when you refer to a gen_id and say nothing about another seed, Dall-E will use the exact seed of the referred image (you don’t have to add it yourself).


Can you now reproduce the original image?

Very strangely, I haven’t gotten any update events…

Well, hope is gone.

I can recreate an existing image by using its gen_id. But then you get the exact same image, which is not what I want… I mean, I do have that image already.

What I want is to take “a picture” as a reference and make iterations on it. Sometimes by a different seed and sometimes by altering the prompt (while keeping the style the same; that’s where the gen_id comes in).


The day before yesterday everything worked okay (the old system, before the update), then it was fckd up, and yesterday “I did have a workflow that worked” (see my decision matrix in an earlier thread).

Today I am testing and… the freaking update (I hope this word is allowed) removed the support for seeds.

Example

Take this for an example; “create a fluffy Furby lit by fluor light”.


{
  "size": "1024x1024",
  "prompts": [
    "A photo of an extremely fluffy Furby toy from the '90s, perched on a table that is illuminated by a fluorescent tube light. The camera angle is close to the table, utilizing a Dutch Tilt to create a canted angle, contributing to the dynamic composition. This photo emulates the style of film photography from the nineties, with a nostalgic, analog feel, characterized by graininess, warmer color tones, soft focus, and the unique quirks of film photography, including some technical glitches that add to the vintage atmosphere."
  ],
  "gen_id": [
    "0NWWOSiHmwzx8wD3",
    "cM8c8LYuwBoqCReW"
  ],
  "seeds": [
    1512334761,
    3307475022
  ]
}

So far so good, now take image cM8c8LYuwBoqCReW (the second image from above) and place two Furbies on the table.

{
  "size": "1024x1024",
  "prompts": [
    "A photo showing two extremely fluffy Furby toys from the '90s, side by side on a table that is illuminated by a fluorescent tube light. The camera angle is close to the table, employing a Dutch Tilt for a canted, dynamic composition. The image should have the nostalgic, analog feel of film photography from the nineties, complete with graininess, warmer color tones, soft focus, and typical film photography quirks, including a few technical glitches that add to the vintage charm."
  ],
  "referenced_image_ids": ["cM8c8LYuwBoqCReW"],
  "gen_id": ["RgcrkBPCAs695iyW"],
  "seeds": [3307475022]
}

Okay, it looks like the second image was referenced and I do see two Furbies (not sure if the image is really taken as a starting point, I mean… what can you expect? At least the camera angle and background are not the same, but the “look and feel” are).

Then I tested with “create the image with ID RgcrkBPCAs695iyW once again, but this time you have to use the seed 3307475023”.

{
  "size": "1024x1024",
  "prompts": [
    "A photo showing two extremely fluffy Furby toys from the '90s, side by side on a table that is illuminated by a fluorescent tube light. The camera angle is close to the table, employing a Dutch Tilt for a canted, dynamic composition. The image should have the nostalgic, analog feel of film photography from the nineties, complete with graininess, warmer color tones, soft focus, and typical film photography quirks, including a few technical glitches that add to the vintage charm."
  ],
  "referenced_image_ids": ["RgcrkBPCAs695iyW"],
  "gen_id": ["tSLideS8urnRDTr1"],
  "seeds": [3307475023]  // Please note that the seed used here is the same as the previous generation due to system limitations; it does not allow specifying a new seed number.
}

OMG! It is the exact same image because the seed was not used, and why?!

Because ChatGPT had already mentioned it in the comments (the thing knows what I am up to, I am afraid):

Please note that the seed used here is the same as the previous generation due to system limitations; it does not allow specifying a new seed number.

Then I asked why it was doing this and what the code was for the last image:

Show me the parameters you did send to Dall-E for the last image.

{
  "size": "1024x1024",
  "prompt": "A photo showing two extremely fluffy Furby toys from the '90s, side by side on a table that is illuminated by a fluorescent tube light. The camera angle is close to the table, employing a Dutch Tilt for a canted, dynamic composition. The image should have the nostalgic, analog feel of film photography from the nineties, complete with graininess, warmer color tones, soft focus, and typical film photography quirks, including a few technical glitches that add to the vintage charm.",
  "referenced_image_ids": ["RgcrkBPCAs695iyW"]
}

It didn’t even send the seed, and reminded me once again:

Please note that while you requested to use a new seed, the Dall-E API currently does not support specifying seeds for generation. Therefore, the same seed as the previous generation was used due to system limitations. If there’s anything else you need, please let me know.

Now I can regenerate an exact image by its gen_id and use that as a base for iterations. But the same image with a slightly different approach is not possible, because the seed is not supported.

Force it!

When I try to force the system, it crashes;

{
  "size": "1024x1024",
  "n": 1,
  "prompt": "A photo showing two extremely fluffy Furby toys from the '90s, side by side on a table that is illuminated by a fluorescent tube light. The camera angle is close to the table, employing a Dutch Tilt for a canted, dynamic composition. The image should have the nostalgic, analog feel of film photography from the nineties, complete with graininess, warmer color tones, soft focus, and typical film photography quirks, including a few technical glitches that add to the vintage charm.",
  "seed": [1075358085]
}

It seems there was an error in generating the image with the specified parameters. Let’s try again. I will send the same parameters to Dall-E once more to attempt to create the image you requested.

Try it again.

I’ll resend this data to the system and update you shortly. It appears that the error persists, and we are currently unable to generate the image with the provided parameters and seed. This could be due to a temporary issue with the image generation service.

Do it.

Unfortunately, at this moment, there isn’t a way for me to resolve this directly. I would recommend trying again later when the service may be back to normal operation. If you have any other requests or need assistance with something else, please let me know!

References

At least the references do work, or so it looks? I am not sure what kind of ‘exact copy’ you can expect by using this parameter.

I first created this image and then referenced it, modifying it by placing a mask on one of the two Furbies;

This looks okay-ish, I guess? Same style, greenish light, somehow the same Furby on the left.

Of course I did try it again, this is another original image;

The camera in the picture comes back, so that’s because of the reference, I guess? But it would be nice to try another seed for the last one, because I don’t like some things.

And seed is not supported any longer.

Another try

This is the original image;

Okay, add an Anonymous mask;

Now give them Gremlin ears;


So iterating is okay, but seeds are not supported.

  1. Up until now, I still cannot use referenced_image_ids. Perhaps I am using “the old system” you mentioned.

    In “the old system”, if I want to make iterations on an original image, I usually modify its prompt (e.g. add some random spaces or numbers) rather than the seed. In my experience, seeds are used to fix the style; modifying the seed often breaks the existing style.

    The drawback of this approach is that

    • Making big changes to the prompt might break the existing style, even if using the same seed.
    • It’s inconvenient to combine two characters with the same seed. You have to copy-paste two characters’ descriptions to one prompt.
    • It’s impossible to combine two characters with different seeds.
  2. Now you said that “the new system” supports “reference images”. This is good, because it seemingly could fix the drawbacks I mentioned above.

    So now you can recreate an existing image via

    • If the prompt of that existing image does not contain referenced_image_ids (i.e. an initial image), use the same size, prompt and seed.
    • if the prompt of that existing image contains referenced_image_ids, use the same size, prompt and referenced_image_ids.

    Is that right? This sounds logical, because now referenced_image_ids plays the same role as the seed in “the old system”, but is more flexible. (If you want to make iterations on an original image, just add some random spaces or numbers in the prompt.)

    Anyway, this is just my guess. I haven’t got the update event. WHY???

  3. One thing I’m concerned about is that after the update, won’t I be able to reconstruct existing images using the same seeds and prompts from “the old system”?

    I have some nice images generated by DALL-E 3, for which I saved the prompts and seeds locally (some of the sessions have been deleted). If “the new system” can’t further iterate on these images, these images become meaningless (at least for DALL-E 3, since it doesn’t support uploading images).
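If the guess about recreating existing images holds, the rule could be sketched like this. This is pure speculation based on the field names seen in this thread, nothing confirmed.

```python
# Sketch of the guessed recreation rule: an image generated from a
# reference is recreated via referenced_image_ids, otherwise via its
# seed. Unconfirmed; field names follow this thread.
def recreation_params(image):
    base = {"size": image["size"], "prompt": image["prompt"]}
    if image.get("referenced_image_ids"):
        base["referenced_image_ids"] = image["referenced_image_ids"]
    else:
        base["seeds"] = [image["seed"]]
    return base
```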

Yeah, it is somehow a mess.

The reference image is a cool concept, because it is a container for all the characteristics of an image.

The prompt, style, seed, dimensions, colors, camera angle, etc… are all ‘saved’ inside that reference ID.

So if you like an image and want to change some details (but still keep the original approach of the image) you can reference it and see what comes out.

But I also want the seed option.

Because sometimes an image is okay, but you want two of the same kind. Or sometimes it is okay, but you want it “just a little bit different”.

That’s where the seed comes in.

And I cannot recreate my images from before the update, not even when I use the exact prompt (or seed, but the seed is not supported at all now).

I can test for you if you want; give me an exact prompt / seed and I will see what comes out.

To be clear : the seed parameter is not supported at all (since today, and at my side).

Dall-E still uses it (it gives me all the seed numbers for every image), but when I say “use the exact seed” or “use another seed” (with the same prompt), it simply says “can’t do” (and crashes).

OK.

I just created an image, you can try it.

Just copy paste and don’t change anything.

Send this JSON data to the image generator, do not modify anything. After generating an image, show me the JSON data that the image generator returns to you.
```
{
  "size": "1024x1024",
  "prompts": [
    "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder."
  ],
  "seeds": [3075182356]
}
```

Let me see what comes out.


Then you can fine-tune the image by adding some random strings.

Something like

Send this JSON data to the image generator, do not modify anything. After generating an image, show me the JSON data that the image generator returns to you.
```
{
  "size": "1024x1024",
  "prompts": [
    "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder.  510F749a81123"
  ],
  "seeds": [3075182356]
}
```

Note that the spaces and the random string 510F749a81123 are just for fine-tuning. This is better than using a new random seed, which breaks the style (some images are still pretty good, but they’re never the same beast anymore).
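Generating such a nudge token is trivial to sketch. The helper name is mine; the idea is only to change the prompt's text without changing its meaning, while keeping the seed fixed.

```python
import secrets

def nudge_prompt(prompt):
    # Append a short random hex token (12 chars) so the prompt differs
    # textually while the description (and the fixed seed) stay the same.
    return prompt + "  " + secrets.token_hex(6)
```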

Let’s start simple, by executing one example first.

I sent your complete first code and asked (in my custom instructions) to show the exact code used before generating the image itself;

{
  "size": "1024x1024",
  "prompts": [
    "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder."
  ],
  "seed": 3075182356
}

Okay, so the code above is what I requested and what it is going to use.


I asked for ONE image and a SPECIFIC seed, but I got two images with totally different seeds;

{
  "gen_id": "ry0SAvdbI46jl4fV",
  "prompt": "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder.",
  "seed": 2275396183
}

{
  "gen_id": "gTGgythSTN6t6PPi",
  "prompt": "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder.",
  "seed": 2304176889
}

It also returned the gen_id, or the generated image ID, which you can use as a reference (but only in the same session).

You can never re-create an image by prompt / seed, nor by gen_id, in a different (or older) session.

Finetuning

I want to refer to image ‘gTGgythSTN6t6PPi’.

Take that exact image and make it landscape, do refer ‘gTGgythSTN6t6PPi’.

Show me the JSON parameters before and after the image creation.

CODE SENT

{
  "size": "1792x1024",
  "prompts": [
    "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder."
  ],
  "referenced_image_ids": ["gTGgythSTN6t6PPi"]
}

CODE RETRIEVED

{
  "gen_id": "9HpCrnHWt3rPIKY3",
  "prompt": "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder.",
  "seed": 2304176889
}

This image is the same (reference ID) as the second one, but converted from square to landscape.

I did the exact same for the first image (use reference ID, make it landscape).

As you can see, it works perfectly. The beast / camera / surroundings are kept almost intact.

But you cannot use another seed, which I dislike (because I like the concept of seeding).

Last try

I asked to take the last image as the reference and let the monster / beast move away from the camera instead.

But it doesn’t do that, because it refers (strictly) to the base image, where the beast walks towards the camera.

Then I asked to change the camera to a high angle, while still referencing the last image.

Firstly

Sorry, I found that there are some syntax mistakes in the above example (but ChatGPT can tolerate the mistakes, so the final result is unchanged).

I just wrote

Send this JSON data to the image generator, do not modify anything. After generating an image, show me the JSON data that the image generator returns to you.
```
{
  "size": "1024x1024",
  "prompts": [
    "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder."
  ],
  "seed": 3075182356
}
```

This is wrong. The correct syntax should be

Send this JSON data to the image generator, do not modify anything. After generating an image, show me the JSON data that the image generator returns to you.
```
{
  "size": "1024x1024",
  "prompts": [
    "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder."
  ],
  "seeds": [3075182356]
}
```

The correct format is "seeds": [3075182356] rather than "seed": 3075182356.

Anyway, the results are the same (so I edited the reply).

  1. Seed: 3075182356

    Send this JSON data to the image generator, do not modify anything. After generating an image, show me the JSON data that the image generator returns to you.
    ```
    {
      "size": "1024x1024",
      "prompts": [
        "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder."
      ],
      "seeds": [3075182356]
    }
    ```
    

  2. Seed: 3075182356 + random string 510F749a81123 (make it just a little bit different)

    Send this JSON data to the image generator, do not modify anything. After generating an image, show me the JSON data that the image generator returns to you.
    ```
    {
      "size": "1024x1024",
      "prompts": [
        "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder.  510F749a81123"
      ],
      "seeds": [3075182356]
    }
    ```
    

Secondly

So… can I understand that after the update you no longer have the ability to reproduce any images?

For example, if someone gives you the exact prompt and seed (like this example), you have no way to reproduce the image on your machine?

Thirdly

I can reproduce the two images on my computer, using just size, seed and prompt.

Take the seed 2275396183 as an example.

Could you retry the example with "seeds": [3075182356] instead of "seed": 3075182356?

I think that not being able to recreate the image from the seed is a bug, as it hinders the potential for collaborative development. If you ask me to fine-tune a certain image, do I have to access your account and enter a specific session? That’s impossible. – I know that ChatGPT can share a session link, but what if I just want to share one image?

I am afraid so.

I just did re-test your code, given in your reply;

Use this code, don’t change any parameter.

{
  "size": "1024x1024",
  "prompts": [
    "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder. "
  ],
  "seeds": [3075182356]
}

Then it said “It seems there was an error in generating the image. Let’s try again.”

On the second try it said “It appears there’s a persistent issue with generating the image using the provided parameters”.


Then I used the exact same code once again, but without the seed;


That did work; so whenever a seed is sent, the system crashes.

For your reference, this is the data I got back;

[
  {
    "gen_id": "durkRYUfwnFSdmeL",
    "prompt": "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder.",
    "seed": 4025466874
  },
  {
    "gen_id": "SA3I5Mt94q2czJcp",
    "prompt": "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder.",
    "seed": 2083201434
  }
]

IDEA

Now the interesting part is… can YOU create the last image in your system with those parameters?

  {
    "size": "1024x1024",
    "prompt": "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder.",
    "seed": [2083201434]
  }

By the way, I don’t think the seed number has to be encapsulated in brackets or even quotes.

It’s a number, not a string (so no need for quotes), and the brackets are only used for an array, which is not the case for a single integer.

Dall-E sees whether you want multiple items (an array, comma separated) or a single instance.

Yes, I can.

In my system, the type of the seed parameter is an array.

To clarify,

The JSON format input to DALL-E 3 is

{
  "size": "1024x1024",
  "prompts": [
    "YOUR DESCRIPTION 1", "YOUR DESCRIPTION 2", "YOUR DESCRIPTION 3"
  ],
  "seeds": [SEED1, SEED2, SEED3]
}

The JSON format output from DALL-E 3 is

{
  "gen_id": "GEN_ID",
  "prompt": "RETURN_DESCRIPTION",
  "seed": RETURN_SEED
}
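So, assuming those shapes, a batched request should map to one response object per prompt/seed pair. A sketch of that mapping; the gen_id is assigned by the generator, and the shapes follow this thread's observations, not an official schema.

```python
# Sketch of the batched-in / per-image-out mapping quoted above.
def expected_responses(request):
    return [
        {"gen_id": "<assigned by DALL-E>", "prompt": p, "seed": s}
        for p, s in zip(request["prompts"], request["seeds"])
    ]
```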

Okay, so you can grab data from the NEW model and execute it in the OLD model.

But it can’t be done vice versa, because the NEW model does not execute the OLD data (with a seed).


// For the API (if they did it good) this is the same;

"seeds" : 43289324890

// or this;

"seeds" : [ 43289324890 ]

// and even this;

"seeds" : [ "43289324890" ]

// or with more entries;

"seeds" : [ 43289324890, "324893428904", 3248439280 ]

All values are valid, because the Python script will try to work out what the intent is.

Is it an array of seeds? A single seed? A numeric seed? A string seed? Etc…

At the end it will parse the value, if valid.
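Such tolerant parsing could look something like this. This is my assumption about how the backend might normalize the field, nothing confirmed.

```python
def normalize_seeds(value):
    # Accept a bare seed, a list of seeds, or strings of digits,
    # and always return a list of ints. Raises ValueError on junk.
    if not isinstance(value, (list, tuple)):
        value = [value]
    return [int(v) for v in value]
```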

Sometimes I try to trick ChatGPT in order to see if my code is good.

So I ask something like “use this code, correct it when wrong”;

"seeds" : 3298238904,
"bananas" : [ 'yummy', 'delicious' ]

Then it removes bananas (and keeps the seed), so I know the code-testing is done correctly.

But now it is removing both bananas AND the seed, because it does not support seeding.

Recreating a specific image using a seed may not be 100% reliable.

The following two images were generated twice, in different sessions, using the same size, prompt and seed.

Comparing them at the pixel level, you will find they are slightly different. Perhaps the DALL-E 3 model changes over time (e.g. online learning).

That is called “numeric noise”, as an equivalent to analog grain.

I have seen the same when using FileDiff software at the pixel level.
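Counting the differing pixels is easy to sketch, given two equal-sized images decoded to lists of (R, G, B) tuples (e.g. via Pillow's Image.getdata(); the helper itself is pure Python and is just an illustration, not the tool used above).

```python
def pixel_diff_ratio(pixels_a, pixels_b):
    # Fraction of positions whose pixel values differ between two
    # images of identical dimensions.
    if len(pixels_a) != len(pixels_b):
        raise ValueError("images must have the same dimensions")
    differing = sum(1 for a, b in zip(pixels_a, pixels_b) if a != b)
    return differing / len(pixels_a)
```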

The biggest problem is (in the new system) that you can never, ever recreate an existing image after some time.

Even if you use the exact same prompt, a different seed is attached and you cannot change it.

Even in the same session, when it is bigger than 32k tokens, the memory in that session is reset.

Resulting in not being able to fine-tune any existing image.
