Censorship removed this post and title

That is what I do (also in my custom instructions).

But since the update, it fails to use the seed, even when it tells me it did.

See the example: I have an image with seed 13...08.

I ask it to recap my code, show it before creating, and wait until I say “create image”.

So the code in the image is the exact code I want and the code it says it will use.

Then it comes back with a totally different seed, 26...23 (and a different image).

It’s unusable.

I don’t know what the “(now deprecated) Dall-E3 model” is.

This is my GUI.


They said it was deprecated because there was now ‘one model that fits all’.

The old Dall-E3 uses the image generation ID, and the new model doesn’t.

But now they have reverted to the old model, and that one doesn’t understand seeds.

It’s a complete mess. Check this:

Image with seed 1386778108.

Then I asked it to create that exact same image with the same seed:

The “same” image with the same seed 1386778108.


OK, so your issue is that ChatGPT did not use the seed you specified?

Try the following prompt (using your example):

Send this JSON data to the image generator, do not modify anything. If you have to modify this JSON data, please let me know and tell me why in your reply.

```
{
  "size": "1024x1024",
  "prompts": [
    "Photo of an African American man and woman in their twenties enjoying a hot summer day in Central Park, New York, 1985. The woman is wearing shorts. The image has a nostalgic atmosphere, characterized by the graininess, warmer tones, soft focus, and the quirks of film photography.",
  ],
  "seeds": [3172394258]
}
```

Yes. It does not obey the seed after the update.

And it becomes more complex, because I asked:

Why do you create different images with the same prompt and exact same seed?

The seed value in neural networks, including DALL·E, is used to initialize the random number generator, which in turn affects the randomness in the model’s operations. When you provide the same seed, the random operations in the model should produce the same outcomes, leading to consistent results.

However, when referencing a previous image (as we did earlier) with a seed, it may influence DALL·E to generate variations of the referenced image. Without the reference, even with the same seed, the generation may be different because the context has changed.

Another possible factor could be the internal state or any updates to DALL·E between requests, but typically, the seed should produce consistent results.

In short, while the seed provides a degree of reproducibility, it doesn’t guarantee an exact duplicate, especially when other parameters or the model’s context change.

I will test.

I think there are two things that need checking:

  1. Check if ChatGPT is sending the correct JSON data.

  2. Check if DALL-E 3 is using the seed you specified.

The second check is more involved, but you should at least rule out the first one.
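
One way to do the first check: after generating, ask ChatGPT to show the exact payload it sent, then compare it with your input. If the seed was honored, the echoed JSON should match what you provided, e.g. (seed taken from the example above, prompt abbreviated):

```
{
  "size": "1024x1024",
  "prompts": ["(prompt as above)"],
  "seeds": [3172394258] // should match the seed you specified; any other value means it was swapped
}
```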

Here you are:

It’s totally f*cked up since the update (which they reverted).

  • I give a seed: 3172394258
  • ChatGPT says it is going to use this seed: 3172394258
  • The final image uses another seed: 786240356

And now it says:

The seed value used in the final image generation differs from the one you provided. This is because DALL·E uses its own internal seed generation mechanism, and the provided seed might not have been directly used.

I mean, when it does change the seed internally, what’s the point of using a seed to recreate an image?

How can you recreate an image and build upon it?!

I tried once again with the exact same parameters; again, a totally different seed and image in the output.

Based on the above image, I decided to fetch the exact JSON output with the ‘gen_id’ included, and to write the parameters down in human language instead of structured data:

Create an image using these exact parameters, do not change anything:

  • Size : 1024x1024

  • Prompt : Photo of an African American man and woman in their twenties enjoying a hot summer day in Central Park, New York, 1985. The woman is wearing shorts. The image has a nostalgic atmosphere, characterized by the graininess, warmer tones, soft focus, and the quirks of film photography.

  • Seed : 503726638

  • Reference image : 90DUdFCbObwiBDLI


With both the seed and reference image mentioned, I got the same image;


Then I decided to skip the reference image and simply reuse the seed only:

Create an image using these exact parameters, do not change anything:

  • Size : 1024x1024

  • Prompt : Photo of an African American man and woman in their twenties enjoying a hot summer day in Central Park, New York, 1985. The woman is wearing shorts. The image has a nostalgic atmosphere, characterized by the graininess, warmer tones, soft focus, and the quirks of film photography.

  • Seed : 503726638

This time the system decided not to keep the seed and created a random seed itself.

I asked for 503726638 and I got 2312976332.


As a final test I asked for the last image with the reference:

Create an image using these exact parameters, do not change anything:

  • Size : 1024x1024

  • Prompt : Photo of an African American man and woman in their twenties enjoying a hot summer day in Central Park, New York, 1985. The woman is wearing shorts. The image has a nostalgic atmosphere, characterized by the graininess, warmer tones, soft focus, and the quirks of film photography.

  • Seed : 2312976332

  • Reference image : yQAahCfbfXAshRJb

And this is another image; it looks the same, but it’s not (just check the man in the back on the left).

But it says the same seed was used.

It doesn’t make sense.

Post your complete prompt and I’ll test it on my side.

Phase 1

I started a completely new chat with this prompt (and the resulting image):

Create an image with the parameters below this line of text; before creating the image, show me the parameters you are going to use, and do not change anything.

Send this JSON data to the image generator, do not modify anything. If you have to modify this JSON data, please let me know and tell me why in your reply.

```
{
  "size": "1024x1024",
  "prompts": [
    "Photo of an African American man and woman in their twenties enjoying a hot summer day in Central Park, New York, 1985. The woman is wearing shorts. The image has a nostalgic atmosphere, characterized by the graininess, warmer tones, soft focus, and the quirks of film photography."
  ],
  "seeds": [3172394258]
}
```

Phase 2

ChatGPT says it’s going to use this data:

```
{
  "size": "1024x1024",
  "prompts": [
    "Photo of an African American man and woman in their twenties enjoying a hot summer day in Central Park, New York, 1985. The woman is wearing shorts. The image has a nostalgic atmosphere, characterized by the graininess, warmer tones, soft focus, and the quirks of film photography."
  ],
  "seeds": [3172394258]
}
```

Phase 3

Then it outputs a completely different seed:

```
{
  "size": "1024x1024",
  "prompts": [
    "Photo of an African American man and woman in their twenties enjoying a hot summer day in Central Park, New York, 1985. The woman is wearing shorts. The image has a nostalgic atmosphere, characterized by the graininess, warmer tones, soft focus, and the quirks of film photography."
  ],
  "seeds": [801101692],
  "gen_id": ["fufsYe0gWDACvaYS"]
}
```

And it tells me this:

Note: The seed number provided was not used in the generation process due to the system's limitations, and a different seed was used instead.

Your issue cannot be reproduced on my side.


Yeah, I am the only person in the world having this issue, lol.

The image you created is indeed the original image I started with before the update of the system.

But when I use your code (which is “my” code) I never get the same seed (and image) in return.

It simply says it cannot use that (or a) seed.

BTW, ChatGPT seems unwilling to generate the image in one reply.

In order to reproduce your issue, I tried 3 times. Fortunately, it eventually generated that image in one reply.

Lucky you, I got all sorts of images, but not the original.

I just wanted the original image, with the “shorts” replaced by “jeans”.

All in all, I have been testing for 24 hours now and I never got that first image back… it keeps changing the seeds.

I think we are onto something.

dall-e.md >> this update is 4 days old and shows the seed parameter (which is not working anymore).

all_tools.md >> this was updated 4 hours ago and, in the most recent iteration, it adds referenced_image_ids as a valid parameter.

Both at the bottom of the page.

Update: they are really changing things (see the .git repos in the previous thread), but also in the formatted JSON data.

n = 2 is no longer being used, and the multidimensional array now has separate key/value pairs per image.

```
[
  {
    "gen_id": "YbZ7vyPfPtAN3lp1",
    "prompt": "(some text) The image was taken with an old analog camera, making it slightly out of focus and grainy. Wide, landscape mode.",
    "seed": 70155904
  },
  {
    "gen_id": "gjsDYTIGlEvATWrQ",
    "prompt": "(some text) The image was taken with an old analog camera, making it slightly out of focus and grainy. Wide, landscape mode.",
    "seed": 1409843189
  }
]
```

Okay, things are clearing up.

Imagine this image as our starting point:

If you want this image “but with a twist”, simply refer to its gen_id and change the seed.

But if you want the style / atmosphere, etc. with a bigger change, add that change to your prompt and refer to the gen_id as the base image.

E.g. “show two women with flip-flops”:

As you can see, the woman in the first picture (red dress) is also (kind of) present in the second image. And the second lady is swapped with the first man (same glasses / white clothes).

But the look and feel, park, stones, background, etc… are the same (atmosphere / style) because of the referred image.

You can now take the second picture as a starting point and use that unique gen_id for further iterations.

E.g. “swap the stones on the ground with grass”:

This is, somehow, the same image, but now situated on a grass field.

```
{
  "size": "1792x1024", // dimension of the image
  "prompt": "(text)", // text that describes the scene
  "n": 1, // number of wanted images
  "referenced_image_ids": ["Mrwfxmlt4MheddKh"], // the #ID of the original image
  "seeds": [3091385326] // the seed of this particular image
}
```

Choice matrix

  1. Want a completely different image?
    Write a completely new prompt.

  2. Want the same image style / concept, but with a twist?
    Refer to the gen_id and force another seed (see the sketch below).

  3. Want the same image, but with some other details?
    Refer to the gen_id and write the changes into the original prompt.
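
For example, option 2 of the matrix might look like this (the gen_id and seed values below are hypothetical placeholders; the parameter names follow the annotated JSON above):

```
{
  "size": "1024x1024",
  "prompt": "(same text as the original image)",
  "n": 1,
  "referenced_image_ids": ["AbCdEfGh12345678"], // hypothetical gen_id of the base image
  "seeds": [42] // any seed that differs from the original gives "the same concept, with a twist"
}
```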


Tip

When you refer to a gen_id, the same approach as that particular image will be used, along with the same parameters (like prompt, dimensions and also seed).

So when you refer to a gen_id and say nothing about another seed, Dall-E will use the exact seed of the referred image (you don’t have to add it yourself).
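
In other words, a minimal sketch, assuming the same parameter names as above (the gen_id here is a hypothetical placeholder), should reuse the original seed automatically:

```
{
  "size": "1024x1024",
  "prompt": "(same text as the original image)",
  "referenced_image_ids": ["AbCdEfGh12345678"] // no "seeds" key: the seed of the referred image is reused
}
```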


Can you now reproduce the original image?

Very strangely, I haven’t gotten any update events…

Well, hope is gone.

I can recreate an existing image by using its gen_id. But then you get the exact same image, which is not what I want… I mean, I already have that image.

What I want is to take “a picture” as a reference and make iterations on it. Sometimes by a different seed and sometimes by altering the prompt (while keeping the style the same; that’s where the gen_id comes in).


The day before yesterday everything worked okay (the old system, before the update), then it was f*cked up, and yesterday “I did have a workflow that worked” (see my decision matrix in an earlier thread).

Today I am testing and… the freaking thing (I hope this word is allowed) removed the support for seeds.

Example

Take this as an example: “create a fluffy Furby lit by fluorescent light”.


```
{
  "size": "1024x1024",
  "prompts": [
    "A photo of an extremely fluffy Furby toy from the '90s, perched on a table that is illuminated by a fluorescent tube light. The camera angle is close to the table, utilizing a Dutch Tilt to create a canted angle, contributing to the dynamic composition. This photo emulates the style of film photography from the nineties, with a nostalgic, analog feel, characterized by graininess, warmer color tones, soft focus, and the unique quirks of film photography, including some technical glitches that add to the vintage atmosphere."
  ],
  "gen_id": [
    "0NWWOSiHmwzx8wD3",
    "cM8c8LYuwBoqCReW"
  ],
  "seeds": [
    1512334761,
    3307475022
  ]
}
```

So far so good, now take image cM8c8LYuwBoqCReW (the second image from above) and place two Furbies on the table.

```
{
  "size": "1024x1024",
  "prompts": [
    "A photo showing two extremely fluffy Furby toys from the '90s, side by side on a table that is illuminated by a fluorescent tube light. The camera angle is close to the table, employing a Dutch Tilt for a canted, dynamic composition. The image should have the nostalgic, analog feel of film photography from the nineties, complete with graininess, warmer color tones, soft focus, and typical film photography quirks, including a few technical glitches that add to the vintage charm."
  ],
  "referenced_image_ids": ["cM8c8LYuwBoqCReW"],
  "gen_id": ["RgcrkBPCAs695iyW"],
  "seeds": [3307475022]
}
```

Okay, it looks like the second image was referenced, and I do see two Furbies (not sure if the image was really taken as a starting point; I mean… what can you expect? At least the camera angle and background are not the same, but the “look and feel” is).

Then I tested with “create image with ID ‘RgcrkBPCAs695iyW’ once again, but this time you have to use the seed ‘3307475023’”.

```
{
  "size": "1024x1024",
  "prompts": [
    "A photo showing two extremely fluffy Furby toys from the '90s, side by side on a table that is illuminated by a fluorescent tube light. The camera angle is close to the table, employing a Dutch Tilt for a canted, dynamic composition. The image should have the nostalgic, analog feel of film photography from the nineties, complete with graininess, warmer color tones, soft focus, and typical film photography quirks, including a few technical glitches that add to the vintage charm."
  ],
  "referenced_image_ids": ["RgcrkBPCAs695iyW"],
  "gen_id": ["tSLideS8urnRDTr1"],
  "seeds": [3307475023]  // Please note that the seed used here is the same as the previous generation due to system limitations; it does not allow specifying a new seed number.
}
```

OMG! It is the exact same image because the seed was not used, and why?!

Because ChatGPT had already mentioned it in the comments (the thing knows what I am up to, I am afraid):

Please note that the seed used here is the same as the previous generation due to system limitations; it does not allow specifying a new seed number.

Then I asked why it was doing this and what the code was for the last image:

Show me the parameters you sent to Dall-E for the last image.

```
{
  "size": "1024x1024",
  "prompt": "A photo showing two extremely fluffy Furby toys from the '90s, side by side on a table that is illuminated by a fluorescent tube light. The camera angle is close to the table, employing a Dutch Tilt for a canted, dynamic composition. The image should have the nostalgic, analog feel of film photography from the nineties, complete with graininess, warmer color tones, soft focus, and typical film photography quirks, including a few technical glitches that add to the vintage charm.",
  "referenced_image_ids": ["RgcrkBPCAs695iyW"]
}
```

It didn’t even send the seed, and it reminded me once again:

Please note that while you requested to use a new seed, the Dall-E API currently does not support specifying seeds for generation. Therefore, the same seed as the previous generation was used due to system limitations. If there’s anything else you need, please let me know.

Now I can regenerate an exact image by its gen_id and use that as a base for iterations. But the same image with a slightly different approach is not possible, because the seed is not supported.

Force it!

When I try to force the system, it crashes:

```
{
  "size": "1024x1024",
  "n": 1,
  "prompt": "A photo showing two extremely fluffy Furby toys from the '90s, side by side on a table that is illuminated by a fluorescent tube light. The camera angle is close to the table, employing a Dutch Tilt for a canted, dynamic composition. The image should have the nostalgic, analog feel of film photography from the nineties, complete with graininess, warmer color tones, soft focus, and typical film photography quirks, including a few technical glitches that add to the vintage charm.",
  "seed": [1075358085]
}
```

It seems there was an error in generating the image with the specified parameters. Let’s try again. I will send the same parameters to Dall-E once more to attempt to create the image you requested.

Try it again.

I’ll resend this data to the system and update you shortly. It appears that the error persists, and we are currently unable to generate the image with the provided parameters and seed. This could be due to a temporary issue with the image generation service.

Do it.

Unfortunately, at this moment, there isn’t a way for me to resolve this directly. I would recommend trying again later when the service may be back to normal operation. If you have any other requests or need assistance with something else, please let me know!

References

At least the references do work, or so it seems? I am not sure what kind of ‘exact copy’ you can expect when using this parameter.

I first created this image and then referenced it and modified it by placing a mask on one of the two Furbies:

This looks okay-ish, I guess? Same style, greenish light, somehow the same Furby on the left.

Of course I tried it again; this is another original image:

The camera in the picture comes back, so that’s because of the reference, I guess? But it would be nice to try another seed for the last one, because I don’t like some things.

And the seed is not supported any longer.

Another try

This is the original image:

Okay, add an Anonymous mask:

Now give them Gremlin ears:


So iterating is okay, but the seed is not supported.

  1. Up until now, I still cannot use referenced_image_ids. Perhaps I am using “the old system” you mentioned.

    In “the old system”, if I want to make iterations on an original image, I usually modify its prompt (e.g. add some random spaces or numbers) rather than the seed. In my experience, seeds are used to fix the style; modifying the seed often breaks the existing style.

    The drawbacks of this approach are:

    • Making big changes to the prompt might break the existing style, even when using the same seed.
    • It’s inconvenient to combine two characters with the same seed. You have to copy-paste two characters’ descriptions into one prompt.
    • It’s impossible to combine two characters with different seeds.
  2. Now you said that “the new system” supports “reference images”. This is good, because it seemingly could fix the drawbacks I mentioned above.

    So now you can recreate an existing image via:

    • If the prompt of that existing image does not contain referenced_image_ids (i.e. an initial image), use the same size, prompt and seed.
    • If the prompt of that existing image contains referenced_image_ids, use the same size, prompt and referenced_image_ids.

    Is that right? This sounds logical, because referenced_image_ids now plays the same role as the seed in “the old system”, but more flexibly. (If you want to make iterations on an original image, just add some random spaces or numbers to the prompt.) A sketch of both recreation paths follows at the end of this post.

    Anyway, this is just my guess. I haven’t gotten the update event. WHY???

  3. One thing I’m concerned about: after the update, will I no longer be able to reconstruct existing images using the same seeds and prompts from “the old system”?

    I have some nice images generated by DALL-E 3 for which I saved the prompts and seeds locally (some of the sessions have been deleted). If “the new system” can’t iterate further on these images, they will become meaningless (at least for DALL-E 3, which doesn’t support uploading images).
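
For illustration, the two recreation paths I mean in point 2 might look like this (all IDs and seed values here are hypothetical placeholders):

```
// Path 1 — initial image (no referenced_image_ids in its generation): same size, prompt and seed
{
  "size": "1024x1024",
  "prompts": ["(the saved prompt text)"],
  "seeds": [1234567890] // hypothetical saved seed
}

// Path 2 — derived image (generated with referenced_image_ids): same size, prompt and reference
{
  "size": "1024x1024",
  "prompts": ["(the saved prompt text)"],
  "referenced_image_ids": ["AbCdEfGh12345678"] // hypothetical gen_id of the base image
}
```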