DALL-E & ChatGPT: A Trick to Recreating Specific Images Using Seeds

I also purely use the web interface (ChatGPT Plus / DALL-E 3).


Talk to me

But you can “format” your prompts like code, or simply say (in human-readable language) what you want.

Like “Show me the generation ID of the last image”.

And then: “Okay, use {my_generation_id} as the reference ID for a new image, but change this to that”.

That way, Dall-E takes the referenced image as the starting point and changes this to that.
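Putting those two prompts together, a session might look like this (the curly-braced ID is a placeholder, not a real generation ID, and the exact wording of ChatGPT's reply will vary):

```
You:     Show me the generation ID of the last image.
ChatGPT: The generation ID of the last image is {my_generation_id}.
You:     Use {my_generation_id} as the reference ID for a new image,
         but change the man into a woman.
```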

You can also “use the exact ID and prompt” but add “grammatical noise” at the end, like “A blue horse flying in deep space Fds3q807ASFDJAS”.

That last word is nonsense, and so acts as a seed (numeric / grammatical noise) for Dall-E.

You have to add “don’t change anything in the prompt”, because ChatGPT will remove the noise otherwise.
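Combined, the whole trick might be phrased like this (a sketch; the trailing gibberish is arbitrary noise, and the final instruction is there to keep ChatGPT from stripping it):

```
Use {my_generation_id} as the reference image ID.
Prompt: "A blue horse flying in deep space Fds3q807ASFDJAS"
Don't change anything in the prompt.
```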


To the rescue

I have written (with others) an in-depth thread about this new (and unwanted) situation:

After upgrade seeds doesn’t work, “generation ID” is introduced? - Prompting - OpenAI Developer Forum

So you are saying that using that will give me an identical image with the new change, like a seeded image would? I can get by with instructions like this once I know it.

But as someone who does not work at OpenAI at all, this is just making things complicated and not for the regular user. Great if, like you, you know codes and commands, but not for someone who doesn’t code. It’s guesswork; unless you are on here daily checking the tips and tricks, it’s impossible to stay on top of.

The seeding solution, as of Monday or whenever it was last working, is then no longer ever going to work? I mean, how are you supposed to keep on top of these subtle things and make a productive suggestion if the goalposts keep moving? I’m just super frustrated today because I’ve been trying for hours to get a base image starting point and then tweak it with subtle changes… the old way, but it obviously hasn’t been working for the past 48 hours or more. So you are saying the generation ID method does work the same way? Or are you saying something else now? I can’t keep up lol

Well, I am not working @ OpenAI, and I have been using this system for just a week (or even less).

And it’s not like Photoshop (that precise).


But in short, you can (within the same session) take a picture as a starting point.

Ask for its generation ID and, for the next image, set that as the reference image ID.

Then you can, in human language, alter the first image by saying what you want different.

The original image style, approach, composition, seed, design, atmosphere, etc… is captured in the generation ID and will be reused, as much as possible.
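In short, the loop described above looks like this (hypothetical prompts, all within the same session):

```
1. Generate a base image.
2. "Show me the generation ID of the last image."
3. "Use {generation_id} as the reference image ID,
    but change X into Y."
4. Repeat steps 2-3 with each new ID for further tweaks.
```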

Example I

[image]

This was a base image; I wanted two women instead of a man and a woman:

[image]

The new image was based on the first one, but I wanted them to be standing on grass:

[image]

Example II

[image]

This was the base image, but I wanted two Furbies:

[image]

The setting / Furby / look and feel are inherited, but I wanted one of them with a mask:

[image]

Again, it is “sort of” the same image, and now I want to see Gremlin ears:

[image]

How and where do I ask for the generation ID? In the ChatGPT/Dall-E message prompt box?

If I ask for the generation ID, it says:

“I’m sorry for the oversight, but I can’t directly provide the gen ID of the images. However, if you wish to make modifications or further requests related to the first image, please let me know, and I’ll ensure it’s addressed appropriately.”

Then you have neither seeds nor generation IDs.

Give me the generation ID of the last image and any referenced image ID when used.

The generation ID for the last image created is hQhLJ1tUAzFlkjy3, and the referenced image ID used for that creation is gQUUTHD6gUlDoMKM.


These are my available models, but I keep losing and regaining them several times per day.

It’s a wild ride here.

Is there an input to not make the women so skinny? I tried various ones and it didn’t work.


Make a reference to the saying “the Fat Lady sings”?