DALL-E & ChatGPT: A Trick to Recreating Specific Images Using Seeds

Welcome to our community! It’s important to note that the DALL·E 2 API does not support including a ‘seed’ in the POST request. OpenAI has recently added DALL·E 3 to ChatGPT, and there people are exploring a JSON format for asking ChatGPT to reproduce images with a seed value.

The seed value controls how similar a regenerated image is to the original.

Prompt it like this:

Create an image with the parameters below this line of text, before creating the image show me the parameters you are going to use and do not change anything.
Send this JSON data to the image generator, do not modify anything. If you have to modify this JSON data, please let me know and tell me why in your reply.
{
  "size": "1024x1024",
  "prompts": ["A Red Eagle."],
  "seeds": [3172394258]
}
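If you want to generate that block programmatically rather than typing it by hand, a minimal sketch follows. Note that the `prompts`/`seeds` keys are conventions interpreted by ChatGPT’s DALL·E tool as reported in this thread, not documented API parameters, and the helper name is mine:

```python
import json

def build_dalle_request(prompt, seed, size="1024x1024"):
    # Assemble the JSON block pasted into ChatGPT; keys mirror the
    # format reported in this thread (interpreted by ChatGPT, not the API).
    return json.dumps(
        {"size": size, "prompts": [prompt], "seeds": [seed]},
        indent=2,
    )
```

For example, `build_dalle_request("A Red Eagle.", 3172394258)` reproduces the block above.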

After the update, this is not working anymore.

It now uses reference IDs and seems to ignore any used seed.

See this thread:


The API doesn’t accept those parameters. It’s a trick specific to the version at https://chat.openai.com/

This morning, I received the update and successfully tested the PDF upload. Although I wanted to conduct more tests, I was short on time.

Strangely, by the evening, my panel had reverted to the prior version without the update, so I can’t continue testing right now.

I’ve experienced this previously. There was an update that let me access websites through ChatGPT, but it lasted only a day.

You may have seen a live test of something planned for the upcoming OpenAI Dev Day.

Yeah, both the performance and the functionality of the site are going up and down like a broken rollercoaster.

I can’t even use seeds any longer since the ‘update’.

The (Android) app crashes when you open an earlier chat “done with the new model” (because the app isn’t updated) from the history panel.

So maybe that’s why they reverted it: it’s buggy as hell, and they probably received a lot of crash reports via Google Firebase Crashlytics (a logging system for this type of crash).

Thanks! It works on the web, but an API call with the same prompt (shortened, since API prompts must be under 1,000 characters) doesn’t respect the seed when rerun.
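For reference, a minimal sketch of enforcing that length limit before calling the API. The helper name is mine; the Images endpoint itself exposes no seed parameter, so any “seed” you embed lives only in the prompt text:

```python
def prepare_prompt(prompt: str, limit: int = 1000) -> str:
    """Trim whitespace and enforce the API's prompt-length limit."""
    prompt = prompt.strip()
    if len(prompt) >= limit:
        raise ValueError(f"prompt is {len(prompt)} chars; must be under {limit}")
    return prompt

# The request itself would then look roughly like this (requires the
# openai package and an API key; not runnable as-is):
# import openai
# response = openai.Image.create(prompt=prepare_prompt(my_prompt),
#                                n=1, size="1024x1024")
```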

Hello! The ‘seeds’ will work, somewhat, but today I found a formula that has finally worked for me. Basically, I asked ChatGPT to tell me WHY DALL·E was having issues giving me consistent images, even with the same prompts. I then had it teach me the best way for it to remember the context/info/style/etc. in a new chat on another day. This is what it instructed me to do. So far so good; my only complaint is that I have to do it one chat at a time because of “content guidelines” (DAN/jailbreak prompts only work for the first generation in the chat for me anyway before I get booted).
Here is the format for context in each prompt. This is how I am getting consistency for mine.

1. Color Palette:

  • Primary Colors: Harmonious shades of pink and green.
  • Pink: A soft, pastel pink that brings out the feminine and elegant vibe of the brand.
  • Green: A muted, pastel green that resonates with the cannabis theme, signifying growth and freshness.

2. Elements & Icons:

  • Cannabis Leaves: Elegant and refined cannabis leaf designs that are subtly incorporated into various elements without being overpowering.
  • Digital Marketing Symbols: Modern icons like laptops, smartphones, and social media logos (Instagram, Facebook, Twitter, Pinterest) that tie into the digital nature of the brand.

3. Artistic Touches:

  • Background Designs: Use delicate and sophisticated artistic patterns in the backgrounds. These can be inspired by natural elements, florals, or abstract designs that match the brand’s elegance.
  • Borders & Dividers: Soft, flowing lines with occasional hints of the cannabis leaf motifs, ensuring a cohesive and integrated design.

4. Typography:

  • Stick to fonts that are sleek, modern, and easy to read. A combination of a script font for headings (to bring out the brand’s elegance) and a sans-serif font for body text works well.

5. Imagery:

  • Whenever using photos or images, opt for ones that have a pastel or muted color scheme to match the overall aesthetic. Images should evoke feelings of creativity, elegance, and professionalism.

6. Layout & Composition:

  • Balance: Ensure a balanced layout with a mix of images, text, and white space.
  • Hierarchy: Important elements, like headlines or call-to-action buttons, should be prominently placed.
  • Consistency: Maintain consistency in the placement of logos, icons, and other brand elements across different assets.

EXAMPLE PROMPT: "Square vector design of a featured article section inspired by modern digital marketing aesthetics, intertwined with harmonious pinks and greens. The layout showcases a placeholder for an image, a headline in pink, and a short introduction text. Interspersed with the buttons are elegant cannabis leaf designs and digital marketing symbols, maintaining the brand’s elegant and girly aesthetic. The section is adorned with elements that signify creativity and branding, capturing the essence of the cannabis industry’s vibrancy and legacy."

I am posting the full case study today; stand by if you want to read it in detail and see the images generated. It’ll be up on my Behance later today (can’t add a link, so you’ve gotta find me there: @theherbalcreative). Anyway, y’all are amazing and so smart! Love reading all of this! I will try to add an image here in case you never make it to see the full project.


Recreating a specific image using a seed may not be 100% reliable.

The following two images were generated in two different sessions, using the same size, prompt, and seed.

{
  "size": "1024x1024",
  "prompts": [
    "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder. "
  ],
  "seeds": [3075182356]
}

Comparing them at the pixel level, you will find they are slightly different. Perhaps the DALL·E 3 model changes over time (e.g. online learning).
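A pixel-level comparison like this can be sketched as follows, assuming the two generations have already been decoded into flat sequences of (R, G, B) tuples (e.g. with Pillow’s `Image.getdata()`); the function name is mine:

```python
def pixel_diff_stats(pixels_a, pixels_b):
    """Compare two equal-length sequences of (R, G, B) tuples.
    Returns (number_of_differing_pixels, largest_per_channel_delta)."""
    if len(pixels_a) != len(pixels_b):
        raise ValueError("images must have the same dimensions")
    differing = 0
    max_delta = 0
    for pa, pb in zip(pixels_a, pixels_b):
        deltas = [abs(ca - cb) for ca, cb in zip(pa, pb)]
        if any(deltas):
            differing += 1
            max_delta = max(max_delta, max(deltas))
    return differing, max_delta
```

A result of `(0, 0)` means the two generations are pixel-identical; anything else confirms the drift described above.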


Edit (2023/11/4):

There is no way to recreate the image using this approach.

For details, see After upgrade seeds doesn't work, "generation ID" is introduced? - #61 by cassiellm


Seeding doesn’t seem to be working correctly today or yesterday. Using a seed and prompt and regenerating 3 times, I get 3 different results! Is there some sort of bug or technical issue with seeding lately, or with images in general? It consistently deviates from clear prompts, even when you tell it not to deviate.

After the update, you cannot set the seed any longer.

Dall-E will use a seed and show it in the output (if you ask for it), but you cannot set it.

You can refer to the gen_id (generation ID) with the new parameter referenced_images_ids.

Seed it yourself

Also, you can postfix your prompt with a random string of text, which (somehow) acts as a seed.

Tomorrow I am going to program a simple application myself in PHP to generate “prompt seeds” for this purpose.

If OpenAI is taking away my toys, I will develop them myself.
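The “prompt seed” idea described above is simple enough to sketch in a few lines (shown here in Python rather than PHP; the function names are mine): generate a short random alphanumeric string and append it to the prompt, where it acts as grammatical noise.

```python
import random
import string

def make_prompt_seed(length=12, rng=None):
    # Generate a random alphanumeric string to use as grammatical noise.
    rng = rng or random.Random()
    alphabet = string.ascii_letters + string.digits
    return "".join(rng.choice(alphabet) for _ in range(length))

def seed_prompt(prompt, seed):
    # Append the noise, clearly separated from the real prompt text.
    return f"{prompt} {seed}"
```

Remember to also tell ChatGPT not to change anything in the prompt, or it tends to strip the nonsense word before sending it to Dall-E.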


You are talking a foreign language with all this command talk. I am a photographer and graphic designer! I don’t have access to APIs or anything of that kind. I just use the web-based UI for ChatGPT.

When I found a base image, I would seed it by assigning a number, i.e. Seed 1 (copying the base image’s prompt given by Dall-E). If I got a result and then regenerated it 3 times, it would show me 3 identical images. If I try that same process today, it doesn’t give three identical images. That’s my point.


I also use only the web interface (ChatGPT Plus 4 / Dall-E 3).


Talk to me

But you can “format” your prompts like code, or simply say, in human-readable language, what you want.

Like “Show me the generation ID of the last image”.

And then: “Okay, use {my_generation_id} as the reference ID for a new image, but change ‘this’ to ‘that’”.

That way, Dall-E will take the “referred image” as the starting point and change this to that.

You can also “use the exact ID and prompt” but add “grammatical noise” at the end, like “A blue horse flying in deep space Fds3q807ASFDJAS”.

That last word is nonsense, and so acts as a seed (numeric / grammatical noise) for Dall-E.

You have to prompt “don’t change anything in the prompt”, because it will remove it otherwise.


To the rescue

I have written (with others) an in-depth thread about this new (and unwanted) situation:

After upgrade seeds doesn’t work, “generation ID” is introduced? - Prompting - OpenAI Developer Forum

So you are saying that using that will give me an identical image with the new change, like a seeded image would? I can get by with instructions like this once I know them.

But as someone who does not work at OpenAI at all, this is just making things complicated and not for the regular user. Great if, like you, you know codes and commands, but not for someone who doesn’t code. Unless you are on here daily checking the tips and tricks, it’s impossible to stay on top of the guesswork.

The seeding solution, as of Monday or whenever it was last working, is no longer ever going to work? I mean, how are you supposed to keep on top of these subtle things and make productive suggestions if the goalposts keep moving? I’m just super frustrated today, because I’ve been trying for hours to get a base image starting point and then tweak it with subtle changes (the old way), but it obviously hasn’t been working for the past 48 hours or more. So you are saying the generation ID method works the same way? Or are you saying something else now? I can’t keep up, lol.

Well, I am not working at OpenAI, and I have been using this system for just a week (or even less).

And it’s not like Photoshop (that precise).


But in short, you can (within the same session) take a picture as a starting point.

Ask for its generation ID, and for the next image set that as the reference image ID.

Then you can, in human language, alter the first image by saying what you want to be different.

The original image style, approach, composition, seed, design, atmosphere, etc… is captured in the generation ID and will be reused, as much as possible.

Example I

image

This was a base image; I wanted two women instead of a man and a woman:

image

The new image was based on the first one, but I wanted them to be standing on grass:

image

Example II

image

This was the base image, but I wanted two Furbies;

image

The setting / Furby look and feel is inherited, but I wanted one of them to wear a mask:

image

Again, it is “sort of” the same image, and now I want to see Gremlin ears;

image

How and where do I ask for the generation ID? In the ChatGPT/Dall-E message prompt box?

If I ask for the generation ID, it says:

“I’m sorry for the oversight, but I can’t directly provide the gen ID of the images. However, if you wish to make modifications or further requests related to the first image, please let me know, and I’ll ensure it’s addressed appropriately.”

Then you have neither seeds nor generation IDs. Try this prompt:

Give me the generation ID of the last image and any referenced image ID when used.

The generation ID for the last image created is hQhLJ1tUAzFlkjy3, and the referenced image ID used for that creation is gQUUTHD6gUlDoMKM.


These are my available models, but I keep losing and regaining them several times per day.

It’s a wild ride here.