DALL-E & ChatGPT: A Trick to Recreating Specific Images Using Seeds

It’s amusing to see how many tweets are now being posted about seeds. Yet, not a single one mentions this original post. Some people even claim it’s a new feature of Dall-E.

Ah, the internet age.

2 Likes

2 weeks ago I tried the seed thing, but I think ChatGPT + DALL-E 3 needs to learn more :joy:

Thanks for the post and the amazing prompt idea, Natanael!
I tried this idea myself and posted it on the web. I also referenced your original post on this forum so you get the credit you deserve.

By the way, I couldn’t get seeds to work using this template:

Create an image of X. Use seed 4269.

Seeds only seem to work when using the specific API format you proposed. Correct?
Also, asking GPT-V to give you the seed for an image and then recreate it from that seed doesn’t work perfectly.

So it seems you did a lot of experimenting to arrive at the specific approach that works. :heart: Thanks!

1 Like

Thanks man. Forums like this one provided by OpenAI, as well as platforms like Reddit, frequently serve as the breeding grounds for new discoveries. On social platforms, influencers often disseminate these ideas, presenting them as if they were their own. Consequently, their followers end up relying solely on the influencer. That’s my primary concern.

I think the API format is beneficial because it’s what ChatGPT would naturally generate on its own. However, what seems even more crucial is to ask ChatGPT to review the guidelines and to respect the directive of not changing the prompt, provided the guidelines are met, before passing it on to DALL-E.

When ChatGPT is confident that it’s not breaching any rules, its secondary objective becomes user satisfaction. Perhaps this is why the approach is effective.

Using seeds brings consistency to similar images. To achieve this, one can craft a near-identical prompt and use the same seed, making DALL-E’s image creation predictable. Since DALL-E produces the image, this approach works. On the other hand, when an image is fed into GPT-V, it’s transformed into a numerical format, which the model later deciphers. This isn’t the same as having all the components for an exact reproduction.

Nevertheless, if the aim is merely to use the image as a reference, OpenAI’s new update allows users to seamlessly switch between GPT-V and DALL-E within the same interface. But I haven’t had the opportunity to test it out yet.

1 Like

Gem right here: :point_down:

Thanks for the great discovery, Natanael!

1 Like

No kidding. I have seen this structure everywhere as “the” way to prompt ChatGPT.

Congratulations on creating it though. We know the truth.

Can I just say that it’s hilarious that they release DALL-E for ChatGPT and people are resorting to pseudo-objects for control :rofl: If only there was some sort of way to send a direct object to a server… hmm… Some sort of interface with a single, non-opinionated purpose… hmmm…

1 Like

What endpoint are you using? I tried https://api.openai.com/v1/images/generations, but it doesn’t accept these parameters (seeds and prompts).

Welcome to our community! It’s important to note that the DALL-E 2 API does not support including a ‘seed’ in the POST request. OpenAI has recently introduced DALL-E 3 in ChatGPT, where people are exploring a JSON format for asking ChatGPT to reproduce images with a seed value.

The seed value controls the randomness of generation: reusing the same seed with the same prompt produces a similar image.

A prompt like this:

Create an image with the parameters below this line of text. Before creating the image, show me the parameters you are going to use and do not change anything.
Send this JSON data to the image generator, do not modify anything. If you have to modify this JSON data, please let me know and tell me why in your reply.
{
  "size": "1024x1024",
  "prompts": ["A Red Eagle."],
  "seeds": [3172394258]
}
1 Like

After the update this is not working anymore.

It now uses reference IDs and seems to ignore any used seed.

See this thread:

1 Like

The API doesn’t accept those parameters. It’s a trick specific to the version at https://chat.openai.com/
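To make the distinction concrete, here is a sketch of a request body for the public images endpoint. This assumes the DALL-E 2 API of that period, which documented `prompt`, `n`, `size`, and `response_format`; there is no `seed` field, which is why the trick only works inside the chat.openai.com interface:

```python
import json

# Fields documented for POST https://api.openai.com/v1/images/generations
# at the time of this thread (DALL-E 2 API). "seed" and "prompts" are not
# among them, which is why requests carrying them are rejected.
ALLOWED_FIELDS = {"prompt", "n", "size", "response_format"}

def build_generation_body(prompt: str, n: int = 1, size: str = "1024x1024") -> str:
    """Build a request body using only fields the public endpoint accepts."""
    payload = {"prompt": prompt, "n": n, "size": size}
    unknown = set(payload) - ALLOWED_FIELDS
    if unknown:
        raise ValueError(f"endpoint would reject unknown fields: {unknown}")
    return json.dumps(payload)

body = build_generation_body("A Red Eagle.")
```

Sending this body with an `Authorization: Bearer <key>` header would be a valid call; adding `"seed"` or `"seeds"` to the payload is what triggers the rejection people report.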

This morning, I received the update and successfully tested the PDF upload. Although I wanted to conduct more tests, I was short on time.

Strangely, by the evening, my panel had reverted to the prior version without the update, so I can’t continue testing right now.

I’ve experienced this previously. There was an update that let me access websites using ChatGPT, but it held for merely a day.

Could be you saw a live test of something being planned for the upcoming OpenAI Dev-Day.

Yeah, both the performance and the functionality of the site are going up and down like a broken rollercoaster.

I can’t even use seeds any longer since the ‘update’.

The (Android) app crashes when you open an earlier chat “done with the new model” (because the app isn’t updated) from the history panel.

So maybe that’s why they reverted it: it’s buggy as hell, and they did receive a lot of crash reports via Google Firebase Crashlytics (a logging system for this type of crash).

Thanks! It works on the web, but an API call with the same prompt (shortened, as the prompt has to be under 1000 characters) doesn’t respect the seed when rerun.

Hello! The ‘seeds’ will work, somewhat, but I found a formula today that has finally worked for me. Basically, I asked ChatGPT to tell me WHY DALL-E was having issues giving me consistent images, even with the same prompts. I then had it teach me the best way for it to remember the context/info/style/etc. in a new chat on another day. This is what it instructed me to do. So far so good; my only complaint is that I have to do it one chat at a time because of “content guidelines” (DAN/jailbreak prompts only work for the first generation in the chat for me anyway before I get booted).
Here is the format for context in each prompt. This is how I am getting consistency for mine.

1. Color Palette:

  • Primary Colors: Harmonious shades of pink and green.
  • Pink: A soft, pastel pink that brings out the feminine and elegant vibe of the brand.
  • Green: A muted, pastel green that resonates with the cannabis theme, signifying growth and freshness.

2. Elements & Icons:

  • Cannabis Leaves: Elegant and refined cannabis leaf designs that are subtly incorporated into various elements without being overpowering.
  • Digital Marketing Symbols: Modern icons like laptops, smartphones, and social media logos (Instagram, Facebook, Twitter, Pinterest) that tie into the digital nature of the brand.

3. Artistic Touches:

  • Background Designs: Use delicate and sophisticated artistic patterns in the backgrounds. These can be inspired by natural elements, florals, or abstract designs that match the brand’s elegance.
  • Borders & Dividers: Soft, flowing lines with occasional hints of the cannabis leaf motifs, ensuring a cohesive and integrated design.

4. Typography:

  • Stick to fonts that are sleek, modern, and easy to read. A combination of a script font for headings (to bring out the brand’s elegance) and a sans-serif font for body text works well.

5. Imagery:

  • Whenever using photos or images, opt for ones that have a pastel or muted color scheme to match the overall aesthetic. Images should evoke feelings of creativity, elegance, and professionalism.

6. Layout & Composition:

  • Balance: Ensure a balanced layout with a mix of images, text, and white space.
  • Hierarchy: Important elements, like headlines or call-to-action buttons, should be prominently placed.
  • Consistency: Maintain consistency in the placement of logos, icons, and other brand elements across different assets.

EXAMPLE PROMPT: "Square vector design of a featured article section inspired by modern digital marketing aesthetics, intertwined with harmonious pinks and greens. The layout showcases a placeholder for an image, a headline in pink, and a short introduction text. Interspersed with the buttons are elegant cannabis leaf designs and digital marketing symbols, maintaining the brand’s elegant and girly aesthetic. The section is adorned with elements that signify creativity and branding, capturing the essence of the cannabis industry’s vibrancy and legacy."

I am posting the full case study today; stand by if you want to read it in detail and see the images generated. It’ll be up on my Behance later today (can’t add a link, so you’ll have to find me there: @theherbalcreative). Anyway, y’all are amazing and so smart! Love reading all of this! I will try to add an image here in case you never make it to the full project.

1 Like

Recreating a specific image using a seed may not be 100% reliable.

The following two images were generated twice, in different sessions, using the same size, prompt, and seed.

{
  "size": "1024x1024",
  "prompts": [
    "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder. "
  ],
  "seeds": [3075182356]
}

Comparing them at the pixel level, you will find they are slightly different. Perhaps the DALL-E 3 model changes over time (e.g. online learning).
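The pixel-level comparison can be sketched in Python. This is an illustrative helper, not part of any OpenAI tooling; loading real images would typically use Pillow (`list(Image.open(path).getdata())`), which is assumed as the input format here:

```python
def pixel_diff_ratio(pixels_a, pixels_b):
    """Fraction of pixel positions that differ between two equal-size images.

    Each argument is a flat sequence of pixel values (e.g. RGB tuples),
    like what Pillow's list(Image.open(path).getdata()) returns.
    """
    if len(pixels_a) != len(pixels_b):
        raise ValueError("images must have the same dimensions")
    differing = sum(1 for a, b in zip(pixels_a, pixels_b) if a != b)
    return differing / len(pixels_a)

# Two tiny 2x2 "images": identical except for one pixel.
img1 = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (10, 10, 10)]
img2 = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (11, 10, 10)]
print(pixel_diff_ratio(img1, img2))  # 0.25
```

A ratio of exactly 0.0 would indicate a bit-identical reproduction; the two generations above would score above zero.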


Edit (2023/11/4):

There is no way to recreate the image using this approach.

For details, see After upgrade seeds doesn't work, "generation ID" is introduced? - #61 by cassiellm

1 Like

Seeding doesn’t seem to be working correctly today or yesterday. Using a seed and prompt and regenerating 3 times, I get 3 different results!! Is there some sort of bug or technical issue with seeding lately, or with images in general? It consistently deviates from clear prompts, even when you tell it not to deviate.

After the update, you cannot set the seed any longer.

DALL-E will use a seed and show it in the output (if you ask for it), but you cannot set it.

You can refer to the gen_id (generation ID) with the new parameter referenced_images_ids.

Seed it yourself

Also you can postfix your prompt with a random string of text, which acts (somehow) as a seed.

Tomorrow I am going to program a simple application myself in PHP to generate “prompt seeds” for this purpose.

If OpenAI is taking away my toys, I will develop them myself.
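A sketch of the “prompt seed” idea, in Python rather than the PHP the poster plans to use: derive a reproducible, random-looking string from an integer and append it to the prompt. Whether DALL-E actually treats the suffix as a seed is the poster’s observation, not documented behavior:

```python
import random
import string

def prompt_seed(seed: int, length: int = 12) -> str:
    """Derive a reproducible random-looking suffix from an integer seed."""
    rng = random.Random(seed)
    alphabet = string.ascii_lowercase + string.digits
    return "".join(rng.choice(alphabet) for _ in range(length))

# Appending the same suffix to the same prompt is the "prompt seed" trick;
# a different integer gives a different suffix and hence a different image.
prompt = "A red eagle over snowy mountains."
suffixed = f"{prompt} [{prompt_seed(3172394258)}]"
```

The same integer always yields the same suffix, so the full prompt string can be regenerated later without storing it.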

1 Like

You are talking a foreign language with all this commands talk. I am a photographer and graphic designer! I don’t have access to APIs or anything of that kind. I just use the web-based UI for ChatGPT.

When I would find a base image, I would seed it by assigning a number, i.e. Seed 1 (copying the base image’s prompt given by DALL-E). If I got a result and then regenerated it 3 times, it would show me 3 identical images. If I try that same process today, it doesn’t give three identical images. That’s my point.

1 Like