What is edit mode?

I only use Dall-E via the ChatGPT interface, not the API development environment.


Sorry for my inaccurate terminology.

I mean that after editing a message or clicking the “Regenerate” button, ChatGPT forms a “dialogue tree”. In these situations, ChatGPT might forget some of the context information, and thus gives you fabricated information.

We know that ChatGPT has a context-window limit: it can only remember a certain amount of past text. Therefore, if the conversation is too long, it might forget earlier messages.

The problem is: if you edit an earlier message or regenerate an earlier reply, can ChatGPT still remember the context at that moment? From my experience, it seems not, and thus it usually gives you fabricated information.

For example,

Show me the gen_id and seed of the last image you generated.

Here, can ChatGPT understand what I mean by “the last image you generated” if I am editing an earlier message?

Ideally, ChatGPT SHOULD understand the “dialogue node” context of “the last image you generated” in the “dialogue tree”.


Just reporting my current status:

Starting from today, I also have the same problem.

I can’t reproduce the previous image through seed and precise prompt.

A real example,

I:

Send this JSON data to the image generator, do not modify anything. After generating an image, show me the JSON data that the image generator returns to you.

```
{
  "size": "1024x1024",
  "prompts": [
    "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder."
  ],
  "seeds": [3075182356]
}
```

ChatGPT:

I’m sorry, but I can’t process the JSON data directly as provided. However, I can create an image for you based on the description you’ve given.

Let me generate the image for you based on the description, and I’ll show you the returned data from the image generator afterwards.


Here’s the returned data from the image generator:

```
{
  "gen_id": "oKXvBOFuKlpYCZCt",
  "prompt": "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder.",
  "seed": 3855703669
},
{
  "gen_id": "1EFfdicZM6iMEX02",
  "prompt": "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder.",
  "seed": 2368871415
}
```

Some other replies from ChatGPT:

Please note that the “prompts” field was changed to “prompt” and the “seeds” field is not used in this context. The “n” field has been added to generate 2 images as per the default policy.

The JSON format to send a prompt to DALL·E 3 through this interface is:

```
{
  "size": "1024x1024",  // Image size. Options include "1024x1024", "1792x1024" (wide), and "1024x1792" (portrait).
  "n": 2,               // Number of images to generate. Default is 2 if not specified.
  "prompt": "Your image description here",  // Detailed description of the image you want to generate.
  "referenced_image_ids": ["ImageID1", "ImageID2"] // Optional. If referencing previous images, include their IDs here.
}
```

Therefore, the most unfortunate thing happened:

I will never be able to iterate on those old images, because I can’t recreate them in ChatGPT.

Many of the sessions with those old nice images have already been deleted from my ChatGPT history. Locally, I only saved the images, seeds, and precise prompts.

Even for the sessions that weren’t deleted, due to ChatGPT’s window limitations and the issue of forgetting “dialogue nodes”, I’m not sure if ChatGPT still remembers those gen_ids.

It’s very inconvenient; every time you iterate on an original image, (1) you absolutely cannot delete the session, and (2) you have to be cautious about whether ChatGPT has forgotten the gen_id.

Welcome to the club of “upgraded” people… I’ve been there a while, and the results go up and down.

But you mean your generation history will be deleted by the system?

Then I have to export the data as soon as possible and save the most important images / prompts / etc. in an Excel sheet, or something.

My findings (after a week with the “new system”)

You can recreate an existing image in the same session by its gen_id.

But outside the session (or after some time) this is not possible. And you can never get that image by using the exact prompt and / or seed; it just doesn’t work.

At the moment I am developing a seed generator based on the image ID, which looks promising; but that’s seeds only, and within the same session.

(will update later today)

Not deleted by the system, but deleted by myself. Some conversation branches were too complex to manage, so I deleted them (but saved the images, the precise prompts, and the seeds locally).

After this update, I won’t be able to recreate them in ChatGPT anymore! E.g. I can no longer adjust them by adding a random string :sob:.

Yeah, but you can only export the data, you cannot restore the session. Without the session, you cannot iterate on an existing image.

Yeah, you must keep the session (seeds and exact prompts are now meaningless), and ChatGPT might forget the gen_ids as well.

What is the seed generator? Could you explain it further?

That’s what I am experiencing as well, for about a week now.

No seed, no joy

The “trick” used to be this;

  1. Define the exact same prompt.
  2. Refer / use the exact same seed.
  3. This will generate the exact same image.

But after the update, you cannot set / refer to the seed yourself (it’s still used, but you can’t manage it), resulting in different images.

Inside a session, you can create the exact same image now by referring to the image generation ID (gen_id) and passing it in referenced_image_ids.

So within the same session, you can do what we always did before; but you can NOT inherit images from session A in session B (because you cannot set the seed, and image IDs only work within the same session).
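Based on the JSON format ChatGPT reported above, a recreate-by-reference request would presumably look like this (using a gen_id quoted earlier in this thread as the reference; this only works inside the session that produced that ID, and the format may change as they keep updating things):

```json
{
  "size": "1024x1024",
  "n": 1,
  "prompt": "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder.",
  "referenced_image_ids": ["oKXvBOFuKlpYCZCt"]
}
```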

Maybe you can re-open an old session and refer there to the gen_id but I have to test that.

Update

I did open a session from two days ago. The image I created did have an image generation ID so I decided to try this;

Hi, I am back again after two days. What do you know about image cxBfLB4y53IidDp0?

The image with the generation ID cxBfLB4y53IidDp0 was one of the two images created in response to the initial request for a realistic photo of (…more text)

Okay, cool. I want you to create the exact same image once again, I want to recreate image cxBfLB4y53IidDp0. Do not change anything, just recreate image cxBfLB4y53IidDp0.

Result: the new image (after two days) is almost exactly the same as the original image (two days old).

So this works. There are some minor changes, but that’s on pixel level, or subtle things like “the hook on the left (original image) has a different angle than the one on the right (new image)”.

But this is zoomed in; when you see the images at 100%, the look / feel / design / composition, etc. is 99.95% the same.

Also, the above example is an image generated from an older image, which was based on another image, so it’s two steps away from the original image.

Seed generator

A thingy I am creating myself, which adds seeds to the referenced image ID so you can “seed” the same image with some other outcomes.

I will create a post later on and mention you inside this thread.

How to inherit images from session A inside session B ?

I think you mean can not inherit images from session A inside session B?

But how do you manage the images now?

I mean, usually in order to filter out nice images, we need to continuously modify the prompt or repeatedly click the Regenerate button.

In the old system, once I chose a nice image, I just needed to record the seed and prompt (or open a new session and recreate the image there), and could then delete that session (which had very complex branches).

In the new system, we should not delete sessions (that would lose everything), but sessions are very complex and hard to manage, because they contain many unnecessary images.

Thanks.

Yes, I made a typo.

But you can open an older session and re-create the image there (as long as you know the image ID).

I think that depends on the workflow, which we have to alter to match the new situation.

I personally create one image, ask for another, fiddle around and select the best.

Then I alter it in Photoshop.

Every session has about five images max (on average).

When I cannot create “the perfect” image even after five tries, I start with a brand new prompt / session.


Also I name my session titles with specific keywords, like

:camera: - :star: :star: :star: - photo, stones, '80s (1#3)

:camera: - :star: :star: - photo, stones, '60s (2#3)

:pencil2: - :star: :star: :star: :star: - illustration, puppet, graphical (1#1)

:pencil2: - :star: - illustration, puppet, line drawing (2#4)

This way I can filter by rating (which I set myself), type, subject, iteration, etc. (also because you can’t search in the web interface).


The thing is still in beta, so they can change whatever they want, and we must be as flexible as possible and change as well.

After the beta, I expect a solid system without those annoyances.


The “new” Dall-E, use seeds without seeds - Prompting - OpenAI Developer Forum

As promised.

I am creating a PHP script right now that generates seeds and creates better prompts by simply filling in some text fields.

It’s a standalone app; maybe I will publish it, but for now I am just using it for my own purposes.

It makes generating good prompts easier, since it uses a boilerplate for style, design, idea, seed, reference, etc.

I was working on children’s coloring pages within a single session and had amazing consistency. Then I read about generation IDs and tried adding one in, and the results have gotten substantially worse. It drops details and ignores instructions. Like I told it to make four separate images of coloring pages, and it put four pages into one single image. The thing is going totally Skynet on me lol


Yeah, but seeds are not references.

A reference is more like an entry in the database where all (meta)data of your image is saved (for a short time in history).

You can get consistent look / feel and characters, but your prompting has to be consistent as well (regarding seeds, generation id, referencing id, text, etc…).

After all, I don’t know if Dall-E “as is” can create the same characters at all.


Semantics & taxonomy

But when you use the right semantics and taxonomy in your prompt, you can get good results (for beta software like this).

The quality of your prompt is key to success.

I guess I have misread this thread. I thought by using the generation ID, I could refer to an existing picture and ask it to duplicate those elements.

For example:

Create a detailed children's coloring pages featuring Dainy the Dragon Princess, following the visual style of the character in image with the Generation ID - vRWxVD7dQIgNfLh5. The page should feature Dainy in a unique fantasy setting, maintaining her chibi style with a large head, expressive eyes, and small body. The scene should depict Dainy stargazing on a clear night, sitting on a crescent moon with a telescope, surrounded by stars and planets. The outlines should be bold and black for easy coloring, with no shading or gradients, designed for use with crayons or colored pencils.

The generation ID was determined by asking it for the generation ID of the “previous picture”.
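For what it’s worth, expressed in the JSON format ChatGPT reported earlier in this thread, that style-reference request might look something like the sketch below (the gen_id comes from the prompt quoted above; whether the interface still honors this exact shape is uncertain, given how often they change things):

```json
{
  "size": "1024x1024",
  "n": 1,
  "prompt": "A detailed children's coloring page featuring Dainy the Dragon Princess stargazing on a clear night, sitting on a crescent moon with a telescope, surrounded by stars and planets. Chibi style with a large head, expressive eyes, and small body. Bold black outlines, no shading or gradients.",
  "referenced_image_ids": ["vRWxVD7dQIgNfLh5"]
}
```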

Yeah, things are getting messed up because they are changing things daily now, even the multi-dimensional array types / objects.

For the sake of clarity I will show the most important topics I made for the current situation;

  1. Change of seed logic

  2. Seeds are gone, generation ID enters (this topic)

  3. Seeding without seeds


I created a simple seed generator, see this post.


You can add members to your team? Or is that API only?

But I agree not being able to seed is not a good thing.

I have read (somewhere) that they did it to “force diversity in the created images” (preventing them from looking too much the same).

Your test

I tested again with your fixed code and once again it didn’t work.

The parameter prompts was wrong (ChatGPT fixed it to prompt) and seeds are not supported;

To generate the image, I will need to adjust the JSON data to match the format that the Dall-E system requires. The “prompts” key should be singular as “prompt” since Dall-E accepts one textual description for image generation. Additionally, Dall-E does not use a “seeds” value to influence the image generation process.

I asked it once again and forced it to tell the truth about why and how;

I must clarify that the current Dall-E system does not use seeds to reproduce the exact same image from a past generation. The image generation process involves randomness and, as of my last update, does not allow the use of specific seeds to replicate results. Each request is unique, and while the same prompt will result in similar thematic imagery, it will not produce an identical image to one generated previously.

Therefore, I can use the prompt you provided to generate a new image, but it will be different from the one created with the same prompt and seed in the past. Here’s the adjusted JSON data I will send to Dall-E:

Yeah, they have changed the JSON format.

This is not a good thing. Imagine, in an art-sharing community (e.g. DALLE3 Gallery for 2023: Share Your Creations), if I like someone’s image, or if someone wants to share their creativity, what should they do?

In the old system, you only needed to share the seed and the exact prompt; in the new system, you must share the whole session link. However, sharing the session link has its own problems, such as exposing unrelated conversations or images I don’t want to share.

Of course, we can still share prompts, but they won’t have the same style. And for characters, they won’t be the same person.

Yeah, we can replicate them, but not copy them.

Maybe that’s what they want? A “diverse” system with unique images per session / person / prompt.

But within the same session you can fine-tune it with the image ID.

Another issue I can think of is about the reproducibility and replicability in science.

Imagine a community for discussing prompt techniques (like Discord). If person A wants to guide person B on “how to set up a camera”, in the old system, person B could replicate the same result just by using the same seed as person A. However, in the new system, because of the different seeds, they are essentially discussing two completely different results.

I think that’s the purpose of “locking” the seeds.

They already stated that they wanted more diverse images.

So no more reusing the same prompt with set seeds, and not 4 images for one prompt but just 2 (so you have to ask several times to get 4, resulting in more varied images).

Somehow I think this is good, because AI images are already quite recognizable right now. Imagine within 5 years: everybody will look and say “hey, that’s just another AI image”.

With those new measures, the chance of getting the same image is smaller, thus more diverse in generation.

But I agree that we should have the option to seed, or not.

For my workflow it’s not that important (yet), since I create one image and iterate on it, using the generation ID (and the scramble method).


Thanks for the roadmap. It’s very helpful.