Okay, so you can grab data from the NEW model and execute it in the OLD model.

But it can’t be done the other way around, because the NEW model does not execute the OLD data (with a seed).


// For the API (if implemented well) all of these are the same;

"seeds" : 43289324890

// or this;

"seeds" : [ 43289324890 ]

// and even this;

"seeds" : [ "43289324890" ]

// or with more entries;

"seeds" : [ 43289324890, "324893428904", 3248439280 ]

All values are valid, because the Python script will try to infer the intent.

Is it an array of seeds? A single seed? A numeric seed? A string seed? Etc…

In the end it will parse the value, if it is valid.
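A minimal Python sketch of that kind of lenient parsing (my own illustration, not the actual script behind the interface):

```python
def normalize_seeds(value):
    """Accept a single seed, a string seed, or a (mixed) list
    and return a list of integer seeds."""
    if isinstance(value, (int, str)):
        value = [value]                       # wrap a single seed in a list
    if not isinstance(value, list):
        raise ValueError("seeds must be a number, a string or a list")
    return [int(str(item).strip()) for item in value]

# All the variants above end up as the same kind of result:
print(normalize_seeds(43289324890))                    # [43289324890]
print(normalize_seeds(["43289324890"]))                # [43289324890]
print(normalize_seeds([43289324890, "324893428904"]))  # [43289324890, 324893428904]
```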

Sometimes I try to trick ChatGPT in order to see if my code is good.

So I ask something like “use this code, correct it when wrong”;

"seeds" : 3298238904,
"bananas" : [ 'yummy', 'delicious' ]

Then it removes bananas (and keeps the seed), so I know the code testing is done correctly.

But now it is removing both bananas AND the seed, because it does not support seeding.

Recreating a specific image using a seed may not be 100% reliable.

The following two images were generated twice, in different sessions, using the same size, prompt and seed.

Comparing them at the pixel level, you will find they are slightly different. Perhaps the DALL·E 3 model changes over time (e.g. online learning).

That is called “numeric noise”, the digital equivalent of analog grain.

I have seen the same when using FileDiff software at the pixel level.
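For reference, this is roughly how such a pixel-level comparison can be done yourself (a small Python sketch, assuming Pillow and NumPy and two locally saved generations of the same size, image_a.png and image_b.png):

```python
import numpy as np
from PIL import Image

# Load both generations of the "same" prompt + seed as RGB arrays.
a = np.asarray(Image.open("image_a.png").convert("RGB"), dtype=np.int16)
b = np.asarray(Image.open("image_b.png").convert("RGB"), dtype=np.int16)

diff = np.abs(a - b)                 # per-channel absolute difference
changed = np.any(diff > 0, axis=-1)  # per-pixel: does any channel differ?

print(f"{changed.mean():.2%} of pixels differ")
print(f"max channel difference: {diff.max()}")
```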

The biggest problem (in the new system) is that you can never, ever recreate an existing image after some time.

Even if you use the exact same prompt, a different seed is attached and you cannot change it.

Even in the same session: once it grows beyond 32k tokens, the memory in that session is reset.

Resulting in not being able to fine-tune any existing image.


I hope you don’t mind me asking: how do you adjust the camera angle? I’ve tried the following prompt, but it doesn’t come out as perfectly as yours.

{
  "size": "1792x1024",
  "prompts": [
    "Japanese anime style. The scene is captured from a high angle view. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder."
  ],
  "seeds": [2275396183]
}

That High Angle view doesn’t seem to work for me.

Tried Bird’s eye view:

{
  "size": "1792x1024",
  "prompts": [
    "Japanese anime style. Bird’s Eye View. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder."
  ],
  "seeds": [2275396183]
}

I once asked it which camera angles it understands and how they work out.

https://chat.openai.com/share/232f01cc-47ed-495c-a25c-47ef32eebfbe

So if it is important, I start the chat with a reference in the prompt to the exact camera usage.

UPDATE

ChatGPT did not send the gen_id of that particular image (I don’t know why) and now it is not able to recreate it.

I ask it “do you know about that image” and it says “yeah, shall I recreate it?”.

Then I say “do it” and it says “it’s too long ago, I cannot recreate that image”.

So without the gen_id, or after waiting too long, you can never get that image again, not even in the same session.

The “unique prompt / seed” concept was way more solid and future-proof.

Yeah, I once asked ChatGPT similar questions. It just told me to place the camera angle at the beginning or the end of the prompt. The result is that not all camera angles work as expected.

I just asked ChatGPT again. It suggested I refine the prompt to ensure that the “high angle” perspective is more explicitly integrated, i.e. add “The surrounding dungeon walls are adorned with faint torches that cast minimal light.” to the prompt.

{
  "size": "1792x1024",
  "prompts": [
    "Japanese anime style. From a high angle perspective, looking downward into a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder. The surrounding dungeon walls are adorned with faint torches that cast minimal light."
  ],
  "seeds": [2275396183]
}

It seems acceptable.

Without the sentence “The surrounding dungeon walls are adorned with faint torches that cast minimal light.”, the image would look very weird.

Maybe referenced_image_ids can come to the rescue.

If you explicitly ask ChatGPT to show you the gen_id of an image, does it return something?

For a few days now, I always retrieve those values, which can then be used in referenced_image_ids in order to point to that particular image.

Not sure why I have access to those parameters (and lost the seed parameter).

If I ask ChatGPT “Show me the gen_id and seed of the last image you generated.” it will give me the correct gen_id and seed. However, sometimes it gives me fake seeds if I ask in edit mode.

I can even reference the gen_id in the prompt, but there is no referenced_image_ids parameter.

What is edit mode?

I only use DALL·E via the ChatGPT interface, not the API development environment.


Sorry for my inaccurate terminology.

I mean that after editing a message or clicking the “Regenerate” button, ChatGPT forms a “dialogue tree”. In these situations, ChatGPT might forget some of the context information. Thus it gives you fake information.

We know that ChatGPT has a window limit. It can only remember a certain amount of past text. Therefore, if the conversation is too long, it might forget earlier messages.

The problem is: if you edit an earlier message or regenerate an earlier reply, can ChatGPT still remember the context at that moment? From my experience, it seems not. Thus it usually gives you fake information.

For example,

Show me the gen_id and seed of the last image you generated.

Here, can ChatGPT understand what I mean by “the last image you generated” if you are editing an earlier message?

Ideally, ChatGPT SHOULD understand the “dialogue node” context of “the last image you generated” in the “dialogue tree”.


Just reporting my current status:

Starting from today, I also have the same problem.

I can’t reproduce the previous image through its seed and precise prompt.

A real example:

I:

Send this JSON data to the image generator, do not modify anything. After generating an image, show me the JSON data that the image generator returns to you.

```
{
  "size": "1024x1024",
  "prompts": [
    "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder."
  ],
  "seeds": [3075182356]
}
```

ChatGPT:

I’m sorry, but I can’t process the JSON data directly as provided. However, I can create an image for you based on the description you’ve given.

Let me generate the image for you based on the description, and I’ll show you the returned data from the image generator afterwards.


Here’s the returned data from the image generator:

{
  "gen_id": "oKXvBOFuKlpYCZCt",
  "prompt": "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder.",
  "seed": 3855703669
},
{
  "gen_id": "1EFfdicZM6iMEX02",
  "prompt": "Japanese anime style. In a dimly lit dungeon, a fearsome beast with sharp claws and glowing blue eyes stands guard, ready to attack any intruder.",
  "seed": 2368871415
}

Some other replies from ChatGPT:

Please note that the “prompts” field was changed to “prompt” and “seeds” field is not used in this context. The “n” field has been added to generate 2 images as per default policy.

The JSON format to send a prompt to DALL·E 3 through this interface is:

{
 "size": "1024x1024",  // Image size. Options include "1024x1024", "1792x1024" (wide), and "1024x1792" (portrait).
 "n": 2,               // Number of images to generate. Default is 2 if not specified.
 "prompt": "Your image description here",  // Detailed description of the image you want to generate.
 "referenced_image_ids": ["ImageID1", "ImageID2"] // Optional. If referencing previous images, include their IDs here.
}
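If you take that reply at face value, the mapping from the old payload to the new one is straightforward (a Python sketch of my own, not an official schema; the old seeds simply have nowhere to go):

```python
# Illustrative only: convert an old-style payload ("prompts" + "seeds")
# into the new format ChatGPT described above ("prompt", "n", no seeds).
def old_to_new(old, n=2):
    return {
        "size": old["size"],
        "n": n,                        # number of images, default 2
        "prompt": old["prompts"][0],   # "prompts" list -> single "prompt"
        # old["seeds"] has no equivalent in the new format and is discarded
    }

old = {
    "size": "1024x1024",
    "prompts": ["Japanese anime style. In a dimly lit dungeon, ..."],
    "seeds": [3075182356],
}
print(old_to_new(old))
```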

Therefore, the most unfortunate thing happened:

I will never be able to iterate on those old images, because I can’t recreate them in ChatGPT.

Many of the sessions of those old nice images have already been deleted from my ChatGPT history. Locally, I only saved the images, seeds, and precise prompts.

Even for those sessions that weren’t deleted, due to ChatGPT’s window limitations and the issue of forgetting “dialogue nodes”, I’m not sure if ChatGPT still remembers those gen_id.

It’s very inconvenient; every time you iterate on an original image, (1) you absolutely cannot delete the session, and (2) you have to be cautious about whether ChatGPT has forgotten the gen_id.

Welcome to the club of “upgraded” people… I’ve been there for a while and the results keep going up and down.

But you mean your generation history will be deleted by the system?

Then I have to export the data as soon as possible and save the most important images / prompts / etc… in an Excel sheet, or something.

My findings (after a week on the “new system”)

You can recreate an existing image in the same session by its gen_id.

But outside the session (or after some time) this is not possible. And you can never get that image by using the exact prompt and / or seed; it just doesn’t work.

At the moment I am developing a seed generator based on the image ID, which looks promising; but that is seeds only, and within the same session.

(will update later today)

Not deleted by the system, but deleted by myself. Because some conversation branches were too complex to manage, I deleted them (but saved the images, the precise prompts and seeds locally).

After this update, I won’t be able to recreate them anymore in ChatGPT! E.g. I can no longer adjust them by adding some random string :sob:.

Yeah, but you can only export data; you cannot restore the session. Without the session, you cannot iterate on an existing image.

Yeah, you must keep the session (now seeds and exact prompts are meaningless), and ChatGPT might forget the gen_ids as well.

What is a seed generator? Could you explain it further?

That’s what I am experiencing as well, for about a week now.

No seed, no joy

The “trick” used to be this;

  1. Define the exact same prompt.
  2. Refer / use the exact same seed.
  3. This will generate the exact same image.

But after the update, you cannot set / refer to the seed yourself (it is still used, but you can’t manage it), resulting in different images.

Inside a session, you can now create the exact same image by referring to the image generation ID (gen_id) and passing that in referenced_image_ids.

So within the same session you can do what we always did before; but you can NOT inherit images from session A inside session B (because you cannot set the seed, and image IDs only work within the same session).
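For in-session recreation, the payload then looks something like this (a sketch based on the parameter names quoted earlier in the thread, not an official schema; the gen_id is the one from my test below):

```python
# Sketch: ask the image tool to recreate an existing image by its gen_id,
# using the "new format" fields described earlier in the thread.
def recreate_payload(gen_id, prompt, size="1792x1024", n=1):
    return {
        "size": size,
        "n": n,
        "prompt": prompt,                  # keep the wording identical
        "referenced_image_ids": [gen_id],  # point at the earlier generation
    }

print(recreate_payload(
    "cxBfLB4y53IidDp0",
    "Recreate this image exactly, do not change anything.",
))
```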

Maybe you can re-open an old session and refer there to the gen_id but I have to test that.

Update

I opened a session from two days ago. The image I created there had an image generation ID, so I decided to try this;

Hi, I am back again after two days. What do you know about image cxBfLB4y53IidDp0?

The image with the generation ID cxBfLB4y53IidDp0 was one of the two images created in response to the initial request for a realistic photo of (…more text)

Okay, cool. I want you to create the exact same image once again, I want to recreate image cxBfLB4y53IidDp0. Do not change anything, just recreate image cxBfLB4y53IidDp0.

Result: the new image (after two days) is almost exactly the same as the original image (two days old).

So this works. There are some minor changes, but that’s at pixel level, or subtle things like “the hook on the left (original image) has a different angle than the one on the right (new image)”.

But that is when zoomed in; when you view the images at 100%, the look / feel / design / composition, etc… is 99.95% the same.

Also, the above example is an image generated from an older image, which was itself based on another image, so it’s two steps away from the original image.

Seed generator

A thingy I am creating myself, which attaches seeds to the referring image ID so you can “seed” the same image with other outcomes.
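Roughly, the idea is to derive reproducible numeric seeds from the gen_id; a minimal Python sketch of that idea (the actual tool is a PHP script and may derive its values differently):

```python
import hashlib

def seeds_from_gen_id(gen_id, count=5):
    """Derive `count` reproducible 32-bit seeds from an image generation ID."""
    seeds = []
    for i in range(count):
        digest = hashlib.sha256(f"{gen_id}:{i}".encode()).digest()
        seeds.append(int.from_bytes(digest[:4], "big"))  # first 4 bytes -> 32-bit int
    return seeds

# The same gen_id always produces the same candidate seeds:
print(seeds_from_gen_id("cxBfLB4y53IidDp0"))
```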

I will create a post later on and mention you inside this thread.

How to inherit images from session A inside session B?

I think you mean can not inherit images from session A inside session B?

But how do you manage the images now?

I mean, usually in order to filter out nice images, we need to continuously modify the prompt or repeatedly click the Regenerate button.

In the old system, once I chose a nice image, I just needed to record the seed and prompt (or open a new session and recreate the image there), and then delete that session (which had very complex branches).

In the new system, we should not delete sessions (that would lose everything), but the sessions are very complex and hard to manage, because there are many unnecessary images within them.

Thanks.

Yes, I made a typo.

But you can open an older session and re-create the image there (as long as you know the image ID).

I think that depends on the workflow, which we have to alter to match the new situation.

I personally create one image, ask for another, fiddle around and select the best.

Then I alter it in Photoshop.

Every session has about five images max (on average).

When I cannot create “the perfect” image even after five tries, I start with a brand-new prompt / session.


Also I name my session titles with specific keywords, like

:camera: - :star: :star: :star: - photo, stones, '80s (1#3)

:camera: - :star: :star: - photo, stones, '60s (2#3)

:pencil2: - :star: :star: :star: :star: - illustration, puppet, graphical (1#1)

:pencil2: - :star: - illustration, puppet, line drawing (2#4)

This way I can filter by rating (which I set myself), type, subject, iteration, etc… (also because you can’t search in the web interface).
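Once exported, those titles are also trivial to filter outside the web interface; a tiny Python sketch (the camera / pencil / star characters are just the convention above rendered as plain emoji):

```python
titles = [
    "📷 - ⭐ ⭐ ⭐ - photo, stones, '80s (1#3)",
    "📷 - ⭐ ⭐ - photo, stones, '60s (2#3)",
    "✏️ - ⭐ ⭐ ⭐ ⭐ - illustration, puppet, graphical (1#1)",
    "✏️ - ⭐ - illustration, puppet, line drawing (2#4)",
]

def rating(title):
    return title.count("⭐")  # number of stars = my own rating

# e.g. all illustration sessions rated three stars or more
print([t for t in titles if t.startswith("✏️") and rating(t) >= 3])
```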


The thing is still in beta, so they can change whatever they want, and we must be as flexible as possible and adapt as well.

After beta, I expect a solid system without those annoyances.


The “new” Dall-E, use seeds without seeds - Prompting - OpenAI Developer Forum

As promised.

I am creating a PHP script right now that generates seeds and creates better prompts by simply filling in some text fields.

It’s a standalone app; maybe I will publish it, but for now I am just using it for my own purposes.

It makes generating good prompts easier, since it uses a boilerplate for style, design, idea, seed, reference, etc…
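Purely as an illustration of that boilerplate idea (a Python sketch with assumed field names; the actual app is PHP and may differ), it boils down to something like this:

```python
# Sketch of the boilerplate idea: every prompt is assembled from the same
# fixed fields, so prompts stay structurally consistent across generations.
def build_prompt(style, design, idea, seed=None, reference=None):
    parts = [style, design, idea]
    if reference:
        parts.append(f"Consistent with image {reference}.")
    payload = {"size": "1024x1024", "prompts": [" ".join(parts)]}
    if seed is not None:
        payload["seeds"] = [seed]  # old-style field, kept for completeness
    return payload

print(build_prompt(
    style="Japanese anime style.",
    design="High angle view, dimly lit dungeon.",
    idea="A fearsome beast with glowing blue eyes stands guard.",
    seed=2275396183,
))
```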

I was working on children’s coloring pages within a single session and had amazing consistency. Then I read about generation IDs and tried adding them in, and the results have gotten substantially worse. It drops details and ignores instructions. Like, I told it to make four separate images of coloring pages, and it put four pages into one single image. The thing is going totally Skynet on me lol


Yeah, but seeds are not references.

A reference is more like an entry in a database where all the (meta)data of your image is saved (for a short time).

You can get a consistent look / feel and characters, but your prompting has to be consistent as well (regarding seeds, generation ID, referenced image ID, text, etc…).

After all, I don’t know whether DALL·E “as is” is able to create the same characters at all.


Semantics & taxonomy

But when you use the right semantics and taxonomy in your prompt, you can get good results (for beta software like this).

The quality of your prompt is key to success.