That image definitely seems seeded in the same manner. It has the same buildings that were never described, the same realism and tone. Maybe the upsampler saw through the noise the second time and decided on nipples…
Bing DALL-E, for comparison
It doesn’t listen to the seed parameter any longer (after the update).
I can not even recreate the exact same image.
When using a seed, the system crashes. When I use the “new” image reference parameter, it says “it can not access that image”.
So… how does one recreate the same image, with the same prompt and the same seed?
This is the original, I did fetch both the seed and reference ID;
Then I asked to recreate the first image (with seed and reference);
…not quite the same.
The webclient is updated, but the app is not.
When you try to open a chat, generated after the update, in the app (without that update), the system crashes.
Opening older web chats that are on par with the app itself works fine.
-
I tried your JSON.
It apparently recreated the same image.
-
I tried “Before creating the image, show me the exact prompt you are going to use.”, but ChatGPT doesn’t give me n and referenced_image_ids. However, these two parameters seem to be hidden and require digging to discover.
What is the “referenced_image_ids” parameter used for in the JSON format input to DALL-E 3?
The “referenced_image_ids” parameter in the JSON format input for DALL-E 3 is used to reference previously generated images by DALL-E. When provided, DALL-E 3 can use these referenced images to generate new images that build upon, modify, or combine elements from the referenced images. It’s a way to incorporate existing visual content into new creations, allowing for more complex and iterative image generation tasks. In essence, it helps in creating images that are contextually related or derived from previous outputs.
Note that this reply does not always occur. Sometimes it replies:
As of my last update in January 2022, the referenced_image_ids parameter was not a publicly documented feature of DALL·E. It’s possible that this is a newer feature or parameter introduced after my last update, or it might be a feature specific to a version of DALL·E that I’m not familiar with. If you have specific documentation or details about the referenced_image_ids parameter, I’d be happy to help interpret or understand its purpose based on the context provided. Otherwise, for the most accurate and up-to-date information, you might need to refer to the official documentation or source where you encountered this parameter.
It looks like a very useful feature!
Yeah, those two parameters are new for me, but they are used (generated) AFTER the creation (not before):
- n : number of images to create with one prompt and one seed
- referenced_image_ids : array of any referenced images by their gen_id (it can contain more IDs, so you should be able to use multiple images for one single new output)
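A rough sketch of such a post-creation block, combining the fields mentioned across this thread (the exact field set and values vary between replies; everything below is illustrative only):
{
  "size" : "1024x1024",
  "prompts" : [ "The prompt that was used." ],
  "seeds" : [ 1234567890 ],
  "n" : 1,
  "gen_id" : "ID of the newly generated image",
  "referenced_image_ids" : [ "gen_id of any image that was referenced" ]
}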
I created a custom instruction referencing those parameters and now Dall-e3 spits out those parameters after every creation.
So they are used for every single image, but I can not recreate existing images (not by referring to the seed and not by their gen_id).
Did you use “my” JSON on the updated version of the system?
I tried recreating my image so hard that I hit the hourly limit, so I can’t test any longer.
{
"size": "1024x1024",
"prompts": [
"Photo of an African American man and woman in their twenties enjoying a hot summer day in Central Park, New York, 1985. The woman is wearing shorts. The image has a nostalgic atmosphere, characterized by the graininess, warmer tones, soft focus, and the quirks of film photography.",
],
"seeds": [3172394258]
}
What do you mean by “updated version of the system”? I just open the Browser and visit https://chat.openai.com . I didn’t have any update events happening here.
I don’t quite understand what you said. referenced_image_ids doesn’t work for me. ChatGPT doesn’t use this parameter at all.
Sadly, ChatGPT currently does not support this feature.
I’m sorry for the inconvenience, but I can’t directly utilize the referenced_image_ids parameter as it’s not supported in the current interface I have with DALL·E. I can only generate images based on the prompts, size, and seeds parameters.
They are rolling out a major update to the system / workspace.
It combines all separate models into one.
Because the complete prompt was ruined in the new system, I kept asking “what did change” and it told me this;
After asking for those new parameters, it explained them to me.
So it is definitely used.
Maybe that is why you could recreate my image with the original “old” JSON and I wasn’t able to.
Your system is not updated, so it reads and executes that data.
But I have the new system and now it doesn’t understand my previous data-settings (which I use in .git to structure my repositories and iterations).
Okay, on my side the system is totally broken.
Seeds don’t work, image recreation crashes, referenced images are given but not executed.
See this test conversation I had; I explicitly ask for a specific seed, ChatGPT says it is going to use that and comes back with another seed.
It even says “sorry, I did use the wrong seed, shall I correct it with the required seed” and then it again uses a totally different seed.
Using seeds and / or reference images is the core of this system; without them you can not iterate anything.
Lol, check this image;
Reference ID : zFIRvus7s1feY2wO / seed : 2137570210
Then I ask to recreate that image using the exact ID and seed;
{
"seeds" : [ 2137570210 ],
"referenced_image_ids" : [ "zFIRvus7s1feY2wO" ]
}
Same input, same output… different image “sort of”.
Okay, that thing is driving me nuts.
Even when you ask for the exact same seed and reference ID it is making things up.
But after trying 100 times (had to wait 6 hours, lol - nice for paying customers) it was able to do so;
So it is possible, but it is simply not listening to my orders 9 out of 10 times.
On the left the original image (it generated two, but I asked for only one).
Then after 75+ tries it did use the same seed and generation ID.
Output on the right.
Biggest question now : why don’t you do what I ask you for?!
Also, it’s interesting that there is some numeric noise in “the same” images (color = degree);
Left and right are the first and last effort to create with the same parameters.
I asked for the exact data I had to use in my custom instruction set.
This is good, because the last image is an exact copy of the first image. You did use the seed and reference image ID as requested. This is how I want it. Please give me the solution for this, so I can add it to my custom instructions.
Certainly! Here’s the solution for generating images using a specific seed and reference image ID:
- Size: Define the desired dimensions of the image. For landscape orientation, use “1792x1024”.
- Prompt: Provide a detailed and descriptive caption that closely outlines what you want the image to depict.
- Seed: Specify the seed number to ensure consistent and reproducible results.
- Reference Image ID: If you want to base the new image on a previously generated image, provide its generation ID.
Here’s a sample of the JSON data structure you can use in your custom instructions:
{
"size": "1792x1024",
"prompts": ["Your detailed and descriptive caption here."],
"seeds": [Your specified seed number here],
"referenced_image_ids": ["Your reference image ID here"]
}
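Filled in, for instance with the seed and reference ID from the earlier example in this thread (so it is clear that seeds are plain numbers and referenced_image_ids are quoted strings), the same structure looks like this:
{
  "size": "1792x1024",
  "prompts": ["Your detailed and descriptive caption here."],
  "seeds": [2137570210],
  "referenced_image_ids": ["zFIRvus7s1feY2wO"]
}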
So I think this is the new “format” of the structured dataset we can use.
The thing is : the code is not always executed “as is”.
Not sure what is going on. Sometimes recreation (based on seed and image reference ID) works, sometimes it doesn’t.
It looks like (but I hit my limit for the third time today, so I can’t check, debug, validate or even work) you can only reference images within the same session. So you can’t reference an image from another session or even another user.
The seeds are really (pun intended) random… sometimes it uses the seed as requested, sometimes not.
And although the system was updated “to have all models inside one model” it never works outside the Dall-E3 model.
The default model says it's capable of creating images, but crashes when using seeds or references.
Left the original image with seed and reference ID, right the clone using those exact parameters.
But it’s buggy as hell.
For those who want to recreate, this is the original meta-data of the first image;
{
"size" : "1024x1024",
"prompts" : [ "A hand-drawn, modern flat design illustration suited for a scientific magazine. The image showcases a happy eagle celebrating its 10th birthday. The eagle's expression radiates joy and pride for reaching this milestone. The artwork is characterized by clear strokes, pen and ink techniques, and bright graphical influences." ],
"seeds" : [ 1386778108 ],
"gen_id" : [ "c1nEm8S6MoOACGEL" ]
}
And this is what I used to recreate it;
{
"size" : "1024x1024",
"prompt" : "A hand-drawn, modern flat design illustration suited for a scientific magazine. The image showcases a happy eagle celebrating its 10th birthday. The eagle's expression radiates joy and pride for reaching this milestone. The artwork is characterized by clear strokes, pen and ink techniques, and bright graphical influences.",
"seeds" : [ 1386778108 ],
"referenced_image_ids" : [ "c1nEm8S6MoOACGEL" ]
}
The referenced_image_ids seems to be a hidden parameter. You currently cannot explicitly set it, but you can refer to the gen_id in your prompt.
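For instance, a chat request along these lines sometimes works (the gen_id and seed here are the ones from the eagle example earlier in this thread; as noted in the remarks below, it can also simply be refused):
“Recreate the image with gen_id c1nEm8S6MoOACGEL, using seed 1386778108 and the exact same prompt.”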
Remarks.
ChatGPT treats gen_id as an external image, but it is not globally unique. You can only reference it within the same session where the gen_id was created.
If you want to reproduce an image whose prompt has a gen_id, that image can only be reproduced in the session where it was originally generated. If you have deleted that session, you may never be able to reproduce the image again.
Even if you can reproduce the image that was referenced by the gen_id in a new session, the gen_id will be different (i.e. a new gen_id). Using this new gen_id will not reproduce the same image.
For example, in Session One you have:
- ImageA, whose gen_id is EXD8s2b99d54FL1A,
- ImageB, whose prompt has gen_id EXD8s2b99d54FL1A and whose seed is 12345.
Then you can always reproduce ImageB in Session One via its prompt and seed.
However, in Session Two there is currently no possibility to reproduce ImageB, because ChatGPT doesn’t know the gen_id in Session Two.
Something like:
I’m sorry, but I cannot access or reference images with specific “gen_id” identifiers like “EXD8s2b99d54FL1A” as given in the prompt.
Of course, you can use ImageA’s prompt and seed to recreate it in Session Two, but this will give you a new gen_id, which is different from “EXD8s2b99d54FL1A”.
Using that new gen_id and seed 12345 to generate “ImageB” will produce an image that is different from the original ImageB.
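Written out as the JSON you would be sending in Session Two (the seed and gen_id are taken from the example above, the prompt is a placeholder), the failing attempt is roughly:
{
  "prompts" : [ "ImageB's original prompt here." ],
  "seeds" : [ 12345 ],
  "referenced_image_ids" : [ "EXD8s2b99d54FL1A" ]
}
ChatGPT either refuses it or quietly generates something else.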
Also, using gen_id in the prompt does not seem to always succeed.
Something like:
I’m sorry, but I cannot use specific gen_ids as references for generating new images. The image generator does not have the capability to reference previous images by their gen_id.
Would you like to provide a new prompt or modify the existing one without referring to specific gen_ids?
But by repeatedly clicking “Regenerate”, it has a certain probability of generating an image…
That sounds like a bug or bad design (which should be considered a bug, from a developer’s point of view).
The referenced_image_ids is not hidden on my side; ChatGPT did spit it out itself and continues to do so.
And you can set it in the prompt (referenced_image_ids points to the gen_id).
From what I understand, you say it’s not a super global, but stored in the session?
But it’s also a unique number, even for an exact same image, with the exact same seed, generated twice inside the same session.
The thing is : I do ask explicitly for a certain seed and / or gen_id and 9 out of 10 times it is simply ignored.
Also, the update I mentioned is gone from the settings. It’s a very unstable platform at the moment.
Guess we have to pay to be testers of beta software, lol.
So how can we reproduce an image in the same session and / or later on in another session?
I used to use seed for this (and the exact same prompt), but that doesn’t work now.
It just doesn’t prompt with the seed you give; I tried about 100 times and only 2 times did it use the “right” seed.
(there goes my daily limit)
It’s really hit and miss if you want to create another image based on the same seed or ID.
The animation above shows three images with the exact same seed and prompt.
But they are all “slightly” different as you can see when they move.
But this one should also be “the same”, which it clearly is not.
I simply can not work now, because it is impossible to iterate an existing image.
This shows two nearly identical images, but they are not.
Just check the “whitespace” inside the “0” of the number “10” on the cake.
It’s solid red on the left and partially black / stroked on the right.
I agree.
I really want to set this param, but ChatGPT does not allow me to set it!
Something like:
I’m sorry for the inconvenience, but I can’t directly utilize the referenced_image_ids parameter as it’s not supported in the current interface I have with DALL·E. I can only generate images based on the prompts, size, and seeds parameters.
Yes, at least according to my experience.
I don’t see any such update event at all.
P.S. To save money, I usually unsubscribe after subscribing. Maybe related?
Did you use the (now deprecated) Dall-E3 model? Because the “new” super model says “can’t do”, while Dall-E3 says “okay” (in my case).
I also asked, after an hour, but within the same session, if it knew about a certain image ID.
And that was correct (but it did refuse to create it, even with the same seed).
I’m not sure what exactly happened on your end.
You might ask ChatGPT something like:
Before generating an image, show me the exact JSON data you are going to put to the image generator.
That way you can check whether the seeds and prompts are correct.
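When it cooperates, the JSON it echoes back should match your request, e.g. for the Central Park example from earlier in this thread something like:
{
  "size": "1024x1024",
  "prompts": ["Photo of an African American man and woman in their twenties enjoying a hot summer day in Central Park, New York, 1985. ..."],
  "seeds": [3172394258]
}
If the seed shown there already differs from the one you asked for, the resulting image will not match either.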
I think the official interface should display the seed in the GUI where the image is shown. Currently it can display the exact prompt but does not show the seed, which is very annoying.