Deleted, because of censorship.
In short: the complexity (length, numbers, digits, special characters, etc…) determines the strength of the seed.
The more complex the string, the more distance the seed will create between the original image and the new one.
I don't know the mathematical rule for "at which point a difference kicks in", but in short that's how it works.
Create an image of a laughing clown, partying rock steady.
Create an image of a laughing clown, partying rock steady.
FJ33Create an image of a laughing clown, partying rock steady.
FJ33AVJ38Create an image of a laughing clown, partying rock steady.
FJ33-AVJ_38!Create an image of a laughing clown, partying rock steady.
FJ33-AVJ_38!3B49~0
Always refer to image 1, use the exact same prompt, and add some "noise" at the end. Every step will differ more from the starting point.
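The stepwise approach above can be sketched as a tiny script. This is a hypothetical illustration only: the base prompt and the noise fragments are the ones from the example (where the noise happens to be prepended), and nothing here actually talks to DALL-E.

```javascript
// The prompt stays identical; only the amount of "noise" grows per step,
// mirroring the four example prompts above.
const basePrompt = 'Create an image of a laughing clown, partying rock steady.';
const noiseSteps = ['', 'FJ33', 'FJ33AVJ38', 'FJ33-AVJ_38!'];

// Build one prompt per step: noise fragment + unchanged base prompt.
const prompts = noiseSteps.map(noise => noise + basePrompt);
prompts.forEach(p => console.log(p));
```

Each successive prompt drifts a bit further from the original image, which is the whole point of the trick.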
I tried the method once again and it still works after 4 hours (which is amazing, since OpenAI is changing things per minute now…).
But the wokeness of the system drives me nuts.
Don't smoke
I took this image as my starting point;
I wanted to add some scrambled code at the end to create a "seed". But ChatGPT changed my prompt every single time (it kept the prompt, but removed the scrambled part).
After asking 5 times he made clear why he did it;
I am not allowed to create pictures of people smoking.
BUT YOU JUST DID IT!
Don't copy
So I decided to create the same image as a starting point, but now with lollipops;
Again, I added a "seed" in the prompt and asked to regenerate the image with some scrambled text at the end… it refused… because…
I am not allowed to create images that look like existing styles, e.g. "American '50s commercial poster".
BUT YOU JUST DID IT!
Final try
The image above is my starting image; let's say its #ID is ABC123.
I referenced ABC123 and added a string of 16 characters at the end; you can see small differences in the hand of the lady.
Then I referenced ABC123 once again and added a string of 32 characters at the end; not only is the hand changing once again, but the whole style becomes slightly different (more clear, more flat). The #ID of this image is DEF456.
Finally I referenced DEF456 (the last image, which was referencing ABC123) and changed the color of the lady's hair and dress.
Conclusion
It does work, but be sure to check the JSON data that is sent BEFORE image creation and AFTER, because the system changes things without saying it does, because of "content policies".
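As a sanity check, you can compare the prompt you intended with the prompt that was actually sent. The payload shape below is an assumption modeled on the fields discussed later in this thread (size, n, prompt, referenced_image_ids); verify it against the real JSON in your own session.

```javascript
// Hypothetical sketch of the kind of payload ChatGPT forwards to DALL-E.
// Field names are assumptions, not an official API shape.
const intendedPrompt =
  'Create an image of a laughing clown, partying rock steady. FJ33-AVJ_38!';
const payload = {
  size: '1024x1024',
  n: 1,
  prompt: intendedPrompt,
  referenced_image_ids: []
};

// The scrambled suffix is the part ChatGPT tends to strip silently,
// so check that it survived in the prompt that was actually sent.
function noiseKept(sentPrompt, noise) {
  return sentPrompt.endsWith(noise);
}

console.log(noiseKept(payload.prompt, 'FJ33-AVJ_38!')); // true
```

If `noiseKept` comes back false for the outgoing JSON, the scrambled part was removed and the trick won't do anything.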
For those needing seeds, I created a simple script that generates some in different lengths / strengths.
<?php
// Generate one random seed string of $app__seed_limit characters
// and print it as a read-only input, prefixed with its length.
function app__seed( $app__seed_limit ) {
	$app__seed_chrs = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ~!@#$%^&*()_+-=';
	$app__seed = '';
	for ( $app__seed_index = 0; $app__seed_index < $app__seed_limit; $app__seed_index++ ) {
		// Pick one random character from the allowed set.
		$app__seed .= $app__seed_chrs[ rand( 0, strlen( $app__seed_chrs ) - 1 ) ];
	}
	echo '<li><span>' . sprintf( '%03d', strlen( $app__seed ) ) . '</span>' . ' <input type="text" value="' . $app__seed . '" autocorrect="false" readonly></li>';
}
?><!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>seeds</title>
<link rel="shortcut icon" href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAIAAAACACAYAAADDPmHLAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyZpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDcuMi1jMDAwIDc5LjU2NmViYzViNCwgMjAyMi8wNS8wOS0wODoyNTo1NSAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIDIzLjQgKFdpbmRvd3MpIiB4bXBNTTpJbnN0YW5jZUlEPSJ4bXAuaWlkOjcyQkFFRkQzN0I2NDExRUVCNDUyOTE1OUVFRkMzMzEyIiB4bXBNTTpEb2N1bWVudElEPSJ4bXAuZGlkOjcyQkFFRkQ0N0I2NDExRUVCNDUyOTE1OUVFRkMzMzEyIj4gPHhtcE1NOkRlcml2ZWRGcm9tIHN0UmVmOmluc3RhbmNlSUQ9InhtcC5paWQ6NzJCQUVGRDE3QjY0MTFFRUI0NTI5MTU5RUVGQzMzMTIiIHN0UmVmOmRvY3VtZW50SUQ9InhtcC5kaWQ6NzJCQUVGRDI3QjY0MTFFRUI0NTI5MTU5RUVGQzMzMTIiLz4gPC9yZGY6RGVzY3JpcHRpb24+IDwvcmRmOlJERj4gPC94OnhtcG1ldGE+IDw/eHBhY2tldCBlbmQ9InIiPz7dy8RhAAAIKklEQVR42uxdC5COVRh+hS13lkQ1tamkKZUuSo1CNV10oYskTWOiMqhmaiomkppBpntpupAR00VSjVS6DpkpbUUiSoRiWDJhd4W1vU/f2ea3du3/f3vO953L+8w886/1f9+e77zP957znst76pSXl5MgXBwkVSACEIgABCIAgQhAIAIQiAAEIgCBCEAgAhCIAAQiAIEIQCACEIgABD6iXm1vMGbMmNDqLI/ZinkM8zBmAbMFsy3zUGZjZjNmI+YhinnqZavL3MvczSxTLGXuUCxh/sXcyNzK3MBcyyxirmZuqVyYUaNGpSsAz3EE81RmR2Z75vHqs2UKdQehrGeuYi5lTmYuSd0DeNYcdmJ2UTyN2cGiZrIB81jFi5lNmANFALUDKrMHszuzK/NIh8re1oo+gIM4gXkVsxfzXIefo50IIHvkM69j9mNe4MkzFahmYLsIoHp0Zg5i9mE29ezZEF0cpTqEIoBKuJI5THWWfI9SRAAZ6M281/G2PdGOoC8CwJv+IPP8wDq0bUIXAMK4caqDFyJa1/YGLs8FjGCuCNj4FdFNcB7gDOYLzLNIkB+aB7iPWSjG/x/NQvEAeNBpzCvE5vugUQgCwNs+S8W8gn3R0PcmAEO3C8X4YQrgAeZ0sbFZD25rEzCBohE9QYACeI45RGybFer71gQ866nxsQ5wl5cuRCPGMoc6aFwYFos31zA3q58rPpcxm1M0V3ETaRi4qYQyXwRwp+r02Y5tzB+Z31I0DYuh6F
8pWsVbFbCu8GaKViDlG/IszgugJ/Npi42OMHQu80tl+G1ZXDNEibq94bI57wEwmzfbQqMvYM5gzlFveLYYwBxO0fLxJFDqugA+t8jofzOnMqcwv8/x2suZDzPPTLjMJS4L4C2K1rSljQ0q+niRol05uXqw8cxrUyq7sx4AQ7zXW9Che4z5RMyKRKf1UYq2e6UFJz0AesNpD/FOUm11UYxrMTk1MQV3XxW21vYGaQwEvZ1iha1QMfnAmMYfraICG4xPMZqs1D0Alm91T6mysIpoaMzYub3qHHaxLFopckkAaCtfTamiMBgzLea1CO1eIjvnTTa5JAD0lhsnXEF/qBAt7jZqGH4Q2Yv1rggA69fvSbhyfmBeGLOjVMCcyTyd7MafrgjgqYQrZh6zGzPOcSiXKOM3IvuxzoUoAG/TDQlWynyKdgDHMT46iR85YvzNOjxAEgIYn2ClLKb428Mep2hE0BX8Rg5MBmHzYp8Ee8Tnxbx2Brm3w2iljpuYFsDwBCsExi+OEZp+qvoLruFn2wWQl2AI1TvGG4GQFNO+p5CbWGq7AG6hKIuFaTzDfDfHa5Dm7WvmceQuFtsugGEJtYN35XgNkjx+R3ZMRdcm/l+t40amooCTKEquaBq57hXM98D4QKGuG5kSwICEwssVObb5PhifVPNltQD6Ga4AzILlsoq4rqq0AvIDC2wWQGfSlMXyAMg1RepXqlnyAcW2ewDTS70WMd/P4ftYdXwO+QMMde+2WQC9DFfA4By+i0UgPckvfKzzZroF0M5wbF2Yg/u7n3kH+YcPbBbAZYYfPtuOH7ZijfPQ+L9QbhtVEhdAd8MP/1kW30M28PfIT8zUfUPdAuhq8OEnZPEdjGzOI3/xus0CwMhfa0MPjo0bU7P43ocGy5A24PqX2CyAsw0rv6YECzi96iKP3/7JJm6qUwCdDT78yzX8fzfmSPIbk2wXQCdDD76uhtCvoe7QyEKgaSuyWQCY9z/R0MPXtJVsNmnIl2c5xpq6sS4BYJzd1EraA4U+d1N6W82SAhZ/zrddAB0MlQ+bOqqb+SpgPkn+4yGTN9clAFMpUeYe4P/mBGB8rP2f7oIATCVDqm7kb6TBPodNMJ45TZcACgyVr6q272gV8/uOTaZCPxMCMHHkKnL3LK/i97MoDCSSMVWHAHBk+uEGyrawit/dbnC8wSb8RAllUtEhABjfRKKkyqnaEGZODOTt75/UH9LlAUy9BZl4hdw+5SxbIIvKYpcE0MZQ2TL3viEpU98AjI+DoG9L8g/qEEArQxWxKuPfrwXi+rGgdo9rAmhhoFwY/vxH/XwjmRtptAlTSPOCz6QEYCINembmi+cDMD7OGhiQxh/WIYCmBspVkX9/hCEPYxtSW8iiQwAmpmIrxgAeCcD4yKCy0mUBmMj9hzGAawII+5BsekaaBdCRH6CB5jIh8RGyfA323PiY5Ut9GVs9S+6Rib0BtP3o7fe3oSA6XGx9zWWq77nxsW/hUlsKY6MAfMYXqnkjnwQgyA5YwdTDtkLpEEC52LZGYFOHldvUxQOYx2jmrbYWTkcPvkxsXG000zftOD8JD7BLbL0fcFZBR9uNr0sAu8Xe+wCZS3HQxDIXCltPPIA2YAq74rwBZ6DDAxSL7f87a6CDa8bX5QF2BGx4DOli88YiVx9APEA84Bj5qyka0l3k8oPo8ADbAzM8zhp+w5cH0iGArQEY/hOKlqZ5l31MBFA9cC4vjrjH/rxCX5UtAtgfSMfyJvOdEJo3HQLY4ngd7KRoGzoSUCMfwe8h9WZ1CKDIwefGtrMFyvBYoLEx1DhWhwA2O9CWY1gWp4V8o3ryK0mg1QMgiyUSN+SlbGiUA2nlsK9weQZLxNTmBIC5gJOZzSnaKNpSsbX6xO+xgxjr/LCHoAlFm0nw88EULSnD9vI6FE0slSnuUYYrVtypjLxRfaLzuVZ5oDVKiKVi0txQp7xcFvSEDFkRJAIQiAAEIgCBCEAgAhCIAAQiAIEIQC
ACEIgABCIAgQhAIAIQiAAEXuJfAQYAGFldADihnscAAAAASUVORK5CYII=">
<style>
body,
input {
font-family: monospace, 'Courier';
}
body {
background-color: #222;
margin: 4rem 1rem;
}
span {
opacity: .5;
font-size: 75%;
display: inline-block;
width: 4rem;
}
input,
li {
color: #8b8b8b;
font-size: 2rem;
font-weight: bold;
}
input {
background-color: #333;
border-radius: .5rem;
border: none;
width: calc( 100% - 8rem );
padding: .5rem 1rem;
outline: none;
cursor: pointer;
}
input.copied {
background-color: #363;
color: #163716;
}
ol {
list-style: none;
padding: 0;
}
li {
margin: 1rem 0;
}
div {
position: absolute;
top: 0;
left: 0;
width: 0;
height: .75rem;
padding: .5rem 0 .75rem 0;
font-weight: bold;
background-color: #111;
color: #444;
transition: width 10s ease-out;
text-align: center;
}
div.fired {
width: 100%;
}
</style>
</head>
<body>
<div></div>
<ol>
<?php
app__seed( 8 );
app__seed( 16 );
app__seed( 32 );
app__seed( 64 );
app__seed( 128 );
?>
</ol>
<script>
let $app__progress = document.getElementsByTagName( 'div' )[ 0 ];
let $app__progress_delay = parseFloat( getComputedStyle( $app__progress ) [ 'transitionDuration' ] );
$app__progress.innerHTML = $app__progress_delay + 'S';
$app__progress.addEventListener( 'transitionend', function() {
location.href = location.href;
} );
setTimeout (
function() {
$app__progress.classList.add( 'fired' );
}, 10 );
document.body.addEventListener( 'click', function( $app__seed_event ) {
let $app__seed_next = $app__seed_event.target;
let $app__seed_prev = document.getElementsByClassName( 'copied' );
if ( $app__seed_next.nodeName.toLowerCase() === 'input' ) {
navigator.clipboard.writeText( $app__seed_next.value ).then( function() {
if ( $app__seed_prev.length ) {
$app__seed_prev[ 0 ].classList.remove( 'copied' );
}
$app__seed_next.classList.add( 'copied' );
$app__seed_next.blur();
}, function( $app__seed_error ) {
alert( 'Copy failed.' );
} );
}
$app__seed_event.stopPropagation();
} );
</script>
</body>
</html>
You can copy / paste the code (which is a simple mixture of PHP / JS / CSS and HTML) and save the file on a webserver (that supports PHP).
Then open it in your browser; it will automatically refresh every 10 seconds (so new seeds are generated).
A single click / touch on a seed will copy the value to your local device's clipboard (so you can paste it straight into ChatGPT).
It's plain text, nothing encrypted or anything (but I can't give support for it, just take it or delete it).
And the "long encrypted (hashed) line of text" is nothing more than a base64 encoded image of a… seed (the icon for the page);
It's an all-in-one file, so no separate assets are needed.
This is gold, thank you very much.
Do you know if this can also be used with Dall-e API?
This will create a near-exact copy of the previous image, although there can be some minor changes (numeric noise, the digital equivalent of analogue grain / noise).
It appears this no longer works. When I asked DALL-E 3 why it was not working as you said, it basically said that seeds were required to replicate the same image, and that referenced_image_ids only gave it an image to base the result on. It gave me this example:
Imagine you're telling a story about a magical forest.
- Just using the same prompt: It's like asking a friend to draw the magical forest based only on your story. Each time you ask, your friend might draw a different looking forest because you only gave the general idea.
- Using the same prompt and an image reference: Now, imagine showing your friend a specific picture of a magical forest and then asking them to draw it again based on your story and that picture. The new drawing will likely be more similar to the picture you showed, but it still won't be exactly the same, because your friend might remember some details differently or add their own touches.
In short, using an image reference helps guide DALL·E to create an image closer to the reference, but it still might have some variations each time.
I don't know, I don't have access to the API.
That's Dall-E 2, I guess?
But I think ChatGPT is just a front-end for the API and that the endpoints on the server are the same.
So when you state clearly that you want to add some random stuff (AND DO NOT TOUCH IT!), it should work (with Dall-E 3 at least, which doesn't have an API right now…).
Important notice
ChatGPT acts smart, but is stupid.
It keeps removing the scrambled text, even when I say DO NOT ALTER IT!, because he says "it's useless".
So before creating an image, be sure to check what he is going to send to Dall-E.
Also, with longer strings, he thinks it's rocket science;
I must clarify that the last part of the text you've provided seems to be a string of characters that do not form coherent or relevant content to the image description. It resembles an encryption key or a random string, which doesn't influence the image creation process.
If it's intended to be part of the image, please advise on its purpose. If it's not essential to the content of the image, I will proceed without it to ensure the integrity and quality of the image produced. Please confirm how you would like to proceed.
I told him "I just wanted to see how things worked out with that string", and then he used it.
Up to +/- 32 characters is no problem, but with 64 characters or more he starts complaining (not sure if this depends on the total length of your string, or whether 64 is a fixed threshold).
It works both perfectly and flawlessly.
I just reopened an older session (four days old) and asked to recreate an image by its ID.
It was not a 100% copy (but I don't need that, since I have the original I am referring to).
There was numeric noise present (the digital equivalent of analogue grain), but the look / feel / composition / etc… was spot on.
Example
The image above is my original image I created days ago.
This image (above) is a recreation, based on the ID of the original. There are differences (check the right foot of the penguin in the middle, for instance), but I think the atmosphere / design / characters / etc… are quite the same.
This is again a recreation, but with 32 random characters added at the end (the shoes are gone).
And here I added 64 characters, just to see how much of a difference it would make (the chicken turned into fries and shrimps… I don't know why, but hey… that's what scrambling does).
Finetuning
Once again, this is the original and it is serving chicken and pasta.
I referenced the image and told him to serve pasta only.
Final test… I don't like pasta, give me chicken only.
I just tried connecting to Dall-e 2 via the API. I couldn't find any string of characters that resembles the gen_id, which is supposed to be 16 chars long.
I checked the EXIF data and the URL of the generated image.
And according to what I asked ChatGPT, they don't provide this data explicitly.
Did anyone have any success with this, IF it's actually possible?
It's Dall-E 3 exclusive.
They introduced it about a week ago.
The gen_id is sort of the single point of entry to fetch all (meta)data for an image.
It contains the prompt, size (dimensions), timestamp, references, etc…
As long as you have access to the session, you can retrieve this data (and iterate on any image based upon it).
Raw export
Can you give me the generation ID of the last image and tell me if there were any referenced image IDs used?
The generation ID for the last image is 7DXoYydUrrdyEgBh. Yes, there was a referenced image ID used, which is IxaLAlUsK5rhpyt1.
What is the size of 7DXoYydUrrdyEgBh?
The size of the image with the generation ID 7DXoYydUrrdyEgBh is 1024x1024 pixels.
Screenshot
Ok I just figured it out. I read what you were saying in another thread, that it has to be in the same session for it to work. So thanks for that.
I hope they bring back being able to change seeds; I was making some really cool images with those.
Thanks a lot for the detailed analysis. I did an earlier analysis of the seed (in the ancient times when the seed was clamped at 5000…), got very happy when the seed was allowed to change, but that was ephemeral.
I am frankly still a bit unclear about how gen_id exactly works under the hood, and it is annoying that it doesn't work across different sessions, but well, better than nothing.
Grammatical noise is what we used in the ancient times of seed=5000, but unfortunately it does not quite enforce the same degree of variability that actually changing the seed would have. But again, better than nothing.
It worked perfectly for me, thank you!
// sudoLang implementation
Dalle {
  // State to store image requests and generated images
  imageRequests: []
  generatedImages: []
  referencedImageIds: [] // New field to store image IDs for referencing

  // Method to create an image based on a description using JSON format
  text2imJSON(description, size = "1024x1024", n = 2, referenced_image_ids = []) {
    // Convert the description and other parameters to a JSON format
    jsonPrompt = {
      "size": size,
      "n": n,
      "prompt": description,
      "referenced_image_ids": referenced_image_ids
    }
    // Create an image based on the JSON formatted prompt
    text2im(jsonPrompt) |> add to generatedImages
    // Store the referenced image IDs
    add referenced_image_ids to referencedImageIds
  }

  // Method to replicate an image using its generation ID
  replicateByGenId(gen_id, grammatical_noise = "") {
    // Use the same prompt and gen_id to replicate the image
    // 'grammatical_noise' can be added to the prompt for variations
    originalPrompt = findPromptByGenId(gen_id)
    newPrompt = originalPrompt + " " + grammatical_noise
    text2imJSON(newPrompt, "1024x1024", 1, [gen_id])
  }

  // Helper method to find the original prompt by gen_id
  findPromptByGenId(gen_id) {
    return imageRequests.find(request => request.gen_id === gen_id).prompt
  }

  // Method to get the list of generated images along with metadata
  getGeneratedImagesWithMetadata() {
    // Return the list of generated images with metadata such as gen_id and seed
    return generatedImages.map(image => {
      return {
        image: image,
        gen_id: image.gen_id,
        seed: image.seed
      }
    })
  }
}
Yeah, the freedom of seeds was the best thing ever.
I think you can see the gen_id as a kind of "wrapper" for all the information regarding an existing image.
So when you have a gen_id and use it as a reference for a new image in the referenced_image_ids, that single gen_id contains things like:
- Original dimension
- Prompt
- Seed (which is still used, but canât be set)
- Metadata
- Context
- More things we don't even know of…
Instead of repeating all those separate parameters, you can "just" refer to the ID, and all the things that "made that image the image it became" are encapsulated in it… (I think).
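A minimal sketch of that mental model: the gen_id stands in for repeating every parameter. All field names below are guesses for illustration, not an official API shape.

```javascript
// Hypothetical record of everything a gen_id "wraps", per the list above.
const imageRecord = {
  gen_id: 'ABC123',          // placeholder ID, as in the earlier example
  prompt: 'A magical forest',
  size: '1024x1024',
  referenced_image_ids: [],
  metadata: { note: 'seed, context, timestamp, etc. live in here too' }
};

// Build a follow-up request that reuses the record's parameters by
// referencing its gen_id instead of restating each one.
function buildFollowUp(record, extraNoise) {
  return {
    size: record.size,
    n: 1,
    prompt: (record.prompt + ' ' + extraNoise).trim(),
    referenced_image_ids: [record.gen_id]
  };
}

const next = buildFollowUp(imageRecord, 'T@qOb$EWzfVJXi~d');
```

The follow-up only carries the gen_id plus the optional scrambled suffix; everything else is implied by the reference.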
Was anyone able to retrieve the gen_id via dall-e 3 API?
I don't use the API, but at least when the API was launched, the gen_id was not available (via the API).
@Foo-Bar Thanks for putting this detailed explanation together.
I am a little confused with the process/utility of the noise creation aspect of this as opposed to focused iterative prompting.
Below are some examples of character (I like to call him Alabaster Cowley) images I created without focusing on adding noise and instead used iterative prompting. The noise seems to only increase the level of randomness (as varying the seed used to) within the image.
I am not typically looking for randomness in the majority of my image generations. Is the noise meant to give you ideas for adjustments or am I missing something? Thanks for the clarification!
Well, I don't know…
No seeds
The thing is, we used to have access to seeds.
A seed was something like "the same image, but with a touch, every time you used another seed".
So basically the approach (from Dall-E) for that specific image was the same (as long as you kept the same prompt).
Characters, lighting, composition, material design, colors, etc… were always kind of the same, and the "seed" only changed those things a little bit, giving you more variations of the same subject.
Every image (same prompt, different seed) was recognizable as an image made in the same session, with the same prompt and the same "look and feel"… but different as well, because of the seed used.
Seeds 'r gone
Now the seeds are gone; we have no seeds left (they are still used, and you can ask for them after a creation) but we cannot set them ourselves anymore.
The "scrambled text" at the end of a prompt is "kind of a seed thing, but not really a seed thing".
It alters small things (especially when you also use the same "reference ID" for an existing image), but isn't as powerful as a seed.
So it's a matter of trying and seeing what the "scrambled text" does this time, and whether it's good or bad for your image.
The logic behind it is that the more complex / longer the scrambled text is, the more the image will differ from the original one without any scrambled text.
You can alter small things this way, like hands, style, composition, etc… but you never know what comes out…
Example
Redraw the cartoon image with ID gBbLqfGBAFyGcBm3 in the "Ligne claire" style, featuring Tintin and Snowy. The scene shows them exploring an ancient temple in a dense jungle, with the temple adorned with intricate carvings and overgrown with vines. The "Ligne claire" style is characterized by clear, strong lines and bright, uniform coloring, reminiscent of Hergé's classic Tintin comics. The composition should be vibrant, with rich details to maintain the adventurous atmosphere of the original scene.
This is the original image (the image above, I mean)
I asked for the Generation ID of the first image and told him "create it again, with reference ID xxxxx" (it's 99% the same).
I repeated the exact same prompt (as before, so with the reference ID) and added 16 scrambled characters: T@qOb$EWzfVJXi~d. You can see the dog is different, as are the people in front of the temple, but the image itself is still the same (look and feel).
Same prompt, reference ID, etc… but with 32 characters: JZsNKAXGUFoaDgiyntbCoeinsxtVWBLv. Interestingly, it is now full widescreen (which I always wanted, but it never did before).
Finally, 64 characters added: 4KYtgPi+S0wW6a$6jm6K5=_!fGI8@zk3p2KNjsKzZ)wvY=h_FFb$lH)*D=aPjFSG. Tintin decided to walk away, lol…
For those interested, I can post the "create scrambled text" script here once again (I did optimize some things, but it's still PHP / JS / CSS in the end).
You can't predict the outcome, so it isn't as powerful as the seed (but it's all we have at the moment).
Yes, I understand the seed versus no seed dilemma as that is how we used to get focused iteration in images. However, the addition of noise to the gen_id (which based off of our current understanding wraps all meta data) creates randomness in the image which is typically seen as undesirable.
In the interest of trying to nail down how these things all relate to each other, I have noticed that the creation of randomness in the image (via gen_id noise) acts in the same way that changing the seed slightly, while using an identical prompt, used to in the past. So, would it be reasonable to assume that the inclusion of additional noise on the back end of the gen_id is altering the seed in the same way we used to do it manually or am I way off now?
As a test of this can you use your original prompt and gen_id (unadulterated) for the very first test image above and simply add the following to the end of the prompt and let us know how it goes?:
Tintin is walking away.