I would like to emphasize the reproducibility aspect. Could some of the more experienced users share some “templates” for prompting (meaning: the structure that makes results most reproducible)?

On the same topic (earlier it was mentioned not to use modern artists as a “template”, I assume due to copyright issues): how does it look if I upload my own works publicly (and that is the key point of my question in this context) - for example, a character in different situations, always drawn in a recognizable manner - with a suitable license (CC/PD), i.e. put it into humanity’s long-term memory, the internet?

Will this art be available to DALL-E right away, or does some content policy apply, like it must have been published/viewed for at least n consecutive days, or whatever?

It might seem like I’m going off-topic, but I am in the exact same situation as the original poster, namely trying to create illustrations that share the same feel - just that in my case I’d like to provide the general idea myself by publicly sharing the art to be based on beforehand. Hence I’m treating this request as building on top of what was discussed.

Any insights appreciated.

Meaning: I have no intent of hijacking this thread, but I would like to get your thoughts in the OP’s context - as a variation of the original question that might lead to an answer for the OP in the end. I mean, who knows how rudimentary a piece of art is allowed to be and still be of use to DALL-E 3 - so the OP might be able to publish generic sketches/scribbles publicly to guide/steer/template the actual request that way?

I mean, at this point, I’m not sure they would be used to train new models. AFAIK, OpenAI used purchased, properly licensed datasets, which is why DALL-E 2’s quality was lower than that of other models trained on a lot more photos/art/images.

I would just stick to forcing the exact same phrase for whatever “style” you want and hope it crosses over to the new scene or whatever…

The technology is definitely moving fast… Here’s some old GAN stuff from just a few years ago…

[example images: bluebird, insectb1]

Good find! I’m having some success with creating characters such as in the OP. Somewhat less success when trying to describe a human character, as I find you need to be much more descriptive about their characteristics (which, in turn, leaves you a bit less wiggle room to describe activity in the rest of the prompt).

Thank you so much Paul. I’ll give that a go this evening. I’ll let you know how I get on.

Thanks Elmstedt! I was wondering how best to leverage the Custom Instructions. I’ll give that a go.

One note I’ll make: this is great for getting a pretty consistent style, but it doesn’t really work for perfectly consistent character design.

Concur. I was able to get close, but there are obvious differences if you look at them all together. The tech is coming along, though. I bet DALL-E 4 will have it… we can hope. Until then, I’m gonna keep tinkering with prompts.

Here is what I imagine we will see relatively soon (2 to 4 years, tops):

  1. Upload a reference image of a person or a sketch (or just a good text description of a character you want to create)
  2. Ask for a character model sheet to be generated for that character, including various poses, facial expressions, and design details from multiple angles, to give a comprehensive picture of the character’s appearance and personality.
  3. Put that character model sheet (and others) into context for the model to reference when creating new images of that character.

I’ve not been paying too much attention to what the Stable Diffusion folks have going on these days, but even a year ago they were doing amazing things with consistent character generation, so this is probably already a thing, or very close to one.

Actually, there is a way to do this, but I think you’ll need to cross over to Midjourney. Here is a YouTube video on the process:

DALL-E 3 definitely has its limitations in terms of graphical generation and photo manipulation, but Midjourney is good for what you’re looking to do. So you can sketch or draft your characters with DALL-E 3, then upload the image to Midjourney to create the consistent character you’re looking for.

Yeah, I’m sure there are plenty of ways to do it.

I remember people fine-tuning LoRAs for individual, reproducible characters in Stable Diffusion upwards of a year ago.

Unfortunately, any commercial entity with deep pockets that wants to provide access to a generative image AI has to fear two threats: copyright holders and deepfakes.

So, no matter what the models are capable of, there will need to be guardrails in place, which will have the effect of reducing their utility.

You can request the seed of an image and use it to recreate similar images with different variables.

For example, DALL-E 3 creates a superhero; you then request the seed from DALL-E 3 and can use it to recreate similar images or characters:

{
  "prompts": [
    "Illustration set against the backdrop of Santa Monica Beach under a starry sky. The beautiful girl, with a clear face and no glasses, looks on with a mix of concern and awe. Beside her, the superhero is in a powerful stance, fiercely tearing away his tuxedo to reveal his superhero cape and uniform underneath. The scene captures the moment of transformation, with the fabric of the tuxedo rippling in the air and the emblem on his superhero uniform shining brightly."
  ],
  "seeds": [4202394079],
  "size": "1792x1024"
}

But if you create a new chat and use the same seed, it gets pretty wonky… so I recommend completing the character or goal with the consistency you’re looking for in one chat. I’m sure there is a way to use that same seed one month later to update the character, but you may need to tinker with it from then on.
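
As a rough sketch of that reuse: a later request can keep the seed and size from the example above and just vary the prompt (the wording below is made up for illustration):

{
  "prompts": [
    "Illustration of the same superhero soaring above Santa Monica Beach at dawn, cape billowing, the girl waving from the shore below."
  ],
  "seeds": [4202394079],
  "size": "1792x1024"
}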

Hey!

Just saw this long post on X trying to do this exact thing. I haven’t tried it myself yet, but give it a try and see if it works.

Thread by @chaseleantj (Chase Lean)

Looks like it might work if I ask DALL-E to produce a grid of cards.

A grid prompt always works.

Create a “four panel comic about a fluffy bear” and the bear will be the same in all four panels.

You can iterate on that image by using its gen_id and referring to it via the referenced_image_ids parameter.
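
For illustration, a follow-up request might look something like this (the gen_id value below is a placeholder; you’d use the actual id returned with your comic image):

{
  "prompts": [
    "The same fluffy bear from the four panel comic, now building a snowman"
  ],
  "referenced_image_ids": ["<gen_id of the comic image>"]
}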

Great article on Medium titled “99% character consistency with DALL-E 3”.

Can’t post a link here but I’m sure Google will find it for you.

Is it this? 99% Character Consistency with DALL-E 3

Yup. I’m working on expanding the prompts now to see if I can stretch it to 24 scenes.

I’ve found that sometimes DALL-E will produce these grids of images if you ask for two variations on an image.

For example, in this case, I asked DALL-E to “Give me two variations on that image.” I expected it to give me two images; instead, the output was a single image divided in two (below).

A grid of images is really interesting, and it also sparked an idea:

Ask it to create a grid with 16 panels and then upscale them with another AI (for now). That should get you 16 high-resolution images.

Bing DALL-E 3:


a grid that consists of 16 panels, each showing a teddy bear from different angles, in the style of Roald Dahl

Nero AI Upscaler:

Of course, it’s not perfect since it leaves some artifacts, but now you have 16 images in the same style. The question is whether all 16 descriptions will fit into one prompt…
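
One way to find out is to pack a short note for each panel into a single request, something like this (a hypothetical sketch, with the panel list abbreviated):

{
  "prompts": [
    "A 4x4 grid of 16 panels, each showing the same teddy bear in the style of Roald Dahl: 1) waving, 2) reading a book, 3) riding a bicycle, 4) baking bread, … 16) sleeping under a quilt"
  ],
  "size": "1024x1024"
}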