Issue with Generating Less Hair Using DALL-E Model

Hi everyone,

I’m currently working on a project where I need to generate images of babies with very fine, subtle, straight hair. However, I’m encountering an issue with the DALL-E model consistently producing images where the babies have more hair than desired. Specifically, I need the hair to be barely noticeable, without any visible parting, and lying very close to the scalp—essentially, the typical characteristics of baby hair.

Despite refining the prompts to emphasize these features, the generated images still show more hair volume than expected. Here’s an example of the type of prompt I’m using:

“Visualize a baby with very fine, subtle, straight hair, barely noticeable, without any visible parting, lying very close to the scalp, which is a typical characteristic of baby hair.”

Original Prompt:
“The image should be rendered in a 3D animated style. Image Prompt An image showcasing a focused 10-month-old baby boy engaged in the delicate task of stacking colorful ring toys. He is wearing a bright orange shirt with a bold, blue number ‘10’ prominently displayed, symbolizing his age. The baby is sitting on a soft, light gray play mat, using his index finger and thumb to carefully place a green toy ring on a stack. Baby should have very fine and straight baby hair, barely noticeable, without any visible parting, lying very close to the scalp. Character Traits*: The animated character in the image exhibits several distinctive features: large, expressive eyes to convey a wide range of emotions; and soft, rounded facial and body features for an adorable, approachable look. The characters in the image should embody features of the Caucasian race. 3D Animated Scene Style The scene uses vivid colors typical for young audiences, conveying innocence, playfulness, and curiosity typical of children’s entertainment. The animation style is similar to modern children’s movies, with detailed textures and vibrant lighting. The overall design emphasizes innocence, playfulness, and relatability, common in animated characters for family or children’s entertainment.”

Results I am getting:
[Screenshot from 2024-05-30 09-24-50: generated image of the baby]

However, hairs should look like this:
[Two reference images of the desired baby hair]

Has anyone else experienced similar issues or have any suggestions on how to better guide the model to produce the desired hair characteristics? Any advice or tips would be greatly appreciated!


This existing prompt is 245 tokens, right at the limit of what DALL-E 3 can actually take, and longer than the AI placed in front of it is instructed to send. So you might see a loss of quality simply because of the large input: the end may be truncated, or the whole thing rewritten.
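As a rough sanity check on prompt length, English prose runs at about four characters per token. This is only a heuristic, and the exact count depends on the model's tokenizer, but it lets you estimate without installing anything:

```python
# Rough token estimate for English prose: ~4 characters per token.
# This is only a heuristic; a real count requires the model's tokenizer.

def rough_token_estimate(text: str) -> int:
    """Approximate token count, assuming ~4 characters per English token."""
    return max(1, len(text) // 4)

prompt = "Visualize a baby with very fine, subtle, straight hair..."
print(rough_token_estimate(prompt))
```

If the estimate lands anywhere near the ~250-token range, it is worth trimming the prompt before relying on the model to do it for you.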

Then: consider DALL-E a hyperactive keyword engine. It sees the word "hair", so it makes hair. It sees the words "no hair", and it still makes hair.

A bald toddler comes out quite bald. The challenge is in striking the right balance without a confusing disarray of compositional elements.

I’m sure there are some suggestions you could layer on top of "bald" to get what you desire. Hopefully it is not as brazenly disobedient as when you want a clean-shaven non-model man.

@_j is the token length of DALL-E 3 not 4000 as per the docs?
https://platform.openai.com/docs/api-reference/images/create

It is an API character length of 4000. That might be 1000 English tokens or over 4000 tokens in a poorly-compressible world language.
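Since the cap is on characters rather than tokens, it is easy to check before sending. A minimal sketch, assuming the documented 4000-character limit for the images endpoint (the helper name here is hypothetical):

```python
# Guard against the documented 4000-character prompt limit
# on the DALL-E images/create endpoint.
MAX_PROMPT_CHARS = 4000

def validate_prompt(prompt: str) -> str:
    """Raise before sending a prompt the API would reject for length."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError(
            f"Prompt is {len(prompt)} characters; "
            f"the API limit is {MAX_PROMPT_CHARS}."
        )
    return prompt
```

Failing fast locally is cheaper than burning an image-generation request on a prompt the endpoint will reject anyway.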

You can send that length to the API, like if you were to paste a section of a book to illustrate.

That, however, will need to go through instructions that rewrite it down to 80-100 words.

I think there is a broader issue with DALL-E 3 and hair. For example, I’ve been trying to generate an image of a man with no facial hair, and I’ve tried several prompt adjustments. Every time, I got a picture of a man with facial hair, sometimes with a lot of it.