Editing and adding to generated images with selection tool

I’m trying the new editing tool where you click on a generated image and make a selection to edit with a prompt.
It is supposed to add, edit, and remove things within that selection, and otherwise keep the image as is. This is a great feature!

However, I cannot add new things to the image, only remove them (which is great when you suddenly get 10 people and not just the 2 you asked for…).

If I choose a selection of, let’s say, some grass on the ground and I write “add pink flowers”, it only removes the grass and adds nothing. Or if I select a piece of the sky and say “add a hovering white helicopter”, it simply does nothing.
It seems like if the selection includes some distinct feature, it just removes that feature no matter what you write. I saw something weird in one image, selected it and wrote “what is this?” and it just removed it.
I also asked to change one type of helicopter into another specific type, and it simply removed it. (But it tells me it has changed it to the new model.)

If the selection has no distinct features, like a piece of the sky, it says it has added what you asked for, but in fact has done nothing.

Any tricks to get DALL-E to add and edit things this way, and not only remove them?

Original: [image]

To modify, I give a version of the whole image description the way it should now appear: [image]

I doubt you’ll have much luck giving aircraft model numbers; it is not very likely that experts in identifying them were employed to label the training images. Plus, the image model simply has problems making many things no matter how explicitly you prompt (a semi truck viewed directly from the side comes to mind).

Dalle3’s image selection tools, while interesting, are still basic. If you want fine-grained control over selections and generations, download the image into Photoshop.

I find Dalle usually gets my image in one shot from a verbal prompt, if it gets it at all.

Adobe, on the other hand, pairs much better with Dalle: Dalle generates a cohesive, all-of-one-style image, and Adobe’s tools then work within that style to make changes. Purely verbal prompts for those edits tend to result in more hallucinations.

Fun picture of an undead army… but there were some issues.

With some trial and error, I managed to get the selection tool to grab my necromancer and make him look more consistent with other illustrations—but try as I might, I couldn’t get my Bone Golem to “not hold anything.” These changes were more sophisticated than I’ve been able to achieve with Dalle before, so it’s definitely learning.

And there are other details which weren’t worth going after, given that Dalle likes to remake the entire image if you’re not specific.

Firefly did well removing that green fruit-loop thing while maintaining the original style. A Curves Adjustment Layer was used to darken the coloring, and the branding was added in a final step.
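
If you’d rather script that darkening step instead of doing it in Photoshop, a simple gamma curve applied with Pillow gets you roughly the same effect. This is just a stand-in for the Curves adjustment, and the file name below is a placeholder, not anything Dalle outputs:

```python
# Rough scripted stand-in for a darkening Curves adjustment, using Pillow.
from PIL import Image

img = Image.open("undead_army.png").convert("RGB")

# Gamma-style curve: pushes midtones down while keeping black and white anchored.
def darken(value, gamma=1.3):
    return round(255 * (value / 255) ** gamma)

curve = [darken(v) for v in range(256)]
img = img.point(curve * 3)  # apply the same curve to R, G, and B

img.save("undead_army_darkened.png")
```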

(You can add branding like this to a Dalle-generated image by having the model composite it in a Python assembly step, but it’s still kind of a pain versus doing it manually.)
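
For what it’s worth, here’s a minimal sketch of what that Python assembly step amounts to if you run it yourself with Pillow. The file names, the 20% logo size, and the corner placement are just assumptions for the example:

```python
# Minimal sketch: paste a logo onto a generated image with Pillow.
from PIL import Image

base = Image.open("generated_image.png").convert("RGBA")
logo = Image.open("logo.png").convert("RGBA")

# Scale the logo to roughly 20% of the base width, keeping its aspect ratio.
scale = (base.width // 5) / logo.width
logo = logo.resize((int(logo.width * scale), int(logo.height * scale)))

# Paste into the bottom-right corner with a small margin, using the logo's
# alpha channel as the mask so its transparency is preserved.
margin = 20
position = (base.width - logo.width - margin, base.height - logo.height - margin)
base.paste(logo, position, logo)

base.convert("RGB").save("branded_image.png")
```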