DALL-E integrated into my favorite image editing tool

Hi guys, as I’ve been waiting for DALL-E 2 access, I’ve been watching everything I can on how other artists have been using and integrating DALL-E generations into their artistic workflows. I’ve been especially interested in the variations and in-painting features because they hint at the future.

I found this video by Bakz T. Future especially interesting.

Since I have time to kill before I can try it myself, I decided to mock up a visual concept of how I imagine DALL-E could be integrated directly into image editing tools like Affinity Photo or Photoshop using their plug-in architectures.

So in the image above, I can imagine a special type of layer called a “Prompt layer” which is tied to a DALL-E text input, so editing the prompt causes the image layer to regenerate. There might also be a slider to indicate how closely the regeneration should match the layer’s original image content.
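
To make the idea concrete, here’s a minimal sketch of what a “Prompt layer” could look like under the hood, assuming the plug-in calls OpenAI’s hosted Images API through the openai Python package. The PromptLayer class and the match_strength slider are my own hypothetical names (the API has no such parameter); only the Image.create call reflects the documented API:

```python
import openai  # pip install openai; assumes an OpenAI API key

openai.api_key = "sk-..."  # the plug-in would manage credentials

class PromptLayer:
    """A layer whose pixels are regenerated from its text prompt."""

    def __init__(self, prompt: str, match_strength: float = 0.5):
        self.prompt = prompt
        # Hypothetical slider (0..1): how closely a regeneration should
        # match the layer's current pixels. Not a real API parameter;
        # a real plug-in might route high values to Image.create_variation.
        self.match_strength = match_strength
        self.image_url = None

    def regenerate(self) -> str:
        """Called whenever the user edits the prompt text."""
        response = openai.Image.create(
            prompt=self.prompt,
            n=1,
            size="1024x1024",
        )
        self.image_url = response["data"][0]["url"]
        return self.image_url

layer = PromptLayer("a ladybug on a green leaf, macro photo")
layer.regenerate()  # editing the prompt would trigger this again
```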

Building on this idea, I can imagine adding another layer type called a “Segmentation layer” which would auto-generate a segmentation map from the image layers below it. The segmentation map behaves similarly to NVIDIA’s GauGAN2 AI paint tool.

In the inspector you would see a number of color swatches used in the segmentation. As you click on a swatch, a text description appears above it describing that segment of the image. Editing the description would alter the image content for that area, similar to in-painting. You could also grab a paintbrush and alter the shape of a segment, or paint copies of it in other parts of the image. Likewise, you could add or delete image elements by adding, deleting or painting over segments. Finally, you could hit the “Extract new layer” button next to the segment swatches to create a new, independent “Prompt layer” from the currently selected color swatch.
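
Here’s a rough sketch of how editing a swatch’s description could map onto DALL-E’s real in-painting mechanism, where an image plus a mask with transparent “editable” regions is sent to the edits endpoint. The helper names and the color-coded segmentation-map representation are assumptions on my part:

```python
import io

import openai
from PIL import Image  # pip install pillow

def mask_from_segment(seg_map: Image.Image, swatch_rgb: tuple) -> bytes:
    """Build a DALL-E in-painting mask: transparent wherever the segment's
    swatch color appears (the region DALL-E may repaint), opaque elsewhere."""
    mask = Image.new("RGBA", seg_map.size, (0, 0, 0, 255))
    src, dst = seg_map.load(), mask.load()
    for y in range(seg_map.height):
        for x in range(seg_map.width):
            if src[x, y][:3] == swatch_rgb:
                dst[x, y] = (0, 0, 0, 0)  # transparent = editable
    buf = io.BytesIO()
    mask.save(buf, format="PNG")
    return buf.getvalue()

def edit_segment(image_png: bytes, seg_map: Image.Image,
                 swatch_rgb: tuple, new_description: str) -> str:
    """User rewrites a swatch's text description: in-paint that region."""
    response = openai.Image.create_edit(
        image=image_png,
        mask=mask_from_segment(seg_map, swatch_rgb),
        prompt=new_description,
        n=1,
        size="1024x1024",
    )
    return response["data"][0]["url"]
```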

Update: After thinking about this a little more, I think it would make more sense if the segmentation map could be added to the “Prompt layer” in the same way a layer mask is added to a standard layer. This would ensure the segmentation map “travels” with the “Prompt layer” if the user repositions it. It would also be cool if the segmentation updated when the layer’s prompt was updated with new text. I can also imagine selecting a segmentation color swatch and adding a sub-segmentation to it: in the illustrated example, I could take the bug segment and divide it into body parts, then segment the head into more detailed facial features, and so on.
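
As a data model, the nesting could be as simple as segments owning child segments. A minimal sketch, with all names hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One color swatch in a segmentation map that travels with its
    Prompt layer, the way a layer mask travels with a standard layer.
    Segments nest, so any one can be subdivided into finer segments."""
    swatch_rgb: tuple
    description: str
    children: list = field(default_factory=list)

    def subdivide(self, swatch_rgb: tuple, description: str) -> "Segment":
        child = Segment(swatch_rgb, description)
        self.children.append(child)
        return child

# The illustrated example: bug -> body parts -> facial features.
bug = Segment((255, 0, 0), "a ladybug")
head = bug.subdivide((0, 128, 0), "the ladybug's head")
head.subdivide((0, 0, 255), "left compound eye")
```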

Now that the segment is a new layer, you can manipulate it in all the ways you would a normal layer, like repositioning, masking or applying image corrections.

In fact, the extracted image has automatically been removed from its background, allowing the layer to be repositioned, scaled or rotated anywhere on the canvas. And of course DALL-E automatically fills in the hole left in the background image layer.
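
The hole-filling step maps naturally onto the same in-painting call: mask out the extracted segment’s footprint and let DALL-E repaint it as background. A sketch, assuming the plug-in has already rendered the vacated region as transparency in a mask PNG:

```python
import openai  # same hosted Images API as the earlier sketches

def fill_hole(background_png: bytes, hole_mask_png: bytes,
              scene_prompt: str) -> str:
    """After 'Extract new layer', repaint the vacated region so the
    background layer stays seamless. hole_mask_png is transparent where
    the extracted segment used to be (the area DALL-E may repaint)."""
    response = openai.Image.create_edit(
        image=background_png,
        mask=hole_mask_png,
        prompt=scene_prompt,  # e.g. "a leafy garden background"
        n=1,
        size="1024x1024",
    )
    return response["data"][0]["url"]
```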

So in the example above, the subject has been made more prominent by scaling and repositioning. As layers are dragged around, they can be regenerated to account for local context such as lighting or perspective angle. And of course the layer’s prompt could be modified to alter the image in more dramatic ways.
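
One plausible way to approximate that context-aware regeneration with the in-painting endpoint that exists today: composite the dragged layer at its new position, then in-paint that region so DALL-E re-renders it against its local surroundings. A sketch using Pillow for the compositing (the rectangular mask is a simplification):

```python
import io

import openai
from PIL import Image

def regenerate_in_context(background_png: bytes, layer_png: bytes,
                          position: tuple, layer_prompt: str) -> str:
    """Paste the dragged layer at its new position, then in-paint that
    region so DALL-E re-renders it with local lighting and perspective."""
    background = Image.open(io.BytesIO(background_png)).convert("RGBA")
    layer = Image.open(io.BytesIO(layer_png)).convert("RGBA")
    composite = background.copy()
    composite.paste(layer, position, layer)

    # Mask: transparent (editable) over the pasted layer's bounding box.
    mask = Image.new("RGBA", background.size, (0, 0, 0, 255))
    mask.paste(Image.new("RGBA", layer.size, (0, 0, 0, 0)), position)

    def to_png(img: Image.Image) -> bytes:
        buf = io.BytesIO()
        img.save(buf, format="PNG")
        return buf.getvalue()

    response = openai.Image.create_edit(
        image=to_png(composite),
        mask=to_png(mask),
        prompt=layer_prompt,
        n=1,
        size="1024x1024",
    )
    return response["data"][0]["url"]
```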

Because the DALL-E image can be broken down into layers, an artist could add additional pixel layers for over-painting or compositing.

So, what do you think? Would this make DALL-E more useful to artists, photographers and image editors?

Jason Wilhelm
jason@healthmotivate.org


I was imagining the plug-in built by OpenAI on top of their API, so image generations would be processed by their cloud servers instead of a local GPU. DALL-E Mini isn’t powerful enough, and its output isn’t high quality enough, to make the plug-in useful in real artistic workflows. Of course OpenAI would have to monetize the use of the plug-in through some type of subscription service. I would gladly pay to supercharge my existing workflows and maximize my artistic potential.
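
For anyone curious what that cloud round trip looks like, it’s just an HTTPS call to OpenAI’s documented images endpoint, so nothing heavier than a network request has to run inside the host app:

```python
import requests  # pip install requests

# All generation happens on OpenAI's servers: the plug-in only makes an
# HTTPS round trip, no local GPU required.
resp = requests.post(
    "https://api.openai.com/v1/images/generations",
    headers={"Authorization": "Bearer sk-..."},
    json={"prompt": "a ladybug on a leaf", "n": 1, "size": "1024x1024"},
)
image_url = resp.json()["data"][0]["url"]
print(image_url)
```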
