Hi everyone,
I am trying to “embed” an image into another image. I see that DALL·E supports editing an image using a mask, but I could only find the option of filling the masked area from a text prompt. (OpenAI API)
Is it possible to somehow “direct” DALL·E to use another image to inspire the fill?
For example, I have a picture of a classroom and would like to replace the content on the board with a variation of one of my actual images.
I have considered fine-tuning the model but found that DALL·E can’t be fine-tuned.
How would you approach something like this? Does Stable Diffusion support anything similar?
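In case it helps anyone who lands here: with the OpenAI images edit endpoint, the mask is a PNG the same size as the image, and the fully transparent pixels (alpha = 0) mark the region to be regenerated. Here is a minimal sketch of that logic, using plain nested pixel lists instead of a real image library (the board coordinates are made up):

```python
def make_mask(width, height, box):
    """Build an RGBA pixel grid: alpha=0 inside `box` (the area the
    edit endpoint will repaint), alpha=255 everywhere else.
    `box` is (left, top, right, bottom); right/bottom are exclusive."""
    left, top, right, bottom = box
    mask = []
    for y in range(height):
        row = []
        for x in range(width):
            alpha = 0 if (left <= x < right and top <= y < bottom) else 255
            row.append((0, 0, 0, alpha))  # RGB is ignored; only alpha matters
        mask.append(row)
    return mask

# Hypothetical board region in a 1024x1024 classroom photo
mask = make_mask(1024, 1024, (200, 100, 800, 400))
# In practice you would save this grid out as a PNG (e.g. with Pillow)
# and pass that file as the `mask` argument of the edit call.
```

This only controls *where* the model paints, though, not *what* it paints — the fill content still comes from the text prompt, which is exactly the limitation being asked about.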
Just bumping this because I am running into the same issue/scenario. I would like to be able to have the image generator “embed” other images into it that are generated separately.
Right now, the only way I have found to do this is to:
- Generate a ChatGPT description of the object
- Use a mask
- Use the description to fill in the masked area
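The steps above can be sketched roughly like this with the OpenAI Python SDK — to be clear, the filenames, prompt wording, and model name here are all hypothetical, and the actual network call is left as a comment so you can adapt it:

```python
def build_edit_request(image_path, mask_path, object_description):
    """Assemble the keyword arguments for an images-edit call that asks
    the model to fill the masked area with the described object."""
    return {
        "model": "dall-e-2",  # the edit endpoint targets DALL-E 2
        "image": image_path,  # in the real call: open(image_path, "rb")
        "mask": mask_path,    # PNG; transparent pixels = area to fill
        "prompt": f"A classroom photo where the board shows {object_description}",
        "n": 1,
        "size": "1024x1024",
    }

req = build_edit_request("classroom.png", "board_mask.png",
                         "a hand-drawn diagram of the water cycle")
# The real call would look roughly like:
#   from openai import OpenAI
#   client = OpenAI()
#   result = client.images.edit(model=req["model"],
#                               image=open(req["image"], "rb"),
#                               mask=open(req["mask"], "rb"),
#                               prompt=req["prompt"],
#                               n=req["n"], size=req["size"])
```

The weak link is the same one described above: the fill is driven entirely by the prompt text, so the result depends on how well the ChatGPT description captures the original object.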
However, the above fails quite often (it either doesn’t add the object to the masked area at all, or just blurs the area without really changing anything). It would be nice to generate the images separately and then “embed” them into a larger image after the fact.
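The “embed after the fact” route is just plain alpha compositing, which any image library can do (Pillow’s `Image.paste` / `Image.alpha_composite`, for instance). A dependency-free sketch of the logic on nested pixel lists, with made-up sizes:

```python
def paste(base, overlay, left, top):
    """Alpha-composite `overlay` onto `base` at (left, top).
    Both are grids of (r, g, b, a) tuples; returns a new grid."""
    out = [row[:] for row in base]
    for dy, row in enumerate(overlay):
        for dx, (r, g, b, a) in enumerate(row):
            x, y = left + dx, top + dy
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                br, bg, bb, ba = out[y][x]
                w = a / 255  # standard "over" blend, weighted by overlay alpha
                out[y][x] = (round(r * w + br * (1 - w)),
                             round(g * w + bg * (1 - w)),
                             round(b * w + bb * (1 - w)),
                             max(a, ba))
    return out

# Paste a 2x2 opaque red patch onto a 4x4 white canvas at (1, 1)
white = [[(255, 255, 255, 255)] * 4 for _ in range(4)]
red = [[(255, 0, 0, 255)] * 2 for _ in range(2)]
combined = paste(white, red, 1, 1)
```

This sidesteps the mask-fill blurriness entirely, at the cost that the pasted object can look “stuck on” unless you match lighting and perspective yourself.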
Anyone got any suggestions?
I’m also bumping this because I’m running into the same issue!