Hi everyone,
I am trying to “embed” an image into another image. I see that DALL·E supports editing an image with a mask, but I could only find the option of filling the masked region from a text prompt (OpenAI API).
Is it possible to somehow “direct” DALL·E to use another image to guide the filling?
For example, I have a picture of a classroom and would like to replace the content on the board with a variation of one of my actual images.
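For context, here is roughly how far the mask-based edit flow gets me. This is only a sketch: the file names, coordinates, and prompt are placeholders, and it assumes Pillow plus the `openai` Python SDK. As far as I can tell, the edit endpoint only accepts a text prompt, not a reference image.

```python
# Build an RGBA mask whose transparent pixels mark the board region;
# DALL·E's edit endpoint inpaints only where the mask is transparent.
from PIL import Image, ImageDraw

def make_mask(size, box):
    """Return a fully opaque mask with a transparent rectangle over
    `box` = (left, top, right, bottom) marking the editable area."""
    mask = Image.new("RGBA", size, (0, 0, 0, 255))  # opaque = keep
    draw = ImageDraw.Draw(mask)
    draw.rectangle(box, fill=(0, 0, 0, 0))          # transparent = edit
    return mask

mask = make_mask((1024, 1024), (200, 150, 800, 500))  # placeholder box
mask.save("mask.png")

# Then the edit call (needs an API key; note the fill is driven
# purely by the text prompt, which is exactly my limitation):
# from openai import OpenAI
# client = OpenAI()
# result = client.images.edit(
#     model="dall-e-2",
#     image=open("classroom.png", "rb"),
#     mask=open("mask.png", "rb"),
#     prompt="a whiteboard with a hand-drawn flowchart",
# )
```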
I have considered fine-tuning the model, but found that DALL·E can’t be fine-tuned.
How would you approach something like this? Does Stable Diffusion support anything related?