I have a quick question: in the DALL-E playground (edit image), it is possible to use the generation frame to create an image that is not in a square aspect ratio.
Is there a way to replicate this with the API, or is this not possible yet?
I am wondering the same thing. From the API FAQ it appears that you can, but I can't seem to find any explanation of how to perform outpainting with the API.
So even though the API documentation asks for the original image and a mask image, it will also take just a single image with the alpha channel set to zero for part of it?
The EDIT Image endpoint asks for image + mask, but I believe the variations endpoint just needs an image with some of it transparent…
I just realized there's not an actual outpainting endpoint, but the edits endpoint could be used… so, yeah, you would likely need the image + mask, then put the pieces back together again…
Sorry about that… Still learning myself!
ETA: Yeah, I was looking at variations in the docs. Sorry. Lack of sleep!
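Untested, but the edits call itself should look roughly like this with the openai Python package (the file names, prompt, and key are just placeholders):

```python
import openai

openai.api_key = "sk-..."  # your API key

# POST /v1/images/edits — transparent areas of the mask tell DALL-E where to generate
response = openai.Image.create_edit(
    image=open("image.png", "rb"),  # square PNG, under 4MB
    mask=open("mask.png", "rb"),    # same dimensions; transparent pixels = area to fill
    prompt="extend the scene with more of the same background",
    n=1,
    size="512x512",
)

print(response["data"][0]["url"])
```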
So, the basic process would be something like this (rough code sketch after the steps)…
Crop the part of the image you want to extend from
Use that crop to create a new 512x512 image (or whatever size), with the crop taking up part of the canvas and a transparent background on the rest… (this would be your mask image)
Flatten that image and save it as a new file… (this would be your image image)
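Something like this with Pillow, if it helps (rough sketch; paths, size, and crop placement are placeholders):

```python
from PIL import Image

SIZE = 512  # edits endpoint sizes are square: 256x256, 512x512, or 1024x1024

# The crop of the original image you want to extend from
crop = Image.open("crop.png").convert("RGBA")

# Mask image: the crop pasted onto a fully transparent canvas.
# The transparent area is where DALL-E will generate new content.
mask = Image.new("RGBA", (SIZE, SIZE), (0, 0, 0, 0))
mask.paste(crop, (0, 0))  # anchor the crop in the top-left corner, for example
mask.save("mask.png")

# Image image: the same layout flattened onto an opaque background
flat = Image.new("RGBA", (SIZE, SIZE), (255, 255, 255, 255))
flat.paste(crop, (0, 0))
flat.save("image.png")
```

Then feed image.png and mask.png into the edits call above and stitch the result back onto your original.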
Seems to me that only inpainting is available through the API. It would be great to know if anyone has figured out how to do outpainting programmatically, similar to the editor.
Anyone know if there's a DALL-E roadmap out there? I haven't seen one, and it doesn't strike me that they are really putting much effort into it, but it would be nice to be able to use some more advanced features with API-based image generation. Specifying non-square aspect ratios would be helpful to me!