I’m loving the generations, but by setting:
```javascript
openaiClient.images.edit({
  …
  background: "transparent",
});
```
I sometimes get generations where my characters have transparent holes in them.
This jumps to almost 80% of the time if I generate something that already has a big white body part (like a belly or eyes), as with the penguin attached.
Not sure if this is fixable per se, but making another edit request with the image attached and a prompt of “fill in the penguin’s stomach” sometimes works… although it makes the entire image even more yellow-tinted, which is another problem in itself.
It looks cute nonetheless.
(EDIT) Omg I just realised it looks like it’s made of wood! Imma try to 3D model it later so I can have it as a statue in my room!
Appreciate the fixin uppin, as he is one adorable little wiggler.
Do agree with you on the tech comment though, I might be best served by prompting for a white background and then firing it off to another API somewhere for bg-removal.
Though that would probably double the waiting time for the end users, hmm.
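One way to avoid the second API round-trip entirely would be to do the removal locally. As a minimal sketch (the function name and tolerance default are my own, not anything from the images API): flood-fill from the image borders so that only white regions *connected to the edge* go transparent, which leaves white areas inside the character (belly, eyes) untouched.

```typescript
// Sketch: local background removal via flood fill from the borders.
// Only edge-connected near-white pixels are made transparent, so
// white regions inside the subject are preserved.
function removeEdgeConnectedWhite(
  rgba: Uint8ClampedArray, // flat [r, g, b, a, r, g, b, a, ...]
  width: number,
  height: number,
  tolerance = 10
): Uint8ClampedArray {
  const out = new Uint8ClampedArray(rgba);
  const isWhite = (i: number) =>
    255 - out[i * 4] <= tolerance &&
    255 - out[i * 4 + 1] <= tolerance &&
    255 - out[i * 4 + 2] <= tolerance;

  const seen = new Uint8Array(width * height);
  const stack: number[] = [];
  // Seed the fill with every border pixel.
  for (let x = 0; x < width; x++) stack.push(x, (height - 1) * width + x);
  for (let y = 0; y < height; y++) stack.push(y * width, y * width + width - 1);

  while (stack.length > 0) {
    const i = stack.pop()!;
    if (seen[i] || !isWhite(i)) continue;
    seen[i] = 1;
    out[i * 4 + 3] = 0; // edge-connected white -> fully transparent
    const x = i % width;
    if (x > 0) stack.push(i - 1);
    if (x < width - 1) stack.push(i + 1);
    if (i >= width) stack.push(i - width);
    if (i + width < width * height) stack.push(i + width);
  }
  return out;
}
```

This runs in a single pass over the pixels, so the latency cost is negligible compared to a second network request.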
I also encountered this, particularly when there’s white space in the middle of the image (or even just the color white!). As a workaround, I added an extra clause to my prompt specifying that the center of the image shouldn’t be transparent, just the space surrounding it.
> The design should appear as a clean, isolated standalone 3D model with transparent white space surrounding it, suitable as a UI element within an app. Ensure there is no transparency within the 3D model itself, just around the standalone model.
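If you’re doing this in code, a tiny helper keeps the clause consistent across edit calls (the function name here is made up for illustration; the clause text is the one from the prompt above):

```typescript
// Sketch: append the anti-transparency clause to any edit prompt
// before passing it to images.edit. Hypothetical helper, not part
// of any SDK.
const NO_INNER_TRANSPARENCY =
  "Ensure there is no transparency within the 3D model itself, " +
  "just around the standalone model.";

function withTransparencyGuard(prompt: string): string {
  // Avoid appending the clause twice if the caller already added it.
  return prompt.includes(NO_INNER_TRANSPARENCY)
    ? prompt
    : `${prompt} ${NO_INNER_TRANSPARENCY}`;
}
```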
Using AI for image modification / generation isn’t necessarily my thing, but I do find it pretty entertaining knowing that even ChatGPT has problems with background removal.
When I use Photoshop I’m almost always tweaking knobs and dials to make sure it doesn’t remove too much or too little to be useful. And a good 90% of the time, it’s tolerance I’m adjusting.
I’m guessing there’s no such parameter to explicitly define here? I haven’t checked the docs yet.
If there isn’t, I’d highly recommend filing a feature request for some of the common parameters you typically see in image editing applications for things like this.
Whereas text generation has fine-grained settings like top-p and temperature or whatever, image generation tools and edits like this might benefit greatly from tolerance or feathering parameters, to tune the edit to be more or less sensitive as needed.
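To make concrete what a “tolerance” knob would do (there’s no such parameter in the images API as far as I know; this is just Photoshop-style thresholding sketched out client-side on raw RGBA data):

```typescript
// Sketch: Photoshop-style tolerance applied to white-background removal.
// tolerance = 0 knocks out only pure white; larger values catch
// near-white pixels too.
function removeWhiteBackground(
  rgba: Uint8ClampedArray, // flat [r, g, b, a, r, g, b, a, ...]
  tolerance: number
): Uint8ClampedArray {
  const out = new Uint8ClampedArray(rgba);
  for (let i = 0; i < out.length; i += 4) {
    const [r, g, b] = [out[i], out[i + 1], out[i + 2]];
    // A pixel counts as background when every channel is within
    // `tolerance` of pure white (255, 255, 255).
    if (255 - r <= tolerance && 255 - g <= tolerance && 255 - b <= tolerance) {
      out[i + 3] = 0; // make it fully transparent
    }
  }
  return out;
}
```

Note that this naive version also knocks out white pixels *inside* the subject, which is exactly the hole-in-the-penguin bug from earlier in the thread; pairing tolerance with an edge-connectivity check is what a proper tool would do.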
Something like that could certainly be an interesting excuse to help wean me off the Adobe ecosystem and free myself from their gold-plated chains, that’s for sure. Either that, or Adobe finally provides some version of Linux support, maybe like how Valve did it with gaming. Which, unfortunately, is probably going to happen after the heat death of the universe, shortly after the release of GTA 6.