Having trouble getting transparent backgrounds in ChatGPT images

That’s what I noticed on my image (in the image gallery I’ve linked): there was still a little bit of the chessboard background left. But the lion in this thread was a success, and ChatGPT also said that sometimes it works cleanly, sometimes not. Maybe it has something to do with the complexity of the image?

@_j, if you want, I’ve shared Python code in that image gallery; could you test whether you get a better result with it than I did?

Stupid question from a non-designer, but how do you anti-alias correctly for an arbitrary background?

Is it optimal to create the image with a specific background colour to begin with anyway?

If you want, you can check the link I shared: in that image the background was complex, and if I’m right, the colour of the background doesn’t matter as much as a clear prompt (this is just what I gathered). There are people more talented than me, with true expertise, when it comes to alpha transparency :sweat_smile:

I think this is the answer (in the question)

That could absolutely be useful with this new gpt-image-2: generating onto a specific opaque background colour before editing in alpha transparency. Although with other models on the API there’s no need for a “before” background, as I tested and shared in this thread.
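For what it’s worth, generating onto a known solid colour also makes soft (anti-aliased) edges recoverable: instead of a hard keep/discard cut, you can scale alpha by each pixel’s distance from the key colour. A minimal sketch of that idea, assuming a pure-green backdrop and a hand-picked distance threshold (both are my assumptions, not anything the model guarantees):

```python
from PIL import Image
import numpy as np

KEY = np.array([0, 255, 0], dtype=np.float32)  # assumed pure-green backdrop

def soft_unscreen(img, max_dist=180.0):
    """Turn distance from the key colour into a soft alpha channel."""
    arr = np.array(img.convert("RGBA")).astype(np.float32)
    dist = np.linalg.norm(arr[:, :, :3] - KEY, axis=2)
    # 0.0 on the backdrop, 1.0 far from it, in-between on the edge fringe.
    alpha = np.clip(dist / max_dist, 0.0, 1.0)
    arr[:, :, 3] = alpha * 255.0
    return Image.fromarray(arr.astype(np.uint8), "RGBA")
```

Pixels matching the key colour go fully transparent, pixels far from it stay opaque, and the blended edge pixels land in between, which is what keeps the outline from looking jagged on an arbitrary destination background.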

I think we have a true challenge with alpha transparency and gpt-image-2…for a while at least :sweat_smile:

@_j, what an interesting discovery.
When it ‘cut’ these, as it dubbed it, I didn’t look at the pieces while zoomed in. You can see that line here when you do…

Thanks for looking into that.

Currently it will choke on trying to handle all of these pieces at once. I’ve also learned that this many cuts needs to be split into two jobs, not asked for in a single prompt, because of how it handles the pieces while still cutting.
But I wouldn’t know the technical term for any of it…

I can cut the whole thing in half, run both, turn it into a zip, and let you download it along with the nine-slice sizes and such.

It’s so many little files that you have to cut a full image in half or it’ll choke, I think.

bright green offensive background

What the response to ‘cut them’ looks like:


```python
from PIL import Image, ImageDraw
import numpy as np
from scipy import ndimage
from pathlib import Path
import json, zipfile

src = Path("/mnt/data/d47ae531-e010-4801-8291-42f47ca31625 (1).png")
out_dir = Path("/mnt/data/promptforge_hoverdeck_jewels_extracted")
out_dir.mkdir(parents=True, exist_ok=True)

img = Image.open(src).convert("RGBA")
arr = np.array(img)
r = arr[:, :, 0].astype(np.int16)
g = arr[:, :, 1].astype(np.int16)
b = arr[:, :, 2].astype(np.int16)
a = arr[:, :, 3].astype(np.int16)

# Tight chroma-green mask. This removes the generated green screen
# without eating the teal jewel.
bg_mask = (g > 180) & (r < 90) & (b < 120) & (a > 0)
fg_mask = (~bg_mask) & (a > 0)

labels, count = ndimage.label(fg_mask)
slices = ndimage.find_objects(labels)

components = []
for i, sl in enumerate(slices, start=1):
    if sl is None:
        continue
    ys, xs = sl
    area = int((labels[sl] == i).sum())
    if area >= 1000:
        components.append({
            "label": i,
            "x1": xs.start,
            "y1": ys.start,
            "x2": xs.stop,
            "y2": ys.stop,
            "area": area,
            "width": xs.stop - xs.start,
            "height": ys.stop - ys.start,
        })

components = sorted(components, key=lambda c: c["x1"])

names = [
    "hd_jewel_slot1_idle.png",
    "hd_jewel_slot1_selected.png",
    "hd_jewel_slot2_idle.png",
    "hd_jewel_slot2_selected.png",
    "hd_jewel_slot3_idle.png",
    "hd_jewel_slot3_selected.png",
    "hd_jewel_slot4_idle.png",
    "hd_jewel_slot4_selected.png",
    "hd_jewel_slot5_idle.png",
    "hd_jewel_slot5_selected.png",
]

if len(components) != 10:
    raise RuntimeError(f"Expected 10 jewel components, found {len(components)}")

# Keep every asset on the same canvas for easier WPF alignment.
pad = 24
max_w = max(c["width"] for c in components)
max_h = max(c["height"] for c in components)
canvas_w = max_w + pad * 2
canvas_h = max_h + pad * 2

export_manifest = []
for comp, filename in zip(components, names):
    x1, y1, x2, y2 = comp["x1"], comp["y1"], comp["x2"], comp["y2"]
    crop = arr[y1:y2, x1:x2, :].copy()

    # Remove green inside each crop using the same tight mask.
    cr = crop[:, :, 0].astype(np.int16)
    cg = crop[:, :, 1].astype(np.int16)
    cb = crop[:, :, 2].astype(np.int16)
    ca = crop[:, :, 3].astype(np.int16)
    crop_bg = (cg > 180) & (cr < 90) & (cb < 120) & (ca > 0)
    crop[crop_bg, 3] = 0

    # Center the crop on a shared transparent canvas.
    asset = np.zeros((canvas_h, canvas_w, 4), dtype=np.uint8)
    dx = (canvas_w - crop.shape[1]) // 2
    dy = (canvas_h - crop.shape[0]) // 2
    asset[dy:dy + crop.shape[0], dx:dx + crop.shape[1], :] = crop

    out_path = out_dir / filename
    Image.fromarray(asset, "RGBA").save(out_path)

    export_manifest.append({
        "file": filename,
        "source_bbox": [x1, y1, x2, y2],
        "source_crop_size": [comp["width"], comp["height"]],
        "output_size": [canvas_w, canvas_h],
        "padding_canvas": pad,
        "component_area": comp["area"],
    })

manifest = {
    "source_image": src.name,
    "component_count": len(components),
    "extraction_method": "tight chroma-green mask + connected-component detection + left-to-right sort",
    "background_mask": "G > 180 and R < 90 and B < 120",
    "output_canvas_size": [canvas_w, canvas_h],
    "resized": False,
    "assets": export_manifest,
}

manifest_path = out_dir / "jewel_extraction_manifest.json"
manifest_path.write_text(json.dumps(manifest, indent=2), encoding="utf-8")

# Build a simple checkerboard preview contact sheet for visual QA.
checker = 20
cell_w, cell_h = canvas_w + 50, canvas_h + 80
preview_w, preview_h = cell_w * 5, cell_h * 2
preview = Image.new("RGBA", (preview_w, preview_h), (18, 18, 18, 255))
draw = ImageDraw.Draw(preview)

def make_checker(w, h):
    bg = Image.new("RGBA", (w, h), (210, 210, 210, 255))
    d = ImageDraw.Draw(bg)
    for yy in range(0, h, checker):
        for xx in range(0, w, checker):
            if (xx // checker + yy // checker) % 2:
                d.rectangle([xx, yy, xx + checker - 1, yy + checker - 1], fill=(150, 150, 150, 255))
    return bg

for idx, filename in enumerate(names):
    # Names alternate idle/selected, so: column by slot, row by state.
    slot = idx // 2
    state_row = idx % 2
    x = slot * cell_w + 25
    y = state_row * cell_h + 40
    bg = make_checker(canvas_w, canvas_h)
    asset = Image.open(out_dir / filename).convert("RGBA")
    preview.alpha_composite(bg, (x, y))
    preview.alpha_composite(asset, (x, y))
    draw.text((x, y - 24), filename.replace(".png", ""), fill=(255, 255, 255, 255))

preview_path = out_dir / "jewel_extraction_preview.png"
preview.save(preview_path)

# Zip the output folder.
zip_path = Path("/mnt/data/promptforge_hoverdeck_jewels_idle_selected_v01.zip")
with zipfile.ZipFile(zip_path, "w", compression=zipfile.ZIP_DEFLATED) as z:
    for p in sorted(out_dir.iterdir()):
        z.write(p, arcname=f"PromptForge.App/Assets/HoverDeck/Jewels/{p.name}")

print(f"Created {len(names)} transparent PNG assets")
print(f"Output folder: {out_dir}")
print(f"Canvas size: {canvas_w} x {canvas_h}px")
print(f"Zip: {zip_path}")
print(f"Preview: {preview_path}")
```
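If anyone wants to sanity-check the exported PNGs before dropping them into a project, a quick count of the alpha channel is enough to confirm the green really went transparent. A throwaway helper for that (my own addition, not part of the script above):

```python
from PIL import Image
import numpy as np

def alpha_stats(path):
    """Count fully transparent, fully opaque, and partial pixels in an RGBA file."""
    a = np.array(Image.open(path).convert("RGBA"))[:, :, 3]
    return {
        "transparent": int((a == 0).sum()),
        "opaque": int((a == 255).sum()),
        "partial": int(((a > 0) & (a < 255)).sum()),
        "total": int(a.size),
    }
```

If `transparent` is zero for an asset, the background mask never fired and that file still carries the green screen.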

hmmmmm.

:man_facepalming:

Yes, tell us more about “cutting” on the transparency thread… :man_facepalming:

Is that what I was doing, bleat?

Or are you having another episode where you dress basic technical documentation or reference material up as a reason to attack people again, citing pomp?