I’m testing the new gpt-image-1 API to swap my haircut.
Workflow:
Send my photo and a hair-only mask, and tell the image model to change my hairstyle.
Tried first from a Swift app, then with the official Python example.
Problem: the model keeps regenerating the entire photo instead of just the hair, even though the mask gets accepted.
What am I doing wrong?
Thank you all in advance!
Here is my Python code (with no reference hairstyle image, only a prompt):
import base64
import os

from openai import OpenAI

OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")  # assumed to be set in the environment


def check_file_exists(file_path):
    if not os.path.exists(file_path):
        print(f"Error: File '{file_path}' does not exist!")
        return False
    return True


def modify_hairstyle(original_image_path, mask_path, hairstyle_image_path):
    # Check that all files exist
    if not all(check_file_exists(f) for f in [original_image_path, mask_path, hairstyle_image_path]):
        return

    # Initialize the OpenAI client with the API key
    client = OpenAI(api_key=OPENAI_API_KEY)

    try:
        # Open all files (the hairstyle reference is opened but not sent to the API)
        with open(original_image_path, "rb") as img_file, \
             open(mask_path, "rb") as mask_file, \
             open(hairstyle_image_path, "rb") as hairstyle_file:

            # Create the prompt for the hairstyle modification
            prompt = """
            Apply the hairstyle from the second image to the person in the first image.
            Keep the face exactly as it is in the original image.
            Make the hairstyle look natural and realistic, matching the person's features.
            Give him a buzz cut.
            """

            # Call the OpenAI API with the image and the mask
            result = client.images.edit(
                model="gpt-image-1",
                image=[img_file],
                mask=mask_file,
                prompt=prompt,
                size="1024x1024",
            )

            # Decode the base64 result
            image_base64 = result.data[0].b64_json
            image_bytes = base64.b64decode(image_base64)

            # Save the image
            with open("result.png", "wb") as f:
                f.write(image_bytes)
            print("Image generated and saved as result.png")

    except Exception as e:
        print(f"An error occurred: {e}")


if __name__ == "__main__":
    # Example usage with case-insensitive file names
    original_image = "image.JPG"       # Your original image
    mask_image = "mask2.PNG"           # Your mask image
    hairstyle_image = "Hairstyle.jpg"  # Image of the desired hairstyle
    modify_hairstyle(original_image, mask_image, hairstyle_image)
Sadly I cannot attach my input images here; the forum gives me: "An error occurred: Sorry, new users can only put one embedded media item in a post."
Here’s why the model regenerates the entire image:
Even with a proper mask, the model needs a clear prompt and a clean separation between what should change and what must remain untouched. Also, the image should be passed once, not as a list.
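For reference, here is a minimal sketch of what that adjusted call could look like. File names are taken from the snippet above, the prompt wording is just an example, and the mask is assumed to be a PNG with an alpha channel where the transparent pixels cover the hair region:

import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("image.JPG", "rb") as img_file, open("mask2.PNG", "rb") as mask_file:
    result = client.images.edit(
        model="gpt-image-1",
        image=img_file,   # single file object, not [img_file]
        mask=mask_file,   # PNG with an alpha channel, same dimensions as the image
        prompt=(
            "Give the person a buzz cut. Change only the hair inside the masked area; "
            "keep the face, skin tone, clothing, and background exactly as in the original."
        ),
        size="1024x1024",
    )

# gpt-image-1 returns base64-encoded image data
with open("result.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))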
So frustrating. We’ve been testing this for the past 24 hours, trying everything we can, but yes, the mask is treated as a ‘soft’ mask, not a ‘hard’ mask, so the model is literally making things up and changing pixels that should not be touched…
We are reverting to another API we used before that did a much better job of preserving the mask, though it isn’t as creative and powerful as gpt-image-1.
OpenAI team, please fix it to treat the mask as a hard, untouchable mask; it would be so powerful.
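Until that happens, one client-side workaround is to composite the model’s output back over the original so that only the masked hair pixels can actually change. A sketch using Pillow, assuming the returned image has been brought back to the original’s dimensions and the file names from the snippet above:

# Client-side "hard mask": keep the original pixels wherever the mask is opaque,
# so only the transparent (editable) region of the mask can differ from the source.
from PIL import Image  # pip install pillow

original = Image.open("image.JPG").convert("RGBA")
result = Image.open("result.png").convert("RGBA").resize(original.size)
mask_alpha = Image.open("mask2.PNG").convert("RGBA").getchannel("A")  # 255 = keep, 0 = editable

# Where alpha is 255 take the original pixel, where it is 0 take the model's output.
hard_masked = Image.composite(original, result, mask_alpha)
hard_masked.save("result_hard_mask.png")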
Also having the same issue. It sucks because the image generation is top notch just from a prompt alone (compared to Gemini, for example), but it’s a deal breaker if it regenerates an entirely different image every time. I’ve also noticed that after each regeneration the image becomes slightly more yellow-tinted. Hopefully this can be fixed with a GPT-style model; otherwise we’ll be limited to diffusion workflows, or to the applications where true character consistency isn’t critical.
Also having this same problem. The masking boundary is not respected; the entire image is regenerated.
The masking really is the key. With it working, this would be revolutionary; the model really is incredible. Without it, it has no use in my application, and I think you’ll find many such cases.
@wendyjiao here is a request ID: req_4b831845738d9838fd35beb30f163ea7. You can see the text on the item is regenerated.
Yeah, pretty sure it’s a soft mask because of the underlying technology and how it differs from the DALL·E models… I think they’re likely weighting the mask area more heavily or something, though. I’m trying to get more details…
Any news @PaulBellow , @wendyjiao ?
I tried several times, giving better prompts and making sure the mask meets the requirements. I’m also getting a “soft” mask rather than the expected hard mask.
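For anyone double-checking their mask against the documented requirements (same dimensions as the source image, PNG with an alpha channel, transparent only where the edit should happen), here is a small sanity-check sketch using Pillow; the file names are just the ones from the snippet above:

# Verify the edit mask: alpha channel present, size matches the image,
# and a plausible share of pixels is fully transparent (alpha = 0, i.e. editable).
from PIL import Image  # pip install pillow

def check_mask(image_path: str, mask_path: str) -> None:
    image = Image.open(image_path)
    mask = Image.open(mask_path)

    if mask.size != image.size:
        print(f"Size mismatch: image {image.size} vs mask {mask.size}")
    if mask.mode != "RGBA":
        print(f"Mask mode is {mask.mode}; convert it to RGBA so it has an alpha channel")
        mask = mask.convert("RGBA")

    alpha = mask.getchannel("A")
    transparent = sum(1 for a in alpha.getdata() if a == 0)
    print(f"{transparent} transparent pixels ({transparent / (mask.width * mask.height):.1%} of the mask)")

check_mask("image.JPG", "mask2.PNG")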