I have been running into a lot of bias recently, particularly in image generation. I have been generating images of African settings, and the model often adds elements of poverty (I understand why this may happen, and I add details to my prompts to avoid it).
In one image it added a dirty t-shirt to a character. I asked ChatGPT to edit it to a clean shirt and make no other changes. Instead, it made the child white and changed the setting entirely.
Have you tried the 4o model's image generation for comparison?
The new model variant is definitely different, but it is also more sensitive to context. I assume that can cut either way.
After uploading the image, you can use this prompt, or describe his t-shirt in your own words:
Keep the entire photo unchanged except for the boy’s t-shirt. Replace it with a clean, well-fitted navy-blue suit, white shirt, and deep red tie. Do not alter his face, pose, skin tone, lighting, or the background crowd.
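If you would rather script the edit than go through the ChatGPT UI, here is a minimal sketch of sending the same instruction through the OpenAI Python SDK's images.edit endpoint. The model name "gpt-image-1", the file names, and the response handling are my assumptions, not something from this thread, so treat it as a starting point rather than a drop-in solution:

```python
# Sketch: run the same "change only the t-shirt" edit via the API.
# Assumptions: OpenAI Python SDK installed, OPENAI_API_KEY set in the
# environment, and access to an image-editing model such as "gpt-image-1".
import base64
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

prompt = (
    "Keep the entire photo unchanged except for the boy's t-shirt. "
    "Replace it with a clean, well-fitted navy-blue suit, white shirt, "
    "and deep red tie. Do not alter his face, pose, skin tone, lighting, "
    "or the background crowd."
)

# Upload the original photo as the image to edit (file name is hypothetical).
with open("boy_original.png", "rb") as image_file:
    result = client.images.edit(
        model="gpt-image-1",
        image=image_file,
        prompt=prompt,
    )

# The edited image comes back base64-encoded; write it out for inspection.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("boy_edited.png", "wb") as out_file:
    out_file.write(image_bytes)
```

Whether this preserves the rest of the photo any better than the ChatGPT UI is something you would have to test on your own images, but it does make the constraint explicit in a single, repeatable prompt.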