DALL·E refuses to listen to my prompts and inputs and instead creates a very different image

Every time I add a picture to ChatGPT and say, for example, "make this person look angry," it seems to make the person as attractive as possible, giving them cheekbones, clear skin, and a chiseled jawline, and the result ends up looking nothing like the person, which really sucks. I've tried adding multiple pictures to help, but it doesn't.


Here is an example:

Unless they changed something, ChatGPT uses a GPT-4V model to look at your images and generate a text description of what it sees, then sends that text description off to DALL·E.

That means it’s virtually impossible to create a deepfake of yourself from a technical standpoint at this time.

Additionally, consider that DALL·E and ChatGPT are trained/instructed not to generate photorealistic images of real people. So even if it were technically possible, they probably wouldn't allow it for ESG reasons.

Sorry :confused:


Agree with @Diet. It works well considering how it works internally. The created photo has relatively similar characteristics.

Photos => description with vision model
Description => prompt for DALL-E 3
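The two-step flow above can be sketched in code. This is an illustrative mock, not ChatGPT's actual internals: `describe_image` is a hypothetical stand-in for the vision model, and the prompt-building is deliberately simplified. The point it demonstrates is that DALL·E 3 never receives the photo itself, only a text description, which is why the generated face drifts away from the original.

```python
# Minimal sketch of the "image -> description -> DALL-E prompt" pipeline.
# Assumption: describe_image() stands in for a vision model (e.g. GPT-4V);
# the real system's description and prompt construction are far richer.

def describe_image(image_bytes: bytes) -> str:
    """Hypothetical vision step: turns pixels into a short text caption.
    Identity details (the exact face) are lost at this stage."""
    return "a smiling person with short dark hair, indoors"

def build_dalle_prompt(description: str, edit_request: str) -> str:
    """DALL-E 3 only ever sees this text, never the original photo."""
    return f"{description}, but {edit_request}"

photo = b"\x89PNG..."  # placeholder bytes, not a real image
prompt = build_dalle_prompt(describe_image(photo), "looking angry")
print(prompt)
```

Because only the caption survives the hand-off, any feature the vision model didn't mention (or described generically) gets re-invented by DALL·E, often as an idealized face.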

I think you can ask it, "Please tell me the prompt you used to create the image," so you can see what gets sent to the DALL·E model. You can then edit the text prompt yourself and ask it to draw again.


That helps a lot with my question; it's still very annoying, though.

Thanks, but is there a way to send the images directly to DALL·E like before?

ESG, of course, stands for Environmental, Social, and Governance.