Well, I don't work at OpenAI, and I have only been using this system for about a week (or even less).
And it's not as precise as Photoshop.
But in short, you can (within the same session) take a picture as a starting point.
Ask for its generation ID, and for the next image, set that as the reference image ID.
Then you can, in plain human language, alter the first image by describing what you want done differently.
The original image's style, approach, composition, seed, design, atmosphere, and so on are captured by the generation ID and will be reused as much as possible.
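As a rough sketch, the request behind the scenes looks something like the JSON below. This is an assumption based on observed ChatGPT behavior, not a documented API: the field names (`prompt`, `referenced_image_ids`) and the placeholder ID are illustrative only and may change at any time.

```json
{
  "prompt": "Same scene as before, but the two women are standing on grass",
  "size": "1024x1024",
  "n": 1,
  "referenced_image_ids": ["gen_id_of_the_first_image"]
}
```

In practice you never write this yourself: you ask ChatGPT "what is the generation ID of that image?", then say "using that image as reference, change X", and it fills in the reference for you.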
Example I

This was the base image; I wanted two women instead of a man and a woman.

The new image was based on the first one, but I wanted them standing on grass.

Example II

This was the base image, but I wanted two Furbies.

The setting, the Furbies, and the look and feel are inherited, but I wanted one of them wearing a mask.

Again, it is "sort of" the same image, and now I wanted to see Gremlin ears.
