Seeds gave me the ability to create multi-part, long-term projects that required character consistency. This change also impacts the symbiotic relationship between ChatGPT and DALL-E. I don’t want to go to Midjourney, but this nerf is really messed up.
It rewrote the description before, which added noise; that change doesn’t appear to be new, only implemented differently, imo. The issue for me is that this change completely nerfed character continuity. The whole point of DALL-E is to generate the art in your head in a format other people can see. It will work fine if you want to put a bulldog on a surfboard just screwing around, but for my purposes it belongs in the dust heap.
It’s not effective for my use. The changes have nerfed my process to the point where I’m considering moving to Midjourney. Until the update, I was completely sold on the ChatGPT/DALL-E interoperability; as it functions now, it’s not worth two cents for my purposes.
Do you use the
The image generation ID works when your second image’s description refers to the first one via
It only works in the same conversation.
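For anyone unfamiliar, the idea is roughly this: the tool call for the second image carries the generation ID returned by the first. A minimal sketch of what such a request payload might look like (the `referenced_image_ids` field is the one named in this thread; the `prompt` field and the overall shape are illustrative assumptions, and the ID is a placeholder, not a real value):

```json
{
  "prompt": "Same knight character as the previous image, now standing on a cliff at sunset",
  "referenced_image_ids": ["<gen ID returned with the first image>"]
}
```

Since the IDs are only resolvable within the conversation that produced them, this gives continuity inside one session but not across projects, which is the limitation being discussed here.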
I’m aware it’s session-limited now. That’s not the issue. The issue is that you can’t use the image generation ID the same way you could use seeds for cross-project continuity. Does that make sense?
The good news is that DALL-E will support “image reference” (which is different from referenced_image_ids) in the next version.
See DALL-E 3 AMA:
I’ll test it, but I’ve tried using the refs ID and multiple workarounds, with horrendous results. I hope they fix it.
Yes, I had wonderful results prior to the changes. Using the refs method did not improve results. Even with explicit instructions to change only x from the initial prompt and use the reference image, the images look completely different now. When I ask the AI why, it isn’t sure, and it sometimes tells me to use the seed method, which no longer works. It’s frustrating.