Is there any plan to bring back an image-only generation process instead of the current text-integrated model? I preferred when DALL·E would generate multiple versions of an image, allowing me to select and refine aspects from each to better match my vision. The current system lacks that flexibility, making it significantly less useful for creative iteration and much less appealing to use.
Hi @jshea8, and welcome to the forum.
As far as I know, the multiple-version output was mainly there to help train the model, and perhaps also to let people pick the version they preferred.
But it could certainly be brought back as a feature. That's an interesting suggestion.