The new image generation is certainly great, but it has several big limitations.
- Generation is slower (about a minute and a half for the new process, versus less than 10 seconds for DALL·E).
- You can only run one generation at a time. I don't usually start several, but sometimes when I see that the result is not what I want, I start a new generation… Still, sometimes I would let the failed generation finish, just to see what ChatGPT wanted to draw for me.
- The new image generation seems to be an expert at reproducing things (but only things it was trained on; I just ran into a problem trying to reproduce an… axe). However, it seems unable to imagine. As a result, if you don't give it a picture or provide enough details, the image will look a little empty.
It seems that the new process takes a lot of resources (I assume so, given the new limitations and the time needed to generate a single image).
So I wonder whether the two processes aren't aimed at different audiences. And if so, could we still use DALL·E in our custom GPTs?