I just subscribed and spent the last 4 hours trying to get DALL-E to produce the image I wanted

And after an hour or two it tells me I've reached my limit, but the only reason I'm going so hard is that it isn't giving me what I'm asking for.

For some reason DALL-E struggles A LOT with "full body" images. It's like it can't review its work against the details of your request. I will tell it to parrot back its understanding of what I want, and it will paraphrase it even better than I did, so I KNOW it understood the assignment, but it still doesn't produce it.

Other than pointing out this frustration with an otherwise phenomenally creative and powerful product, I have one question. Will DALL-E ever get the ability to produce an image, hear your requests for changes, and tweak the SAME image rather than producing a whole new one?

This is by far the worst part of using DALL-E. Being able to revise and edit the same image over and over is VITAL for precision image work and artistry. It's like ordering a salad and telling the server you want more croutons, and she brings you back a salad with more croutons, but it's a completely different salad. And every tweak you ask for is applied to yet another new salad.

The ChatGPT AI cannot see the contents of the image that was actually produced by the separate DALL-E 3 image creator. It only sends words, and you see the results.

ChatGPT does have the ability to send back a "previous image ID" when creating another image, so a new image can continue in the same style. That ID doesn't let the model "see" the earlier image either; it just starts from the same seed and generation parameters, so the new image lands closer to what was produced before.

That does not mean true refinement of the same image, though: writing the instructions is still up to the AI, which has the same chance of failing to produce what was described, even in the refined version.
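To make that concrete, here is a purely illustrative sketch of the handoff. The real internal tool interface between ChatGPT and DALL-E 3 is not public, so every field name below ("prompt", "size", "referenced_image_ids") and the ID value are assumptions for illustration only; the point is that a follow-up request is a brand-new generation that merely reuses the earlier seed.

```python
# Purely illustrative: a guess at the kind of payload ChatGPT hands to the
# DALL-E 3 tool. The real internal interface is not public; the field names
# and the ID below are assumptions.
first_call = {
    "prompt": "Full body portrait of a knight in silver armor, studio lighting",
    "size": "1024x1024",
}
# ...the tool returns an image plus a generation ID, e.g. "gen_abc123"...

follow_up_call = {
    # ChatGPT rewrites your feedback into a NEW prompt; it never edits pixels.
    "prompt": "Full body portrait of a knight in silver armor, studio lighting, "
              "now holding a red banner",
    "size": "1024x1024",
    # Passing the previous generation ID reuses its seed/parameters, so the new
    # image tends to stay close in style, but it is still a fresh generation.
    "referenced_image_ids": ["gen_abc123"],
}
```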


I see. So is it possible to communicate with DALL-E without ChatGPT being a middlebot, so to speak?

DALL-E 2 can be used with a direct prompt, either through a separate prepaid credit system at labs.openai.com or by API call.

That gives you only the "prompt" input, with no ability to use or change the other internal parameters that add variety to the images, and no memory.
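For reference, a minimal sketch of calling DALL-E 2 directly through the API with the current `openai` Python package. The prompt text and size here are just placeholders; the key point is that your prompt is sent as-is, with no rewriting and no memory of previous images.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# DALL-E 2 takes your prompt verbatim: no rewriting AI, no memory, no seed control.
result = client.images.generate(
    model="dall-e-2",
    prompt="A full body photo of a knight in silver armor, studio lighting",
    n=1,
    size="1024x1024",
)

print(result.data[0].url)  # temporary URL of the generated image
```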

DALL-E 3, OpenAI decided to put inside ChatGPT Plus rather than on a separate site just for paying for images. ChatGPT has a long list of instructions for rewriting what you tell it into the prompt it actually sends, so it takes jailbreak-like convincing to get the AI to pass your words directly to the image creator. It does have the "previous image ID" tool.

DALL-E 3 by API, where you pay per image, also places a similar prompt-rewriting AI in front of the image maker. The rewriter can currently be talked into passing your exact language, but in many cases that may not be as high quality as the rewritten enhancement (where the AI fills in more imagery detail for you). There is still no memory or seed to make close variations on what was produced before.
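A minimal sketch of the DALL-E 3 API path, again with placeholder prompt text. The response exposes a `revised_prompt` field, so you can at least see how much the rewriter changed your wording; OpenAI's own prompting notes suggest that prefixing your prompt with an instruction along the lines of "do not add any detail, use it as-is" discourages rewriting, though how well that holds is not guaranteed.

```python
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="A full body photo of a knight in silver armor, studio lighting",
    size="1024x1024",
    quality="standard",
    n=1,  # dall-e-3 accepts only one image per request
)

# The API returns the prompt the rewriter actually used for generation.
print(result.data[0].revised_prompt)
print(result.data[0].url)
```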
