Feedback on Rendering Accuracy and Efficiency in Image Generation

Dear OpenAI Team,

I’d like to provide feedback on my experience with the DALL-E image generation process through ChatGPT, specifically regarding repeated issues with rendering accuracy that led to unnecessary delays.

The main issue I encountered was that the AI consistently failed to remove certain elements from images despite clear instructions. For example, when I requested an image without wing-like or angelic figures, the AI continued to include them, even after multiple corrections. Similarly, when I asked for an image of a black hole with no stars near it, the AI repeatedly generated images containing stars.

These mistakes forced me to request regeneration several times, which not only slowed my progress but also tied up system resources—resources that could have been available to other users had the image been accurate from the start.

To improve the experience, I suggest:

  1. Enhancing the model’s ability to accurately follow specific user instructions (e.g., explicitly excluding certain visual elements).
  2. Improving the system’s responsiveness to corrections, so it doesn’t generate repeated errors that frustrate users and overload the system.
  3. Ensuring that initial renders more closely match detailed user prompts to avoid wasting time and resources on multiple iterations.

I believe these improvements would help make the tool more efficient, freeing up capacity for other users and providing a smoother experience overall.

Thank you for your consideration.

Best regards,

P.S. From my experience, naming an element in order to get rid of it doesn’t work; it’s best to describe the scene without using the words for what you don’t want. Prompts phrased like these worked:

  1. “Black-hole-esque spiral on a black background”
  2. “Holy beings floating in the air”