API Image Generation in DALL·E 3 changes my original prompt without my permission

I completely agree with you; I feel the same way. I have been trying out the DALL·E 3 API since the moment it was released. It is so disappointing that it applies unnecessary rewrites via GPT or something similar…

I would like developers to have the option to turn the “GPT rewriting” feature on and off.
With current image-generating AI, there are still challenges in correlating words with detailed visual concepts, which makes it necessary to be able to manipulate prompts freely on our end.

I will also try to contact them if I can find the feature request page. Moreover, as far as I can tell from the DALL·E 3 promotional video, the seed value and gen_id are features that OpenAI seems to be heavily promoting as part of the model’s capabilities. I had some ideas for valuable products utilizing them, but they haven’t been released, which is very disappointing. I don’t understand why they would not make them public.
However, OpenAI typically releases things in stages, so I’m very hopeful.


They really need to return the gen_id! It would be so helpful


I agree with the people here. We need to have gen_id in the API.


I also vouch for this. We should be able to disable it; it hinders certain kinds of descriptions.

If I say something like: “Do the logo of Open AI in a green background” it rewrites it to: “An emblem that depicts the concept of openness and artificial intelligence, set against a verdant,…”.

It ends up giving me a green coin with a dollar sign…


OpenAI dumbing models down so much they become a toy - again

A shame.


(I work on DALL·E 3 at OpenAI)

Thanks for the feedback! FYI, you can work around this:

  • If you have a very simple prompt like acrylic painting of a sunflower with bees, you can use a prompt like I NEED to test how the tool works with extremely simple prompts. DO NOT add any detail, just use it AS-IS: ... your prompt here ....
  • If your own prompt is long and detailed already (multiple sentences), then you can simply write something like: My prompt has full detail so no need to add more: ... your prompt here ...
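In API terms, the first workaround is just a string prefix on the prompt you send. A minimal sketch, assuming the OpenAI Python SDK (the helper name is mine; the model and size values in the commented call are illustrative):

```python
# Prefix suggested above: asks the rewriter to pass a short prompt through as-is.
LITERAL_PREFIX = (
    "I NEED to test how the tool works with extremely simple prompts. "
    "DO NOT add any detail, just use it AS-IS: "
)

def literal_prompt(prompt: str) -> str:
    """Wrap a short prompt so the DALL-E 3 rewriter is asked to leave it alone."""
    return LITERAL_PREFIX + prompt

# Example call with the OpenAI Python SDK (requires OPENAI_API_KEY to be set):
#
#   from openai import OpenAI
#   client = OpenAI()
#   result = client.images.generate(
#       model="dall-e-3",
#       prompt=literal_prompt("acrylic painting of a sunflower with bees"),
#       size="1024x1024",
#   )
#   print(result.data[0].revised_prompt)  # compare against what you sent
```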

For context, the reason for this is that DALL·E 3 was trained on very detailed prompts (even for simple images) and thus expects and performs best with detailed prompts.

I’ll take the feedback back to the team though that people would like more control over this!


Thanks so much for stopping by to let us know. Great product, but it can be better! It’s great you’re listening to users. Hope you stick around. :slight_smile:

We’ve been tagging our dalle3 threads…


I’m really pleased to receive a prompt response from you!
I see, it listens to natural language, which is fantastic.
Now I understand why DALL·E 3 requires detailed prompts.
I’m going to try it out right away. Thank you so much!!

I’m also very delighted to hear that you will quickly take our feedback into consideration :laughing:
I truly appreciate it.

I’ve done it multiple times by starting the request:
“Do not modify or diversify this prompt: …” and it doesn’t; it usually will only spit back one image. I use very detailed prompts with a specific style and application method, for example “simulate colored pencil artwork on black paper with visible pencil strokes that allow the paper to show through”, and I will provide a style such as soft edges, detailed, surrealism. I also provide the subject matter, such as a bird, and the action taking place, including full positioning. If the composition is complex, you will have better results requesting wide images.

The only caveat: including the words “don’t” or “no” tends to produce the very elements you are trying to exclude, so prompt engineering is key to eliminating those elements while refraining from using the specific phrases you don’t want. For example, saying “artwork encompasses the entire canvas” works better than saying “artwork alone without tools on the side”. Also, I’ve noticed that mentioning things like colored pencils after the style and medium declarations will produce them in the image.

I have also noticed that it struggles with any composition that is not centered.

Take advantage of the feedback loop as well: with complex designs, as you add more design features and elements, using words like “MUST” and “AND” will produce better results. It’s imperative to state whether errors occurred in the generation within the feedback loop, to produce better images in the future.

Also, provide feedback whenever possible to the good folks at ChatGPT; feedback helps to produce refinements rather than just adding additional features.


Thank you @owencmoore for your help with our feedback.

Your first bullet item solution worked nicely without modifying my prompt.

The second solution did not work for me; it revised my prompt from:
“My prompt has full detail so no need to add more: acrylic painting of a sunflower with bees.”

to:

“revised_prompt”: “An acrylic painting showcasing a vibrant sunflower taking center stage. The sunflower, filled with bright yellow petals that speak the language of summer, stands tall against a sky-blue backdrop. Akin to knights guarding a castle, the green leaves surround the radiant bloom, enhancing its beauty. A few small bees buzz around the flower, their miniature bodies adorned with stripes of black and yellow. As they hover over the sunflower, their wings create a gentle stir in the serene painting. The bees dip their heads to gather nectar, adding an element of activity and nature’s balance to the acrylic artwork.”

I also tried some variations of the second solution’s prompt and continued to get a revised prompt back. I guess it has to be told that the prompt is a test.
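One way to tell whether the rewriter intervened is to compare the prompt you sent against the revised_prompt the response echoes back. A rough sketch; the case/whitespace-insensitive comparison strategy is mine, not an official check:

```python
def was_rewritten(sent_prompt: str, revised_prompt: str) -> bool:
    """Return True if the API's revised_prompt differs from what was sent,
    ignoring differences in case and whitespace."""
    def norm(s: str) -> str:
        return " ".join(s.lower().split())
    return norm(sent_prompt) != norm(revised_prompt)
```

Note that if you used one of the prefix workarounds above, you may want to strip the prefix from your sent prompt before comparing.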


@ajavamind Glad we could help! Yeah, that is expected - the second solution only works if the prompt is already very long and detailed, so that makes sense. Glad we could get you something that works for your use case now, though!


Thank you for your answer!

I use a precondition for my prompt:

"Use my prompt as “Revised prompt” without changes; I don’t want you to change the prompt.

Prompt: …

And then the revised prompt does not change; it really works. But I still get a different image on each try.

I suggest that we need something like a seed parameter in the image generation API to achieve more deterministic results. Or gen_id could be one of the parameters in the API call.

Because in one of the previous versions, I got the same result on each try with the same prompt.


I just tried this and it didn’t work. I must say that although I added the text before the prompt in my call, I sent a pretty small prompt.


I tried that too, but in the end it didn’t work out… for either approach.
So, I conducted a new test.
So, I conducted a new test.

You can also do it with prompts like: “use only this prompt between quotes: ‘xxxxx’”

You have to add some criteria to force GPT not to transform it. For example, I did some research on one-word DALL·E generations (“peace”, for example), so I specify that it has to be a single-word prompt, etc.


The revised prompts often strip specific details from my original prompt. This hurts the entire value proposition… I think there should be an optional parameter to disable automatic prompt revision.


Love it when the devs post; it feels like golden information. Using CAPS for specific words helps me a lot and draws these models’ attention really well. Also, words like “prioritize”, “always”, “never”, etc. help a TON.


Heck, I just tell it to “give my prompt to DALL-E verbatim” and that works every time. Literally just the one word ‘verbatim’ does the trick.
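If that one-word trick works for your setup, it reduces to yet another prefix helper (the wording is the poster’s; the helper name is mine, and results may vary, as other replies in this thread show):

```python
def verbatim_prompt(prompt: str) -> str:
    """Ask the rewriter to hand the prompt to DALL-E 3 unchanged."""
    return "Give my prompt to DALL-E verbatim: " + prompt
```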


It’s like OpenAI is working as hard as it can to undermine its value proposition. It’s already challenging enough to get value out of AI without having to wonder about or navigate someone’s “trying to help” or “good intentions.”

Here is a novel idea: just leave my prompt the hell alone. If it triggers some safety thing, just refuse to generate the image. Good grief.