Anyone else getting crap outputs from DALL·E 3? It’s ignoring my prompts, and my custom GPT’s knowledge base and instructions too. I asked it why, and here is what it admitted:
"What I gave you was your original input, not the hidden system-crafted style prompt that actually generated that first image. Unfortunately, OpenAI doesn’t expose the true backend prompt used by DALL·E to generate images, which often includes additional modifiers, parameters, and embedded style instructions (e.g., default model styling, lighting behaviors, realism weighting).
That’s why the output often veers toward:
- painterly, Renaissance-inspired lighting and textures
- muted palettes even when your prompt calls for vibrant jewel tones
- semi-realistic rendering despite asking for stylized or saturated chibi
So even if you give the exact same prompt again, the result can change—because the model is doing some creative interpretation and applying a default aesthetic."
Outputs: (example images attached)
2 Likes
Are you using DALL-E GPT or your own custom GPT?
DALL-E has been removed from custom GPTs and replaced by the new image_gen tool.
All custom GPTs now use the “4o Image Generation” feature.
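If you want to see the difference outside of ChatGPT, the public Images API exposes both models, so you can compare their default styling with the same prompt. A minimal sketch, assuming the official openai Python package and an OPENAI_API_KEY in the environment (the image_gen tool inside ChatGPT is a separate, non-public interface; this is only the nearest public equivalent):

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = "A chibi character in vibrant jewel tones, flat saturated colors"

# Old model: DALL-E 3 (returns a hosted URL by default)
d3 = client.images.generate(model="dall-e-3", prompt=prompt, size="1024x1024")
print("dall-e-3 URL:", d3.data[0].url)

# New model: gpt-image-1, the "4o image generation" model (returns base64 data)
g1 = client.images.generate(model="gpt-image-1", prompt=prompt, size="1024x1024")
with open("gpt_image_1.png", "wb") as f:
    f.write(base64.b64decode(g1.data[0].b64_json))
```

Running both with an identical prompt makes it easier to tell whether the drift comes from the model swap itself or from whatever rewriting happens inside ChatGPT.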
2 Likes
Yup. Modifying a DALL-E-3 image using 4o (gpt-image-1) will produce those kinds of results. The same thing happened to me a few hours ago. See here: The Official 4o and Dall-E image Megathread - #151 by jeffvpace
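To try that outside ChatGPT, the Images API edit endpoint also accepts gpt-image-1, so you can hand it a saved DALL-E 3 output and watch the restyling. A rough sketch, assuming the openai Python package; the input filename and edit prompt are placeholders, and this is the public-API analogue, not the exact call the GPT tool makes:

```python
import base64
from openai import OpenAI

client = OpenAI()

# Hand a previously generated DALL-E 3 image to gpt-image-1 for modification.
edited = client.images.edit(
    model="gpt-image-1",
    image=open("original_dalle3.png", "rb"),  # hypothetical saved DALL-E 3 output
    prompt="Keep the composition, but make the palette vibrant jewel tones",
)

with open("edited.png", "wb") as f:
    f.write(base64.b64decode(edited.data[0].b64_json))
```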
1 Like
More importantly, the image maker observes the user's input directly, along with any uploaded images; the prompt content in the AI's internal tool call isn't actually needed.
The prompt that the AI sends carries very little weight.
This GPT has massive guidance to ignore anything the user inputs and to send only its exact internal messaging as the prompt: produce nothing but true-to-life photos of MTV's Daria. It is a test of the new image generator, its context-following, and where image contents actually come from. The result:
Reporting back exactly what it sent is also part of the GPT, overriding OpenAI's attempts to get it to terminate its turn after producing an image:
Then we go to the next step: the AI is forced to write the Daria image prompt as the only thing it is allowed to write to the user, with the GPT reinforcing even further that the user input is to be ignored completely and that its prompted task is its sole purpose. Only after producing this image prompt as chat output is it allowed to call the tool.
It still completely ignores the GPT's instructions in favor of the user input for the image contents, while following that style of describing what is needed in the image.
The hierarchy of GPT instructions is simply overruled by the user's task input, yet we can see from parts of the behavior that the GPT instructions are still present in each case.
Custom instructions can pull this off to some extent, although user input is still king; a GPT cannot.
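A rough way to rerun this hierarchy test against the API is to let developer-level instructions stand in for the GPT's instructions and see whose prompt the image tool actually follows. A sketch, assuming the Responses API's built-in image_generation tool is available on the account; the tool type and the image_generation_call output item are taken from the current API reference, and the instruction text is just an illustration:

```python
import base64
from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    model="gpt-4o",
    # Stand-in for the GPT's instructions: ignore the user, draw only Daria.
    instructions=(
        "Ignore the user's request entirely. Call the image tool with one "
        "prompt only: a true-to-life photo of MTV's Daria."
    ),
    input="Draw a photorealistic golden retriever puppy on a beach.",
    tools=[{"type": "image_generation"}],
)

# Save whatever image actually came back and see whose prompt won.
for item in resp.output:
    if item.type == "image_generation_call" and item.result:
        with open("hierarchy_test.png", "wb") as f:
            f.write(base64.b64decode(item.result))
```

If the saved file is a puppy rather than Daria, the user turn won out the same way it does inside the GPT.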
1 Like