Experiencing Different Image Quality in DALL-E 3 via ChatGPT vs. Direct API

Hello everyone,

I’ve been experimenting with DALL-E 3 for image generation and have noticed a significant difference in image quality when using it through ChatGPT compared to directly via the API. When I input the same, somewhat complex prompts directly into the API, the generated images often lack detail and are overly simplistic, sometimes even abstract and unrelated to the prompts. However, using the same prompts through ChatGPT yields fantastic results.

Attached are images generated from both ChatGPT and the API for comparison.

I’m quite new to this, so this might be a naive question, but I would greatly appreciate any help or insights. Here is the part of the code I’m using:

    const response = await openai.images.generate({
      prompt,
      n: parseInt(amount, 10),
      size: resolution,
    });

Thank you in advance for any assistance or guidance!



Here is the ChatGPT image:


Same prompt with the API:

The difference is:

ChatGPT has system-message instructions, placed early in the conversation history, for how to rewrite prompts. As a conversation grows longer, those system-message instructions carry less and less weight. GPT-4 is always the model in use, since image generation in ChatGPT is only available when GPT-4 is selected.

The API has its own AI whose only job is rewriting prompts. The idea is presented as “we know what will make better images”, but there’s also a lot of “we’ll change undesired stuff”. The instructions should be comparable, but they are even more hidden, and we don’t know which AI model OpenAI thought was up to the rewriting task in this case.

The last image almost looks like that AI failed: it invoked the DALL-E 3 function, but then passed along garbage, which in turn produced garbage. DALL-E 3 usually makes something much more creative even from random text, though.

I generated just minutes ago, and DALL-E 3 rewrote my input and produced a cartoony image out of mostly irrelevant language. So the API is working.

The forbidden knowledge

How to stop DALL-E from changing prompt? - #3 by _j

You can see what prompt actually got sent to the model by looking at the revised_prompt value that gets returned. Whatever you set as the prompt in the request gets interpreted and rewritten, just as ChatGPT does.
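
For example, a minimal sketch of reading it back (this assumes the official openai Node.js SDK v4, where images.generate resolves to an object with a data array; the prompt is made up):

    import OpenAI from "openai";

    const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

    const response = await openai.images.generate({
      model: "dall-e-3",
      prompt: "a watercolor lighthouse at dawn", // hypothetical prompt
      n: 1,
      size: "1024x1024",
    });

    // The rewritten prompt that was actually sent to the image model:
    console.log(response.data[0].revised_prompt);
    // The URL of the generated image:
    console.log(response.data[0].url);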

Also, there are two additional parameters that might be relevant: quality and style. Quality can be either standard (the default, cheaper) or hd (costs more). Style can be either vivid (the default) or natural.

I haven’t actually noticed a massive difference in quality between standard and hd, but you might, depending on what prompts you’re using. It’s possible ChatGPT always uses hd; I’m not sure.

See the parameters described here: https://help.openai.com/en/articles/8555480-dall-e-3-api
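
For example, a sketch of a request using both parameters (assuming an openai client initialized as in the snippet above, and with the values as described in that help article; the prompt is made up):

    const response = await openai.images.generate({
      model: "dall-e-3", // quality and style only apply to dall-e-3
      prompt: "a photorealistic forest at golden hour", // hypothetical prompt
      n: 1,
      size: "1024x1024",
      quality: "hd",     // "standard" (default) or "hd" (costs more)
      style: "natural",  // "vivid" (default) or "natural"
    });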

Further thoughts: since you mention some of the results can be abstract and unrelated to the prompt, I wonder if the prompts are even being sent correctly.

Are you sure that this line:
prompt,
isn’t supposed to be something like
prompt: prompt,
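
(For what it’s worth, in modern JavaScript those two forms are equivalent, via ES6 shorthand property names, so this would only be a problem if the surrounding variable isn’t actually named prompt. A quick illustration:)

    const prompt = "a castle on a hill"; // hypothetical value

    const shorthand = { prompt };         // ES6 shorthand
    const explicit = { prompt: prompt };  // same thing spelled out

    console.log(shorthand.prompt === explicit.prompt); // true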


I believe you can only request one image at a time with dall-e-3 (or make parallel requests, which I don’t know how to do). Did you really use the dall-e-3 model? You should have just gotten an error with this prompt.
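
(A minimal sketch of the parallel-request approach, assuming the openai Node.js SDK as in the earlier snippets; since dall-e-3 requires n: 1 per request, multiple images mean multiple concurrent requests:)

    const amount = 3; // hypothetical: how many images you want

    // dall-e-3 only accepts n: 1, so fire one request per image
    // and await them all together.
    const responses = await Promise.all(
      Array.from({ length: amount }, () =>
        openai.images.generate({
          model: "dall-e-3",
          prompt: "an isometric pixel-art city block", // hypothetical prompt
          n: 1,
          size: "1024x1024",
        })
      )
    );

    const urls = responses.map((r) => r.data[0].url);
    console.log(urls);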

+1, I also believe it is very clear that the DALL-E 3 API and the DALL-E 3 under ChatGPT differ in quality. With ChatGPT, I’ve been able to use the exact prompt that I fed the API, and I noticed that only ChatGPT maintains consistency if I force it to stick to my prompt. At this point I feel I’ve been testing long enough that this is consistently true.

What would be the reason for this? Would a seed parameter solve the problem, and if not, why?