What is the difference between the prompt processing behind DALL-E 3 API and DALL-E 3 with ChatGPT?

I’ve worked very hard on prompt engineering to lock down the style of my images.
Looking at the system prompts and rules of DALL-E 3, I created a special long prompt that complies with them and generated images based on it. I was thrilled to finally succeed!

However, when I send the same prompt to the DALL-E 3 API, the style gets completely broken for some reason.
I’ve spent about $64 on the DALL-E 3 API alone, and as far as I can tell, the prompt processing for the DALL-E 3 API seems to perform poorly.
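
For reference, here is a minimal sketch of how the prompt goes to the API (using the official openai Python SDK). The prompt string is a placeholder for my actual prompt, and the `quality` and `style` values shown are only the documented API defaults, which may or may not match whatever settings ChatGPT uses internally.

```python
# Minimal sketch: sending a long style prompt to the DALL-E 3 API
# with the documented defaults made explicit. The prompt is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",
    prompt="<your long, carefully engineered style prompt>",
    size="1024x1024",
    quality="standard",  # the other documented value is "hd"
    style="vivid",       # the other documented value is "natural"
    n=1,
)

print(response.data[0].url)
```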

I have no idea if this is due to different image generation models being used or different prompt processing systems.
(Is it a difference between GPT-4 and GPT-4-Turbo?)
The differences in the prompt processing behind the scenes are too significant, and I am very dissatisfied.

[Supplement]
In both cases I’ve used plain style specifications such as “anime style with delicate lines” and “comic style”, without using any artist or studio names.

I’m encountering a very similar issue. Sending the exact same prompt to DALLE 3 in the ChatGPT app produces a very nice result, but sending the prompt to the DALLE 3 API returns worse images with unwanted artifacts (e.g. text). I compared revised_prompts from both and cannot find any notable differences, so I’m wondering if a different image generation model is used? Or possibly a new worse model was rolled out to the API and ChatGPT app at different times?
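
In case anyone wants to reproduce the comparison: the API returns the rewritten prompt it actually used as `revised_prompt` on each result, so it can be logged and diffed against the prompt text ChatGPT shows for the same request. A minimal sketch, assuming the official Python SDK; the prompt string and output file name are placeholders.

```python
# Sketch: log the revised_prompt the API actually used, so it can be
# diffed against the prompt text ChatGPT reports for the same request.
from openai import OpenAI

client = OpenAI()
prompt = "<the exact prompt that works well in ChatGPT>"

response = client.images.generate(model="dall-e-3", prompt=prompt, n=1)

print(response.data[0].url)
with open("api_revised_prompts.txt", "a", encoding="utf-8") as f:
    f.write(response.data[0].revised_prompt + "\n---\n")
```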

@owencmoore
Do you have a plan to resolve this?

Same experience. It almost feels like the API is pointing to DALL-E 2, not DALL-E 3. Same exact prompts, and the difference is night and day, even after 30+ tests.

Users and developers have different ways of solving problems. The API might be more capable, but it comes at the cost of being harder for an ordinary user to work with, so the user may have to find a workaround that works their way. I have the same problem when creating images, and I lack regular practice, so I asked myself whether there was a solution and created two custom GPTs: one produces an artistic description of an exemplary image, and the other is set to generate strictly as specified.


And now this forum has a discussion about how to control it to achieve the desired results.

I have not tried it yet, but how about doing a GPT-4 API call to actually generate the (detailed) DALL-E 3 prompt based on your description, and then sending that one to DALL-E 3?
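
Roughly something like the sketch below is what I mean, assuming the official Python SDK; the model choice and the system instruction are only illustrative.

```python
# Sketch of the two-step idea: GPT-4 expands a short description into a
# detailed DALL-E 3 prompt, which is then sent to the images endpoint.
from openai import OpenAI

client = OpenAI()

description = "<short description of the image you want>"

chat = client.chat.completions.create(
    model="gpt-4",  # illustrative; gpt-4-turbo would also work
    messages=[
        {
            "role": "system",
            "content": (
                "Expand the user's description into a single detailed "
                "DALL-E 3 prompt. Describe style, composition and lighting. "
                "Return only the prompt text."
            ),
        },
        {"role": "user", "content": description},
    ],
)
detailed_prompt = chat.choices[0].message.content

image = client.images.generate(model="dall-e-3", prompt=detailed_prompt, n=1)
print(image.data[0].url)
```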

That's how I came to this topic.

Sending a detailed, GPT-4-generated prompt to the API and the apps still yields different quality results.

DALL-E 3 in ChatGPT takes into context other things you have generated before to create better results; at least it has been doing that for a while now. Sometimes my prompts even refer to other or previous images, so you are not seeing the full context of what it generates in the prompt you can copy.

Might be related to what I just found: DALLE3 API poor quality even considering prompt revision - #4 by 1004wsh