OpenAI is making a huge mistake by deprecating DALL-E-3

OpenAI is scheduled to remove DALL-E-3 from the API on May 12, 2026:

On November 14th, 2025, we notified developers using DALL·E model snapshots of their deprecation and removal from the API on May 12, 2026.

Shutdown date | Model / system | Recommended replacement
2026-05-12    | dall-e-2       | gpt-image-1 or gpt-image-1-mini
2026-05-12    | dall-e-3       | gpt-image-1 or gpt-image-1-mini
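For anyone scripting against the API, the schedule above can be encoded as a small fallback helper so requests don't start failing on the shutdown date. This is a hypothetical sketch, not an SDK feature; the model names and date come straight from the table:

```python
from datetime import date

# Deprecation schedule from OpenAI's announcement (shutdown: 2026-05-12).
DEPRECATIONS = {
    "dall-e-2": (date(2026, 5, 12), "gpt-image-1"),
    "dall-e-3": (date(2026, 5, 12), "gpt-image-1"),
}

def pick_model(requested: str, today: date) -> str:
    """Return the requested model, or its announced replacement once shut down."""
    entry = DEPRECATIONS.get(requested)
    if entry is None:
        return requested
    shutdown, replacement = entry
    return replacement if today >= shutdown else requested
```

Before the shutdown date, `pick_model("dall-e-3", date(2026, 1, 1))` still returns `"dall-e-3"`; on or after it, the helper falls back to `"gpt-image-1"`.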

Here is the problem: While gpt-image-1 is a great model, it cannot do what DALL-E-3 does. Example:

A simple prompt:

It is daytime. The sun is very bright, the sky is blue, and white “pillow” clouds are floating high in the sky. A beautiful, colorful and majestic dragon is perched on a large boulder looking down into a lush green valley surrounded by a tall pine forest.

DALL-E-3

GPT-Image-1 (same prompt)

Which one do you like better?

Don’t get me wrong: GPT-Image-1 can create some truly amazing images and image edits. I’ve done it myself, and others have done it better. But prompting GPT-Image-1 requires a “skill set” that is difficult at best. Maybe this is what OpenAI wants: a token eater.

Maybe OpenAI can modify the GPT-Image-1 API to have a dall-e-3 mode.
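One reason a drop-in “dall-e-3 mode” is nontrivial: the two models don’t accept the same knobs in the Images API. As I understand it (worth verifying against the current API reference), dall-e-3 exposes a `style` parameter (“vivid”/“natural”) and “standard”/“hd” quality tiers, while the gpt-image family instead takes “low”/“medium”/“high” quality and has no `style` knob. A sketch of building per-model request kwargs under those assumptions:

```python
def image_request_kwargs(model: str, prompt: str) -> dict:
    """Build kwargs for client.images.generate(); parameter sets differ per model."""
    kwargs = {"model": model, "prompt": prompt, "n": 1, "size": "1024x1024"}
    if model == "dall-e-3":
        # dall-e-3-only knobs: style, and "hd" as the top quality tier.
        kwargs["style"] = "vivid"
        kwargs["quality"] = "hd"
    elif model.startswith("gpt-image"):
        # gpt-image models use low/medium/high quality and accept no style knob.
        kwargs["quality"] = "high"
    return kwargs
```

A compatibility “mode” would have to translate between these parameter sets, not just swap the model name.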

All I can say is that I’m done with burning tokens with GPT-Image-1 just to get an acceptable image.

2 Likes

DALL-E 3:

An “accurate forensic recreation” prompt written by GPT-5.1 (which dares not even state a gender). The image is from today’s product, after a wait of over a minute that makes you doubt anything will ever arrive.

Plus, DALL-E 3 can simply make the undescribed fully imagined and arcane.

However, either is squarely in “fascinating, but for what purpose?” territory, now that everybody has stunning mediocrity just a few words away.

3 Likes

Two of my DALL-E-3 images are now under consideration for Album Art for a known artist.

I wish I could talk about it, but I’m now under an NDA… :rofl: :rofl: :rofl:

2 Likes

Would like to add my support for keeping the dall-e-3 model around. It’s nice to have both for different purposes.

1 Like

I’m really disappointed by how this was handled.

There was no clear announcement for regular users that DALL·E 3 would be shut down months before the official deprecation date (May 2026). It simply disappeared — with no warning inside ChatGPT, and no way to access it again.

Many creators relied on this model in their workflows. GPT-4o / Image 1.5 is not a valid replacement. It introduces visible artifacts and structural issues that make it unusable for professional image work.

If a model is officially supported until May 2026, then users should still be able to use it during that time. Disabling it early — without a proper transition or visible notice — is frustrating and feels like a breach of trust.

1 Like

Works for me:

However, I use the API - I don’t use ChatGPT.

Been experimenting with GPT-Image-1.5 and it’s a great model. But I’ll sorely miss DALL-E-3.

It introduces visible artifacts and structural issues that make it unusable for professional image work.

I disagree.

1 Like

Hello and thank you for your response!

I just want to clarify that the model currently available in that section is DALL·E-2, not DALL·E-3.
I have worked with DALL·E-2 for several months and am very familiar with its specific characteristics — by default, it always generates two images at once, which is a hallmark of DALL·E-2. DALL·E-3, on the other hand, always generated just one image by default.

In addition, DALL·E-2 produces noticeably smaller files, while DALL·E-3 delivered a completely different quality of detail, texture, and lighting.

I’m specifically hoping for DALL·E-3 to be brought back, as it was much more useful for my work with detail and texture.

Once again, thank you for your help and your quick response!

Hello and thanks for sharing!

Your style is completely different — fine lines, an illustrative feel, and lots of characters.
I’m trying to achieve a more painterly effect, with strong lighting and volume, but unfortunately with this model I keep running into issues with texture and detail.

Best regards!

I also think it is a mistake to deprecate DALL·E 3! With the new update of the GPT-Image image generator, I can no longer create breathtaking, ultra-fine detail fine art paintings! Since the new update, creating images with high artistic value has been impossible for me. I know that my needs are very individual, but there should be a possibility to keep creating images with high artistic value! Here are such images that can no longer be created:

1 Like

With all due respect, are you saying that image was created by DALL-E-3?

A 1920x1280 image was never an output of DALL-E.
Resized to 1024px height, it becomes 1536x1024, which might be an image by gpt-image-1, as the overwhelming sepia tone also suggests.
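The aspect-ratio check above is simple arithmetic: 1920:1280 reduces to 3:2, the same ratio as the 1536x1024 landscape size gpt-image-1 outputs. A one-line verification, using only the dimensions quoted above:

```python
def scaled_width(width: int, height: int, target_height: int) -> int:
    """Width after scaling an image to a new height, preserving aspect ratio."""
    return round(width * target_height / height)

# 1920x1280 scaled to 1024px tall lands exactly on gpt-image-1's 1536x1024 size.
print(scaled_width(1920, 1280, 1024))  # 1536
```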

If sent through gpt-image-1.5 with the instruction “create: breathtaking, ultra-fine detail fine art painting with natural color balance.”, as you desire (along with the input composed for more outfill), and then followed with “Improve the quality: no plastic people”:

Still no DALL-E 3, but the fault of DALL-E was indeed that it produced action figures, mannequins, and dolls instead of believable faces.

1 Like

I can no longer create breathtaking, ultra-fine detail fine art paintings! Since the new update, creating images with high artistic value has been impossible for me.

Yesterday, I created this with gpt-image-1.5:

Looks like a Monet?

Maybe I’m wrong! This image was created in ChatGPT about 20-30 days ago. But for the past 4-5 days I haven’t been able to create such images in ChatGPT anymore; the quality is not the same. When the image is enlarged, strange details are visible, resembling brushstrokes and a painter’s canvas. I thought I was using DALL-E-3 in ChatGPT the whole time, but maybe I was using gpt-image-1. I don’t know. But since there was talk that the image generator was updated, maybe it’s gpt-image-1.5. I see strange artifacts on the image itself. Here’s an example:

This topic is in the “API” category, where developers who use pay-per-use AI models in their applications are given notification of a particular model version being deprecated, with an ultimate shut-off date.

ChatGPT: you consume what they give you on any given day, with little choice. Perhaps you even get “tested” to see if you’ll downvote an experimental image maker. The DALL-E GPT linked above even seems “upgraded”, but reporting “previous model” still after it makes an image, so who knows what’s coming out of that… (plus they didn’t think too far ahead when they made a whole category of GPTs in ChatGPT called “DALL-E”)

Hi, and thanks for sharing your image.

Just to clarify — I actually do enjoy painterly styles myself. I often work in aesthetics inspired by classical painting, with strong lighting, texture, and depth. But I think there’s a deeper technical point being overlooked in this discussion.

My concerns are not about whether an image “looks nice” or resembles a certain style. What matters is the structure of detail, the credibility of surface, and the behavior of light — all crucial when working with high-quality images.

For example, the impressionist-style image you posted relies more on a uniform noise layer than on any actual brushwork structure. The texture is applied globally and randomly, without direction. In real impressionism, each stroke has motion, depth, and temperature — this is more like a blur filter with added grain.

In this context, while DALL·E‑3 obviously wasn’t producing actual paintings in the traditional sense, it managed to approximate the structure and logic of painting — with layered surfaces, coherent lighting, and material awareness.
The images weren’t “painted,” but they behaved as if the model understood how paint interacts with form and light. GPT-Image 1.5 doesn’t do that — it simply applies a visual style that falls apart when examined closely.

GPT-Image 1.5 also tends to flatten local contrast, introduce repeating noise artifacts, and lose microtexture on edges and materials. That makes it very difficult to achieve images with real depth, surface realism, or believable light–material interaction.

When I say the model is unusable for me, I don’t mean it’s “bad” in general — it simply doesn’t meet the quality requirements needed for more advanced work with images where texture, layering, and material rendering truly matter.

I tested GPT-Image-1.5 with a prompt structured specifically for directional brushwork, layered impasto, believable light–material interaction, and non-uniform texture.

Here’s the result:

Does this get closer to what you mean?

I intentionally emphasized:
• stroke direction variation
• layer depth in snow and water
• distinct microtexture per material
• light diffusion vs reflection
• non-procedural noise
• surface credibility (wood grain, snow buildup, water sheen)

If this still misses your target, could you clarify which part collapses under closer inspection?
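For what it’s worth, the emphasis list above can be folded into a reusable prompt template so the same constraints are applied consistently across tests. This is a hypothetical helper of my own, not an official prompting technique; the phrasing is taken verbatim from the list:

```python
# Texture/lighting constraints quoted from the emphasis list above.
PAINTERLY_EMPHASES = [
    "stroke direction variation",
    "layer depth in snow and water",
    "distinct microtexture per material",
    "light diffusion vs reflection",
    "non-procedural noise",
    "surface credibility (wood grain, snow buildup, water sheen)",
]

def painterly_prompt(scene: str, emphases=PAINTERLY_EMPHASES) -> str:
    """Append explicit texture and lighting constraints to a scene description."""
    return scene + " Emphasize: " + "; ".join(emphases) + "."
```

Keeping the emphases in one place makes A/B comparisons between models at least repeatable, whatever one concludes about the results.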

1 Like

Well, the bottom line is that DALL-E-3 can do what GPT-Image 1.5 cannot do and GPT-Image 1.5 can do what DALL-E-3 cannot do. Therefore, it would be great to keep the DALL-E-3 API.

In this context, while DALL·E‑3 obviously wasn’t producing actual paintings in the traditional sense, it managed to approximate the structure and logic of painting — with layered surfaces, coherent lighting, and material awareness.
The images weren’t “painted,” but they behaved as if the model understood how paint interacts with form and light.

Can you share a DALL-E-3 image that demonstrates the above?

I think that you are not giving GPT-Image 1.5 a fair shot. You can do amazing things with it.

I don’t know if it’s good or bad…

This looks like something DALL-E could have done, but it came from 1.5?

2 Likes

Very nice. Kudos… no way DALL-E-3 could even come close to that.

Here is an example of my point:

Prompt: Image of a beautiful, colorful peacock traveling through a winding wormhole.

DALL-E-3

GPT-Image 1.5 (same prompt)

Big difference! But DALL-E-3 has always been available only via the API, which has limited its use and popularity. However, GPT-Image 1.5 is so versatile that I don’t think DALL-E-3 will be missed that much.

3 Likes