The title says it all: it truly is a step backwards. I am seeing disproportionate figures with short arms, large heads, oversized hands, and more, plus a lack of atmospheric depth. Some areas appear hyper-realistic while others break apart completely, all within the same image. It's really unfortunate and a massive step backward for projects I've been working on. I was hoping I could paste previously generated images as style references so it could replicate their direction, but that has repeatedly failed. It's pretty disappointing.
Can we have the option to use the old version of DALL·E? This is completely derailing a project I have been working on for over a year, and I am sure I am not alone. The new image generation model is simply not as good as the last one: it's flat and feels like posed stock imagery. Please give people the option to use the previous version.
You can still use it separately as an OpenAI custom GPT, but it won't work in regular ChatGPT no matter which model you pick from the model selector (GPT-4o, o3, o4-mini, etc.).
The DALL·E GPT is a special case; you can't access DALL·E image generation in other custom GPTs, because the dall-e tool has been removed and replaced by a new tool called image_gen.
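If you're comfortable going outside ChatGPT, the dall-e-3 model was still available through the Images API last time I checked. A minimal sketch with the Python SDK (the prompt and size here are just placeholders, not anything specific to your project):

```python
# Minimal sketch: calling the older DALL·E 3 model directly via the Images API.
# Assumes the openai Python SDK (v1.x) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",   # the pre-image_gen model
    prompt="A watercolor landscape with soft atmospheric depth",  # placeholder prompt
    size="1024x1024",
    n=1,                # dall-e-3 generates one image per request
)

# By default the response contains a hosted URL for the generated image.
print(result.data[0].url)
```

It's not the same as having it inside a custom GPT with your documents, but it may help keep a consistent style while things settle.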
Thanks. It is good to have this, but it would still be better to have both models available within custom GPTs. Without access to the content/documentation inside a custom GPT, ChatGPT cannot create images based on established material, and since the new visual style is, for some of us, a significant step back, this is still a real blow. Giving users access to both models would be ideal; I would even pay an increased monthly fee for it. The new version, at least right now, is worse than DALL·E.
Hopefully I can find a way to make the new tool work well, or else get DALL·E to work without access to my custom GPT by experimenting with uploaded images or something else that establishes the visual direction.
Yes, it would be better if both models could be used in custom GPTs, but right now custom GPTs only have the “4o Image Generation” option.
Also, when you give a custom GPT a task that involves an image, it stops after making the image and doesn't move on to the next part unless your instructions are really explicit (example wording below). That's because the image_gen tool says:
After each image generation, do not mention anything related to download. Do not summarize the image. Do not ask followup questions. Do not say ANYTHING after you generate an image.
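What has worked for me (just my own wording, nothing official) is adding an explicit line to the custom GPT's instructions along the lines of: "After generating an image, continue with the remaining steps of the task in the same reply; do not stop or wait for further input."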
Perhaps one day they will allow it, but thank you so much for linking the DALL·E GPT here. I uploaded two key documents from my custom GPT, prompted it with an image request, and it nailed it. I am very, VERY relieved. I feel like my project (which has been underway for more than a year) is now saved because of your response. Thank you.