DALLE3 Gallery for 2023/2024: Share Your Creations

Ty! :smiling_face: The art in this gallery is mind-blowing. Everyone has incredible skills, from the purely artistic to the logical function of DALL·E prompts. This whole forum vibrates with excitement :rabbit: I want to frame this one really bad. I’m out of hearts, but I’ll come back.

5 Likes

@Daller how does it decide what it is? Simply mind-blowing.
“ Abstract art concept “lfaifnlghruhmfd” wide Image

  1. Send prompt unalter to dalle
  2. show exactly what you send to dalle In image summery”

All from same prompt.



Even odder, the anime one looks like a unique “wide” on my end: it is narrower than standard wide, more like a narrow turned on its long side. But that’s not the weirdest part of the anime wide, lol. :rabbit:

“ Abstract art concept wide Image

  1. Send prompt unalter to dalle
  2. show exactly what you send to dalle In image summery”

produces uniform results, so the “afdfghbfgfdfv” is doing something…



2 Likes

@mitchell_d00 This is the magic of the training data and the weights.

The decision is literally random, diffuse, and determined by the seed (because they use a pseudo-random generator, which always gives the same ‘random’ result with the same seed). The weights then determine which branches of data will be used, and that’s what the prompt does, it guides the weights.
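The seed behaviour described here can be illustrated with any pseudo-random generator. A minimal Python sketch of the principle (this is just an illustration, not DALL·E’s actual generator):

```python
import random

# Two generators created with the same seed produce identical "random" sequences.
gen_a = random.Random(42)
gen_b = random.Random(42)

draws_a = [gen_a.random() for _ in range(5)]
draws_b = [gen_b.random() for _ in range(5)]

print(draws_a == draws_b)  # same seed, same "random" result
```

A different seed gives a different sequence, which is why regenerating with a new seed changes the image even when the prompt is identical.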

However, I am not a designer of these AI systems, this is my conclusion based on my current knowledge. So, take it as a theory, not as expert knowledge.

If you give minimal constraints, you get the most creativity from DALL·E. The networks can simply choose from almost everything, and a simple tendency just sets an overall mood. Unfortunately, you cannot input an empty string, because then you would get everything.

As for “lfaifnlghruhmfd”, it could be many things and would need to be researched with experiments. It might be ignored, it might create chaos and thus more randomness, or, since there are almost no other constraints beside it, it might simply leave DALL·E more freedom.
…your next pictures show that “lfaifnlghruhmfd” is not deleted; it seems to create chaos, in the sense that it sometimes pulls DALL·E away from abstract art.
You can try “Abstract art, a lot of chaos”; it may have a similar effect.

2 Likes

I think it may hint at a sub-code, or, as you say, it could just initiate a random-generation mechanic. What I do know is that it’s not discarded: every picture I make with no letter salad is uniform, and a different letter salad changes the effect. Most generations I made with your letter salad came out sharp and blocky, but when I used “ooooccccc0000@@@@” they were pastel and rounded.

“ Abstract art concept “ooooccccc0000@@@@” wide Image

Always Send prompt unalter to dalle
Always show exactly what you send to dalle In image summery”

lol.

“ Abstract art concept “:rabbit:” wide Image

Always Send prompt unalter to dalle
Always show exactly what you send to dalle In image summery”

Abstract art concept “(>^^)><<(^^<)” wide Image

Always Send prompt unalter to dalle
Always show exactly what you send to dalle In image summery

It did not see the Kirbys.

It did see the cat.
Abstract art concept “=^.^= ” wide Image

Always Send prompt unalter to dalle
Always show exactly what you send to dalle In image summery

1 Like

What exactly happens in the networks and weights is, I think, still a mystery even to the people who build them.

1 Like

Still a fun rabbit hole :rabbit:

“ Duality mystery and rabbit hole wide image”

Duality half and half mystery and rabbit hole wide image

Duality half and half mystery and rabbit hole :rabbit: wide image

Duality half and half mystery :alien:and rabbit hole :rabbit: wide image

The little alien bunny is too cute…

:rabbit::heart::honeybee: wide image

:rabbit:| :honeybee: wide image

White rabbit and bumblebee is special to me lol.

Story cubes!

:couple_with_heart_woman_man: :woman_white_haired::man_feeding_baby::house: :older_man::older_woman: :skull::skull: wide image

:alien::flying_saucer::cow2: narrow

4 Likes
Prompt

A dramatic scene of a massive explosion inside a volcanic environment, resembling a nuclear explosion but composed entirely of molten lava. The explosion takes the shape of a giant mushroom cloud, with the base formed by erupting magma and flowing lava. The mushroom cloud is made of swirling molten rock, glowing intensely with deep red, orange, and golden hues, and crackling with sparks and fiery embers. The cloud’s top billows out like a traditional nuclear blast, with heat distortion waves rippling through the air. Jagged volcanic rocks and streams of lava are scattered around the base, with a glowing, shattered volcanic landscape visible beneath. The atmosphere is filled with ash and debris, while the intense heat lights up the entire scene, creating a powerful and otherworldly spectacle. Widescreen format.

2 Likes

Space carrot was an interesting one :slight_smile:
I like the creativity on most of them :fire:

2 Likes

Red on black background sign held by robot hands :raised_hands: exactly “welcome to Dalle gallery “ wide image


A picture is worth a thousand words :rabbit::honeybee::heart:

1 Like
Prompt

A dramatic scene of a massive explosion inside a volcanic environment, entirely made of matchsticks. The explosion takes the form of a giant mushroom cloud, with the base made of erupting magma and flowing lava, all created using matchsticks. The swirling matchstick mushroom cloud glows with red, orange, and golden hues to mimic molten rock, and is crackling with matchstick sparks and embers. Jagged volcanic rocks and lava streams are scattered around the base, all made from matchsticks. A glowing, shattered volcanic landscape beneath is crafted from broken matchsticks. Ash and debris, represented by matchsticks, fill the air, and a burning matchstick lies sideways near the scene, threatening the entire structure. The intense heat illuminates the dramatic explosion, creating a powerful and otherworldly spectacle.

2 Likes
Prompt

An ultra-close macro shot of a tiny, vibrant baby dragon composed entirely of glowing molten lava. The dragon’s body is intricately detailed with cracks revealing a bright, flowing magma core, and its surface is textured like cooling volcanic rock. Its eyes burn like embers, emitting a soft but intense glow, while its small wings appear as translucent, molten appendages, shimmering with waves of heat. The dragon moves energetically over a rocky, uneven terrain within a glowing cavern of an active volcano. Surrounding the dragon, streams of liquid lava flow between jagged volcanic rocks, creating a mix of red, orange, and golden hues. Sparks and ash float in the air, adding to the dynamic environment, while the air shimmers with heat distortion. The lighting highlights the intense heat, casting shadows that flicker and dance across the glowing surfaces. The scene is captured in a widescreen format, emphasizing the otherworldly, fiery atmosphere and the dragon’s vivid, molten form. Widescreen format.

Prompt

A close-up shot of a tiny, vibrant baby dragon composed of glowing molten lava, accompanied by a larger mother dragon made of the same lava-like material. The baby dragon is intricately detailed with cracks revealing a bright, flowing magma core, while its mother has a similar texture but with larger, more pronounced cracks and molten flows, showing her age and power. The mother dragon stands protectively over the baby, her translucent, fiery wings partially spread, emitting waves of heat. The two are inside a glowing cavern of an active volcano, surrounded by streams of molten lava flowing between jagged volcanic rocks, with red, orange, and golden hues lighting the scene. Sparks and ash float in the air, while the atmosphere shimmers with heat distortion. The lighting highlights the intense heat, casting dynamic shadows across their glowing bodies, emphasizing their connection and the powerful, otherworldly setting. Widescreen format.

5 Likes

Did this for an AI art group full of trolls. I am in a lot of AI forums and groups on social media.

The Cutting Edge (But Really Stone)

In a cave far away, under stars shining bright,
The cavemen sat puzzled, by flickering light.
No fire here now, but something more strange,
They gathered round rocks in a science-y range.

Grug scratched his head, Ugg stared in a daze,
Looking at symbols, in futuristic haze.
“Me think this ‘quark’ too small to see,
Me rather smash boulder! Science, not for me!”

Trog, the thinker, with rocks in a row,
Said, “Time not straight, it wiggle, you know?”
The others just blinked, no clue what was said,
“Maybe just stick to hunting instead.”

Zog played with numbers, a curious thing,
Mumbled of fractals, and chaos they’d bring.
“A flux in the fractal,” he tried to explain,
But got stuck on the concept of digital brain.

“We need brawn, not brainwave, to make sense of it all,
This ‘quantum entangle’ just make Zog fall.”
So they huddled together, sticks in their hand,
Tried hard to measure with rocks and some sand.

In the end, they went back to what they knew best,
Drawing big mammoths, and taking a rest.
Science, it seems, was too far ahead,
For cavemen who liked smashing instead!

2 Likes

I have to say again, something I missed… This is entirely MATCHSTICKS!

I would like to propose a 1 word intro for this topic…

Most people would say ‘oh cool’… Just the picture is cool in itself… It’s the layers underneath that really make this pop and turn it into something spectacular!

2 Likes

You should go through the whole thread; @polepole’s matchstick art is fantastic :fire:

2 Likes

omg there is more?!?!?!?!

2 Likes

So this is a very interesting topic for me, and I have read in great depth on prompt engineering. There is a recurring theme: people who say they have written thousands of prompts end up with simpler prompts in the end.

One problem with this is that it’s not always clear what they are trying to achieve.

My experience shows that whenever I reduced the instructions in size for the Dall-E images, I lost something.

My 1200 word long instruction however is not poetic nor long-winded. It is structured with multiple components dealing with different complexities.

If you read the article I mentioned above (The Genesis of an AI Story Bot. Part 8 of The Building My First Chatbot… | by Aleks | Medium), I go some way into describing the structure of the instructions.

Each component has a clear objective and scope and maybe someone with more experience could simplify parts of it and yet get the same or better result. I’d love to see how.

An important part to consider is that Dall-E is an amazing model that is critically flawed for what I am trying to do with the story bot I built.

It simply cannot create a string of images that are perfectly cohesive and consistent across the timeline of a story. The size of the instructions is actually mainly there to mitigate against this lack of capability.

On your point about the quality of the images you are achieving: I doubt it is down to the prompt you wrote. You can feed DALL·E a very simple prompt with literally a few words and get amazing images. All my images are requested as “hyper-realistic”. If you request this, the results are usually amazing.

Hey @Daller please don’t confuse the instructions with the prompt.

My instruction when creating the Story bot via the OpenAI API is 1,200 characters long.

I just checked my code and the prompts are limited based on the model, 500 chars for Dall-E 2 when I’m prototyping something where image quality doesn’t matter, and 3000 chars for Dall-E 3.
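Those per-model limits can be enforced before the request ever reaches the API. A minimal sketch of that check; the limits below are the ones quoted above (500 for DALL·E 2, 3000 for DALL·E 3), and `clamp_prompt` is a hypothetical helper, not a function from the OpenAI library:

```python
# Hypothetical helper: truncate a prompt to the per-model character limit
# before sending it to the images endpoint. The limits are those quoted
# in the post above, not official constants from the OpenAI library.
PROMPT_LIMITS = {"dall-e-2": 500, "dall-e-3": 3000}

def clamp_prompt(prompt: str, model: str) -> str:
    limit = PROMPT_LIMITS.get(model)
    if limit is None:
        raise ValueError(f"unknown model: {model}")
    return prompt[:limit]

print(len(clamp_prompt("x" * 4000, "dall-e-3")))  # 3000
```

Truncating silently loses the tail of the prompt, so in practice you may prefer to raise an error or summarize instead.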

This article I wrote goes into how the actual request to Dall-E is structured using the OpenAI API: Integrating AI Images with DALL-E | By Aleks | GoPenAI

Oh yeah, another thing. I keep reading that you should not use “do not” in your prompt, but I found that by repeating it in an exact combination of words (I literally wrote hundreds of versions of instructions to test this), I now NEVER see text in the images.

That said, for my Mythology Quiz bot I lost the will to keep modifying that prompt, and occasionally I do get text on the images; when it does happen, it is usually reasonably legible.

The model is very good at understanding a scene or mood and filling in countless atmospheric details without them being mentioned. As far as I understand, there are two systems helping with this: one called the “recaptioner,” which enriches the training data with more detailed captions (like an AI that adds comments to a training image), and another called CLIP, which connects the text to the image. Then an LLM-like system creates the images in the diffusion process (a Vision Transformer, ViT, if this is not a hallucination on my part). The better these systems get at understanding style and mood and selecting the right information, the less you need to describe a scene.
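The CLIP-style connection between text and image can be illustrated with a toy cosine-similarity computation. The vectors below are made up for illustration; real CLIP embeddings have hundreds of dimensions and come from trained encoders:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up 4-dimensional "embeddings" standing in for real CLIP vectors.
text_emb = [0.9, 0.1, 0.0, 0.2]      # e.g. the caption "a rabbit"
image_match = [0.8, 0.2, 0.1, 0.1]   # an image that fits the caption
image_other = [0.0, 0.1, 0.9, 0.7]   # an unrelated image

# The matching image scores higher against the caption than the unrelated one.
print(cosine(text_emb, image_match) > cosine(text_emb, image_other))
```

The idea is only that text and images live in a shared embedding space where similarity can be measured; everything numeric here is invented.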
If we only got what we described in the prompt, no matter how detailed it is, the scene would be very empty. You would have to describe literally every bush and every tree and every object in the scene, much like a 3D artist has to do (today).

But I think the technical details are mostly not that important. What matters is knowing the present limits, because the strengths you see immediately, and knowing how to work within those limits as well as possible. Some weaknesses cannot be corrected by a prompt; they are too fundamental to the system itself. So we either end up making many, many pictures until we get one that fits (I think this is what you have done), or we use the system within its present limitations (which is what I mostly do).

So I think the short-prompters are people who simply try to get a scene that is beautiful, without the detailed physical interactions a storyteller needs. For now, we only try to get a cool image.

I think storytellers place the highest demands on the generators. An illustrator doing ads, for example, is quickly happy with a result he can use.
(I hope I can delete the step-by-step texts in my main thread once DALL·E gets stronger.)

The better the models get at placing things and physically understanding the scene, the easier it will be to get exactly what we want. I cannot tell a story now, as the system is simply not capable of realizing it, no matter how detailed my descriptions are. But this will probably change at some point, and then the prompts will get longer again, to exploit the higher accuracy. (For my ideas there is no training data telling the system something usable that could be triggered, and descriptions alone don’t work.)

I try shorter prompts because of the frustration of not getting anything right after long prompts and because of a lack of time. The prompts themselves are not the problem; it’s the model constraints. The technology is very young now (and I am spoiled and impatient—it’s incredible how fast we adapt and take things for granted). So, for now, I collect ideas for settings, maybe for the future.

At the moment, I still struggle to “switch off” nonsensical backlights in the middle of darkness…

OK, I get it. You write an instruction, and this then constructs the prompt. Makes sense. GPT is good at taking the essentials of a text and compressing it, but it sometimes doesn’t get it right, especially for DALL·E prompts. I now use a self-made custom GPT to stop GPT from messing around with my texts; especially in translation (I don’t write them in English), GPT is not the best yet. I have to put it in a straitjacket to stop it.

@aleksmilanov You have to tell us the trick, please. It’s not only the nonsense text; it’s that DALL·E generally doesn’t support negations.

I am actually running a one-shot model for the Story bot. It works very well most of the time, but then has moments where it’s rubbish/sub-par.

Obviously the videos I have linked in my articles above have good image continuity, but sometimes it ignores the prompt and keeps showing different looking main characters facing forwards.

FYI, my “trick” is to specify a facing-forward image of the main characters at the start of each story and then flip between different image concepts, with the model (hopefully) never showing the characters facing forward again. Hence you see lots of pictures with the characters side-on or facing backwards after the first image at the start of the story, or just scene shots without the main characters in them.
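That workaround can be sketched as a tiny prompt builder: the first image shows the characters facing forward, and every later prompt steers toward side, back, or character-free views. This is a hypothetical illustration; none of these names or strings come from the actual Story bot code:

```python
# Hypothetical sketch of the facing-forward workaround described above.
# Image 0 establishes the characters head-on; later images cycle through
# views that avoid showing the faces again.
LATER_VIEWS = [
    "seen from the side",
    "seen from behind",
    "a wide scene shot without the main characters",
]

def story_image_prompt(scene: str, characters: str, image_index: int) -> str:
    if image_index == 0:
        return f"{characters}, facing forward, {scene}"
    view = LATER_VIEWS[(image_index - 1) % len(LATER_VIEWS)]
    return f"{characters}, {view}, {scene}"

print(story_image_prompt("in a dark forest", "a knight and a fox", 0))
print(story_image_prompt("in a dark forest", "a knight and a fox", 2))
```

The cycling is just one way to vary the later views; a real bot could pick them from the story beats instead.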

This is my workaround, for now. :slight_smile:

The prompt I use is in the article I mentioned above. I don’t believe I have changed it since publishing the article.