Prompt to make the same image via the API that I made in ChatGPT / the DALL-E API doesn't understand the meaning of 1-bit

This is the image I want to use in my game. As a game designer and somebody who grew up playing 1-bit games, this is exactly the type of image I would expect.

The prompt was simply “generate green 1-bit opera house full moon low detail image”. Boom!

When I use the API I get something like this instead. It is terrible. It is almost as though DALL-E has stopped understanding what 1-bit is.

I feel like I should be able to take the seed ID from the image I created in ChatGPT and use that for reference in my API call.

Anybody here agree? Disagree? @PaulBellow any thoughts?


Seeds have been discontinued sadly.


Take a look in ChatGPT Plus at the revised prompt… send that to the API. :wink:

If you click on the image in ChatGPT then hit the “i”, you can see the prompt it actually used. Likely the rewrites aren’t matching. If you send the ChatGPT one, the result won’t be exactly the same, but quality should be better, as it has more details… that you know work well…


Makes no sense at all.

Until it does… surely there’s somebody from OpenAI that has said “don’t fret, we’ve done it because we have a better solution coming out soon…” eh eh :thinking:

It actually does make sense. Because DALL-E 3 is distributed across hundreds of databases, they would have to store seeds in the cloud to be able to use them across each database, which could cost a lot in storage, as thousands of images are generated daily.

Thanks Paul. The first regeneration made it so much worse BUT the second time I tried it, we got the booby prize:

  • it was slightly better :rofl:
  • but it wasn't 1-bit :man_shrugging:
  • so still totally unusable :sob:

So close yet so far away.


What’s the prompt?

I might be able to tinker a bit… DALLE likes to add styles if you don’t specify…

Lemme know, and I’ll see what I can do…

I understand.

It makes no sense creatively.

I’m not here to argue WHY OpenAI dumbed it down. I’m here to fight for creative tools that offer developers continuity and congruent results.

OpenAI made OpenAI, so I’m fairly certain they can come up with a creative solution that isn’t just kiboshed because the guy with the credit card says “Azure are a rip off”.

That might make sense for Bob’s startup, but I’m not buying it for a billion-dollar company that wants to make cool shit like text-to-video, which is going to cost a pretty penny for sure.

Either way, a creative solution for image continuity would be a game changer for me and many others here.

A minimalist 1-bit illustration featuring a green opera house under a full moon, with a low level of detail. The image should emphasize simplicity and elegance, using only two shades of green to create contrast between the opera house and the night sky, capturing the iconic silhouette of the building and the luminous moon above it.

How’s this?

Create a minimalist 1-bit style illustration using only two contrasting shades of green to depict an iconic opera house under a full moon in a night sky. The image should capture the essence of simplicity and elegance, focusing on the silhouette of the opera house and the luminous moon. Emphasize the contrast between the building and the sky, ensuring the artwork is recognizable and impactful with a very low level of detail. The goal is to convey the iconic scene with stark simplicity, using the 1-bit digital art technique that employs only two colors to create a compelling visual.

To change the scene, try to keep the other details the same but swap out the “subject”… play with it some! Might try terms like “half-tone” and “woodcut” too…
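The swap-the-subject advice can be sketched as a tiny template, so the style boilerplate stays byte-for-byte identical between calls and only the subject changes. The template text here loosely paraphrases the prompt above; the function name and subjects are illustrative, not from the thread:

```python
# Sketch: keep the style wording fixed and swap only the subject.
STYLE_TEMPLATE = (
    "Create a minimalist 1-bit style illustration using only two contrasting "
    "shades of green to depict {subject} in a night sky. Emphasize stark "
    "silhouettes, a very low level of detail, and exactly two flat colors."
)

def build_prompt(subject: str) -> str:
    """Fill the fixed style template with a new subject."""
    return STYLE_TEMPLATE.format(subject=subject)

print(build_prompt("an iconic opera house under a full moon"))
print(build_prompt("a lighthouse on a rocky cliff under a full moon"))
```

Because everything except the subject is held constant, any drift in the output is down to the model, not the prompt.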

Did you check out the DALLE AMA posted here in the forum? They’re working on it! I suspect they’ve been heads-down quiet since last year working on next iteration… maybe a DALLE3exp like we had last summer for DALLE2exp?


Thanks so much for having a play with the prompts! It’s definitely getting closer and helping me understand how to get better results :slight_smile:

I didn't see the DALLE AMA. Is that something I should search for, or is there a specific link to read? :thinking:


Rewritten, and I get offered a version of DALL-E under test that is strikingly different, signaled by the two images instead of one.

At a certain point, with more prompt like “instead of black, which is unavailable, the darkest color will be the dark green background. There will be no fading to the light color, only a stark transitional line, as only the two colors with distinct and unique values can be employed…”, you’re just going to have to run lots of iterations to see if anything rare pops out. Or find images that look good after converting them to 2 colors in your own photo tool.
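That last suggestion, converting to 2 colors yourself, is easy to do without a photo tool. A minimal stdlib sketch: threshold each pixel’s luminance and map it to one of exactly two greens. The palette values and pixel data here are illustrative assumptions, not from the thread:

```python
# Minimal sketch of "going to 2 color" yourself: threshold luminance and
# map every pixel to one of exactly two greens. Pixels are (R, G, B) tuples;
# the two palette values below are made up for illustration.
DARK_GREEN = (16, 64, 16)
LIGHT_GREEN = (144, 238, 144)

def to_two_colors(pixels, threshold=128):
    """Map each RGB pixel to dark or light green by perceived luminance."""
    out = []
    for r, g, b in pixels:
        luma = 0.299 * r + 0.587 * g + 0.114 * b  # standard Rec. 601 weights
        out.append(LIGHT_GREEN if luma >= threshold else DARK_GREEN)
    return out

result = to_two_colors([(250, 250, 250), (10, 10, 10), (130, 130, 130)])
# Exactly two distinct colors remain -- a true 1-bit palette.
assert set(result) == {LIGHT_GREEN, DARK_GREEN}
```

With a library like Pillow you could get the same effect on a real file via `Image.convert("1")` plus a green palette remap, but the thresholding logic is the whole trick.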


Here ya go!

Thanks, @_j … insightful as usual…


Thanks for the link @PaulBellow … very informative!

Nice to meet you @_j and thanks for taking the time to have a play around.

I imagine this will just get better and better with time. It’s always annoying when things have to get worse before they get better, but such is life!

@PaulBellow so does that mean our API prompts get rewritten as well? It’d be swell if I could turn that off as a dev.


Yea, they do. You can grab the revised_prompt to see what it’s changed to. Usually if you provide enough details, it won’t fill in its own…usually! :wink:
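For API users, the `revised_prompt` field comes back alongside each generated image in the Images API response. A hedged sketch using only the stdlib: build the request payload and pull the field out of the response JSON. The live HTTP call is left commented since it needs an API key; the `sample` response below is a trimmed, made-up illustration of the documented shape:

```python
import json
import os
import urllib.request

def build_image_request(prompt: str, size: str = "1024x1024") -> dict:
    """Payload for POST https://api.openai.com/v1/images/generations."""
    return {"model": "dall-e-3", "prompt": prompt, "n": 1, "size": size}

def extract_revised_prompt(response: dict) -> str:
    """DALL-E 3 returns the rewritten prompt next to each generated image."""
    return response["data"][0]["revised_prompt"]

# Live call (requires OPENAI_API_KEY), left as a sketch:
# req = urllib.request.Request(
#     "https://api.openai.com/v1/images/generations",
#     data=json.dumps(build_image_request("green 1-bit opera house")).encode(),
#     headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
#              "Content-Type": "application/json"})
# print(extract_revised_prompt(json.load(urllib.request.urlopen(req))))

# Trimmed, illustrative shape of a response:
sample = {"data": [{"url": "https://...", "revised_prompt": "A minimalist..."}]}
print(extract_revised_prompt(sample))
```

Comparing `revised_prompt` against what you sent is the quickest way to see how much the rewriter filled in on its own.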

I had a similar issue where a Custom GPT made in the regular user console outputs differently than the API does. So I tried using a browser automation to pull from the custom GPT, but it fails even with Puppeteer or an Automator AppleScript targeting the custom GPT. I can launch it, enter a prompt, even make the send button active, but I couldn’t cross the finish line. I think OpenAI should allow custom GPTs to be accessed by API. If I’m missing how to, please link. I created a 1-bit Custom GPT “Retro Pixelator” ChatGPT - Retro Pixelator. There are hits and misses, but I instructed the GPT to ask whether the user likes the style; if so, it copies the prompt and Gen_ID (and seed, though no longer used) to the next image.


That picture is actually a two-bit image though. (In practice, there’s a lot more actual colors, but you could get that effect with black, white, dark green, light green.)
So I’m not sure that the first option knew what 1-bit was, either!
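The 1-bit vs 2-bit distinction is just a count of distinct colors: 1 bit per pixel can index 2 colors, 2 bits can index 4. A small sketch of that arithmetic, using made-up pixel data (including the black/white/dark-green/light-green palette described above):

```python
import math

def effective_bit_depth(pixels) -> int:
    """Smallest bits-per-pixel that can index every distinct color used."""
    n = len(set(pixels))
    return max(1, math.ceil(math.log2(n))) if n > 1 else 1

# Hypothetical 4-color image like the one described above:
two_bit = [(0, 0, 0), (255, 255, 255), (16, 64, 16), (144, 238, 144)]
assert effective_bit_depth(two_bit) == 2   # 4 colors -> 2 bits

one_bit = [(16, 64, 16), (144, 238, 144), (16, 64, 16)]
assert effective_bit_depth(one_bit) == 1   # 2 colors -> 1 bit
```

Running a generated image through a check like this (after loading its pixels) is an objective way to verify whether the model actually delivered 1-bit.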

If OpenAI uses a variety of hosting platforms, then a consistent seed output cannot be guaranteed, because some iterated math run on, say, an A100 GPU won’t necessarily be bit-accurate to the same math run on an H100. It gets worse when you switch host CPUs (AMD vs Intel vs Graviton vs …) and maybe even inference engines (Tensor cores, Groq LPUs, etc.).

Also, if they iterate on the model to improve performance (even if they start with the same parameter set) it will generate different output. And because of the significant amount of feedback in diffusion models, even a single bit off in a single pixel early on, can cascade across the entire image in the end.
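The cascade effect is easy to demonstrate with a toy stand-in. This is not DALL-E’s actual math; the chaotic logistic map just plays the role of a feedback-heavy iteration like iterative denoising. Two starting values differing by roughly one floating-point bit end up bearing no resemblance:

```python
# Toy demonstration (NOT DALL-E's real computation): in a strongly
# feedback-driven iteration, a difference on the order of the last
# floating-point bit grows until the results are completely different.
def iterate_logistic(x: float, steps: int) -> float:
    """Chaotic logistic map x -> 4x(1-x), standing in for iterative denoising."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a = iterate_logistic(0.123456789, 60)
b = iterate_logistic(0.123456789 + 1e-15, 60)  # one-bit-scale perturbation
print(abs(a - b))  # after 60 steps the trajectories have fully diverged
```

The perturbation roughly doubles each step, so by step 60 it has long since saturated; the same mechanism is why one non-bit-accurate multiply early in a diffusion run can change the whole image.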

This is awesome. Thanks for sharing with me.

Good observation. I think in part it might be because OpenAI hijacks simple instructions and wraps them up with what they think is best.

While I can see why this could be useful for people who aren’t very good at writing prompts, I think this decision is terrible for experts who know what they want and know how to describe it.

I understand. BUT this isn't important to the use case. Image consistency just needs to be solved so developers can achieve continuity via the API.

Similarly, I don’t really care how ones and zeros shoot through fibre optic cables (or even copper cables), then get put back together as glorious pixels on my computer, only to land gently and pleasingly onto my retinas.

The problem around consistent images needs to be solved.

Midjourney has NAILED it, so despite what you’re saying, somebody else has already solved it. If only Midjourney had an API :thinking:

Yeah, a Midjourney API would be very nice!

I find Midjourney “character reference” to be … mid … at best, though. Their seed works, but only within the same model version. And, more importantly, this means that they can never take some model version (say, Niji 4) and host it on some other/newer hardware than what it was originally hosted on. Good for them for as long as they actually keep that up, but, long term, don’t expect that to be repeatable forever. It feels to me as if OpenAI is taking the more cautious approach of not promising something they know they will have to eventually break.