I applied for DALL·E 2 early access. Wish me luck.
When I said my profession was “other,” they asked if I was a robot!
I answered truthfully! Small smile. Bleep. Bloop!
Some of my Disco Diffusion stuff…
I am curious how long it will take to get access. I previously built a bottom-up biomimetic model of image processing in the cerebral cortex to try to understand how different layers contribute to image recognition, and it will be really interesting to see how a top-down generator works in practice.
I saw that it may be commercially available in the summer, so access is probably going to be trickled out over the next couple of months?
OpenAI DALL-E 2 Main Page:
Join the waitlist here: DALL·E
View Research:
DALL-E 2’s Instagram: https://www.instagram.com/openaidalle
On September 11, 1841, a tube for oil paints was patented.
Renoir said, “Without the tubes there would be no Impressionism.”
On April 6, 2022, DALL·E 2 appeared!
“Without OpenAI there would be no AI art.”
We cannot know what the future holds, but we are given the gift of infinite possibilities.
OpenAI, you have given us the ability to appreciate these possibilities. Thank you.
Thank you for giving us the power
“We will figure out how to get through the DALL•E waitlist quickly–very excited to see what people create” Sam Altman
https://twitter.com/sama/status/1513289081857314819
According to Alfred Hitchcock, a film should start with an earthquake, and then the tension should keep rising incessantly. FINGERS CROSSED, OpenAI. Thank you, you did some genuinely brilliant work.
This thread’s subject line just gave me the idea that DALL-E 2 could essentially be used to analyze CAPTCHA images and then manipulate them to be OCR-friendly in cases where OCR fails to solve the CAPTCHA. That would be a genuine use case, but it would also be seen as a way for people to circumvent anti-bot validation systems.
Aside from that random thought, it would be cool to see DALL-E 2 produce text descriptions from images in the same way it produces images from text descriptions. Would that feature be popular enough to implement in the future? I’m not knowledgeable enough about how contrastive models such as CLIP operate to know whether there’s a simple way to just reverse the input and output, but I figured I’d put the idea out there in hopes that someone who knows the area can explain the variables at play and how easy it would be to add in future iterations of DALL-E.
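For what it’s worth, my understanding is that CLIP can’t simply be run “in reverse” to generate text: it’s a contrastive model that scores how well an image and a caption match in a shared embedding space. What you *can* do with it is approximate image-to-text by ranking a pool of candidate captions against the image embedding and keeping the best match. Here’s a minimal sketch of that ranking step, using toy NumPy vectors as stand-ins for real CLIP embeddings (actual CLIP embeddings are 512-dimensional and come from the model’s image/text encoders):

```python
import numpy as np

def cosine_similarity(a, b):
    # Normalize each row so the dot product becomes cosine similarity.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Toy stand-ins for CLIP embeddings; in practice these would come from
# CLIP's image encoder and text encoder respectively.
image_embedding = np.array([[0.9, 0.1, 0.2]])
captions = [
    "a photo of a dog",
    "a painting of a city",
    "an astronaut riding a horse",
]
caption_embeddings = np.array([
    [0.8, 0.2, 0.1],  # embedding for captions[0]
    [0.1, 0.9, 0.3],  # embedding for captions[1]
    [0.2, 0.1, 0.9],  # embedding for captions[2]
])

# Score every candidate caption against the image and keep the best one.
scores = cosine_similarity(image_embedding, caption_embeddings)[0]
best_caption = captions[int(np.argmax(scores))]
print(best_caption)  # -> "a photo of a dog"
```

So rather than “reversing” the model, you’d be doing retrieval over candidate descriptions. True free-form captioning needs a generative text decoder on top, which is a different architecture from CLIP’s contrastive setup.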
You can send my 0.00001% for the idea generation to Crazy Arnie’s in Metaverse-52-a Quadrant 23-c!
In all seriousness, it’s an interesting thought. The reverse (image to text) also…