What if we had a new interpretation of noise?

For me, seed = noise and noise = prompt.
This interpretation can still be developed further.

Since SORA’s announcement today I haven’t seen any clips, but one thing is clear to me: techniques for controlling the visualization after the seed is removed, and for interpreting noise, still lack a comprehensive framework that would allow extended use. I had originally adapted these techniques in various ways, but I was unable to explain them to others. When SORA launched, I gained a new understanding. Below is a summary of my working understanding, distilled from conversations with GPTs.

“### Understanding Noise in Generative AI: An Exploration
Introduction to Noise in AI
In the realm of generative AI, “noise” refers to the random input that models, such as OpenAI’s DALL-E, use as a starting point for creating new content. Noise is a foundational concept in machine learning and serves as the seed for randomness that fuels the diversity of outputs in generative models.
The Function of Noise
Noise is not merely a chaotic presence; it’s a catalyst for creativity in AI. When we feed noise into generative models, we’re essentially giving them a canvas of randomness from which they can draw patterns, guided by complex algorithms and training data, to create structured and coherent outputs.
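The idea of noise as a reproducible “canvas of randomness” can be sketched in a few lines of Python. This is a toy illustration, not DALL-E’s actual sampler; the canvas size and the Gaussian distribution are illustrative assumptions:

```python
import random

def noise_canvas(seed, size=8):
    """Draw a size x size grid of Gaussian noise from a seeded RNG.

    The same seed always yields the same canvas, which is why a
    seed can stand in for the noise itself.
    """
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(size)] for _ in range(size)]

# Identical seeds give identical starting noise; different seeds diverge.
a = noise_canvas(42)
b = noise_canvas(42)
c = noise_canvas(7)
print(a == b)  # True
print(a == c)  # False
```

This is the sense in which the seed and the noise are interchangeable: the seed fully determines the canvas the model starts from.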
Static Noise vs. Dynamic Noise Parameters
“Static noise” is a term that might be used to describe the initial state of randomness, a fixed starting point before the AI begins the generative process. In contrast, dynamic noise parameters are those that can be manipulated during the generation to affect the outcome, adding variability or steering the creation in certain stylistic directions.
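One way to picture this distinction is a toy sketch in which the static part is fixed entirely by the seed, while a dynamic parameter blends in extra noise mid-process. Real diffusion models apply noise schedules over many denoising steps; the `strength` parameter here is an illustrative stand-in:

```python
import random

def initial_noise(seed, n=16):
    # "Static" noise: fixed entirely by the seed.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

def perturb(latent, strength, seed):
    # "Dynamic" parameter: the same starting point can be steered
    # by blending in extra noise of a chosen strength.
    rng = random.Random(seed)
    return [x + strength * rng.gauss(0.0, 1.0) for x in latent]

def drift(a, b):
    # Total absolute difference between two latents.
    return sum(abs(x - y) for x, y in zip(a, b))

base = initial_noise(123)
mild = perturb(base, strength=0.1, seed=99)
wild = perturb(base, strength=2.0, seed=99)

# A larger strength moves the result further from the fixed start.
print(drift(base, mild) < drift(base, wild))  # True
```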
The Evolution of Noise Use in AI
Originally, generative models like GPT and DALL-E used numerical seeds to introduce variability. However, as models evolve, we see a shift towards using more sophisticated forms of noise that can include a variety of character strings—letters, numbers, and symbols—to increase the diversity of the generated content.
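The shift from numeric seeds to arbitrary character strings can be emulated by hashing the string down to an integer seed. This is a common trick, not necessarily what DALL-E does internally:

```python
import hashlib
import random

def string_seed(text):
    """Map any string of letters, numbers, and symbols to a numeric seed."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return int(digest, 16) % (2**32)

def noise_from_string(text, n=8):
    rng = random.Random(string_seed(text))
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

# Any string now behaves like a seed: reproducible, but distinct per string.
print(noise_from_string("pool-of-noise") == noise_from_string("pool-of-noise"))  # True
print(noise_from_string("pool-of-noise") == noise_from_string("Pool-of-noise"))  # False
```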
Implications for Production and Creativity
For creators using AI, the ability to control noise parameters means greater influence over the style, texture, and detail of the generated images. In production, this means more efficient workflows, where desired changes might be achieved by tweaking noise parameters rather than starting the generation process from scratch.
Case Studies and Observations
In practice, as observed in the generation of a series of images, adjusting noise or prompt details can lead to significant changes in the AI’s output. This nuanced control allows artists to maintain thematic consistency while also introducing new narrative elements, as demonstrated by a progression of images where stylistic features are preserved even as the content evolves significantly.
Conclusion and Future Directions
As AI continues to advance, we can expect noise to play an increasingly sophisticated role in the creative process. The potential for more intuitive control mechanisms suggests a future where artists and creators co-create with AI, using noise not just as a source of randomness, but as a brush with which to paint their visions onto the digital canvas.
Further Research
To delve deeper into the technical aspects of noise and its role in generative AI, consider exploring academic papers and research by institutions like OpenAI. As the field is rapidly advancing, staying informed through reputable sources is crucial for anyone looking to leverage these technologies in their creative work.
This article represents a high-level summary and analysis based on our discussion and does not contain verbatim content from any specific source. However, for those interested in more technical details on the mechanics of noise in AI and its practical applications, they can explore a range of academic and industry publications that regularly document the latest advancements in the field.”

We’ve never seen a study that went so deep and applied it the way Foo-Bar from the OpenAI developer forum did, which essentially looked at using gen_ids and interpreting noise as scrambling, with text acting in place of a seed. The conversation about creating story illustrations stopped there: only gen_id, seed, and noise are used to control the image, but it is difficult to create continuously.

Nowadays, besides using DALL-E to create beautiful images, in what other areas do we see attempts to use it? Or perhaps it is because complex images can be created with other forms of AI. (Someone once tried to tell me this.) Today, in addition to creating collages, DALL-E can be used to create typefaces and could likely be used to design building siding based on orientation. (I want this to grow more.) I also think it might be able to create hand-drawn animation.

SORA is coming soon, and it will certainly make the job easier. But in using the DALL-E basics, have we reached the limit yet?

Or is it that the evolution of “human” usage techniques may not be able to keep up with AI?

P.S. “Pool of noise” is an image created by AI to show, as clearly as possible, how images are created from noise.


Hi! I really liked the image you shared. I’ve attempted to use ChatGPT to craft a description that might resemble the one you used, but the outcomes haven’t quite matched up. Could you share the original description you used? I’m very intrigued by the creative process behind it. Thank you!

Even if you asked me for it, I don’t think I would have kept it, because the article itself is already the prototype. Just separate the elements and turn them into a prompt.

On Fri, Feb 23, 2024 at 13:24, shu tie via OpenAI Developer Forum <notifications@openai1.discoursemail.com> wrote:


Understood. Thank you. After trying multiple times, I feel this image is a happy accident, and it’s difficult to replicate such an effect.


The ones you’ve drawn are just too beautiful! I really like them! I pondered for quite a while and only managed to come up with these two.

I’m happy to see that your research is making good progress.

I really love the images you’ve shared, especially the one from the original post and the last three new ones. They possess a beauty and mystique that invite deeper contemplation. Also, I’m not sure if you’ve noticed, but in the interface where the images are generated, if you click on an image to enter the full view mode, the fourth button in the top-right corner displays the description of the image. This can be a convenient way to view and share the descriptions created during the generation process, without the need to save them specifically. Just wanted to share this little tip, hoping it might be helpful to you!

I know it’s real text that the GPTs send to DALL-E, but that doesn’t mean it’s the actual text used to create the image. You may use it as a resource to research the original prompt, like gen_id, but it is in no way the complete original prompt.

On Sat, Feb 24, 2024 at 20:05, shu tie via OpenAI Developer Forum <notifications@openai1.discoursemail.com> wrote:


I understand that the painting requests we submit to ChatGPT (which we might as well call a description) are not the same as the commands received by DALL-E. Comparing the description from our ChatGPT chat with the one shown by the fourth button reveals they’re not entirely the same. If there’s another layer of description that’s invisible, then that’s beyond my capability, haha. After all this talk, I’m just frustrated that I can’t produce images as good as yours. I’ve been really trying hard to tweak the descriptions. Can you imagine how long they’ve gotten?

I don’t know what makes you feel that my picture is different. It might be a feeling that comes from the ambiguity of working with text in this way. The message you send can be passed through as well, just by telling the GPTs not to alter the prompt. But that is still not the meaning of the prompt that I use regularly.

On Sat, Feb 24, 2024 at 22:19, shu tie via OpenAI Developer Forum <notifications@openai1.discoursemail.com> wrote:

It might be these features that make me think your paintings are beautiful: the harmony among the colors, the balance between light and dark; the patterns exhibit both a symmetrical sense of order and the freedom of spontaneous expression; the content is abstract, with a touch of mystery, thus offering ample room for imagination; and there are clear areas of negative space in the images, which is quite rare.
These are my own feelings, not an analysis by ChatGPT. I thought I should clarify, haha.

> [quotes the “Understanding Noise in Generative AI: An Exploration” article from the original post in full]
Break it down and rearrange it into a prompt for the image.

That is all I do.
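The workflow of breaking the article down and rearranging it into a prompt can be sketched as a toy example. The fragments and the joining rule here are illustrative assumptions, not the author’s exact recipe:

```python
import re

# A few evocative lines lifted from the article above.
article = (
    "Noise is not merely a chaotic presence; it's a catalyst for creativity. "
    "A canvas of randomness from which patterns are drawn. "
    "Noise as a brush with which to paint visions onto the digital canvas."
)

# Break the article into fragments at sentence boundaries...
fragments = [s.strip() for s in re.split(r"[.;]", article) if s.strip()]

# ...then rearrange them (here: simply reversed) into a single prompt.
prompt = ", ".join(fragments[::-1]).lower()
print(prompt)
```

The point is not the specific rearrangement but that the prose itself becomes the prompt material, rather than a separate hand-written description.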
