Not exactly. But interesting anyway!
When those "blocky / pattern" images are released on the internet, new GPTs can use them as training data.
The model then gets trained on blocky images with repetitive patterns, resulting in models that generate images like that by default.
The same happened with Image Generation 1.0, which in the beginning had a yellowish tint on almost all images, since it was largely trained on Instagram images with those yellow, movie-like filters.
So it can result in a "Droste effect" where the main image is a copy of a previous image, which is itself a copy of an earlier image, etc…
Anyhow, I created a set of my own scripted Photoshop filters, tried Photoshop's default moiré filter (Moiré pattern - Wikipedia), adjusted the prompts to "remove the repetitive pattern" (which made it even worse, since you are altering an already bad image), etc…
Nothing worked.
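(These are not my Photoshop scripts, but for anyone who wants to measure the problem instead of eyeballing it, here is a rough sketch of spotting a repetitive grid in the frequency domain. It assumes numpy and Pillow are installed; the filename is just a placeholder.)

```python
# Rough sketch: a periodic "checkerboard" pattern shows up as a few strong,
# isolated peaks in the image's 2D frequency spectrum.
# Assumptions: numpy + Pillow installed; "generated.png" is a placeholder filename.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("generated.png").convert("L"), dtype=float)

# 2D FFT; shift so the zero-frequency component sits in the center.
spectrum = np.fft.fftshift(np.fft.fft2(img))
magnitude = np.abs(spectrum)

# Ignore the low-frequency center region, where normal image content lives.
h, w = magnitude.shape
cy, cx = h // 2, w // 2
yy, xx = np.ogrid[:h, :w]
mask = (yy - cy) ** 2 + (xx - cx) ** 2 > (min(h, w) // 8) ** 2

# A repetitive grid produces spikes far above the typical high-frequency
# energy; report the peak-to-median ratio as a crude score.
high = magnitude[mask]
score = high.max() / np.median(high)
print(f"peak-to-median ratio: {score:.1f} (higher = more grid-like)")
```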
Not as elegant as the Droste effect. It is more like an analog copy of an analog copy of an analog copy…
And then, after a few generations, you just have mush.
But worse, and more expensive, if AI weights are damaged.
The human race pollutes the biosphere, so why not continue with the data sphere too. That's a bit how it looks to me.
yap…
But why is this not a widely mentioned bug?
I can see issues being raised on Reddit, but almost everybody here is like "ImgGen 2.0 is amazing!".
For me, it isn't.
The results are technically (visually) extremely bad, and the "self-thinking LLM" (new in ImgGen 2) spits out images on its own, not following the instruction set created for the main GPT agent.
It's totally ignoring my agents / GPTs (the first time since I started using it in 2023).
Yeah, another new blocky checkerboard pattern all over the place…
Can't tell you; I have no access to such data behind the scenes.
And I have no access to the API. It could be that API users are being treated better than ChatGPT users; we have seen this two-class treatment before.
I could see the pattern in the very first image.
In any noise pattern you can see it easily. Clouds, surfaces, everywhere!
For me it is like listening to distorted, off-pitch music.
Because of my workflow, I use the web interface / GUI.
Or well… I try to, since the @ mentions have not been working for ages now (in the Android app).
But every other image in my reply above your reply shows the patterns.
All fresh images, not replicated, not edits, straight out of the box.
-
Same news from the neighbors, so we are not alone:
https://www.reddit.com/r/ChatGPT/comments/1ssrvbs/gpt_image_2_is_amazing_for_a_lot_of_things_but/
-
And they also mentioned another issue where images are "stacked" on top of each other when created in the same chat:
https://www.reddit.com/r/ChatGPT/comments/1ssvd9v/the_artifacting_present_in_the_new_gpt_image/
Lots of complaints here:
But OpenAI says it's "working on it". So I will stop repeating myself here (before I become a repeating pattern myself).
Best to wait a few days…
The next two images illustrate how visuals can now be grounded in factual information for greater accuracy. Incorporating factual text into images has historically been difficult, and these examples represent a significant step forward. The source of truth for the cellular biology facts shown here is Reactome.
Prompt
Create a biology diagram titled "Cellular Respiration at a Glance" for graduate level students. Use reactome for facts Use the cookbook for ideas on how to make a better image. https://reactome.org/PathwayBrowser/ https://developers.openai.com/cookbook/examples/multimodal/image-gen-models-prompting-guide
ChatGPT settings
Note: This image was generated solely to explore what could be produced and has not been verified for accuracy.
Prompt
Create a biology diagram titled "Cellular Respiration at a Glance" for graduate level students. Use reactome for facts Use the cookbook for ideas on how to make a better image. https://reactome.org/PathwayBrowser/ https://developers.openai.com/cookbook/examples/multimodal/image-gen-models-prompting-guide
Note: This image was generated solely to explore what could be produced and has not been verified for accuracy.
Interesting. I never thought about that. I wonder if OAI properly handles all the idioms, etc…
From my metaphor exploration I know that some versions are gated by risk before "likely meaning"… but at the time, there wasn't a whole lot more than that, it seemed.
Just now I used your prompt with ImageGen 2 and this is what I got:
Is this more what you were looking for? Note that this was generated via the API and not ChatGPT.
For gpt-image-2 image generation, do you find any differences between using ChatGPT and the API? I ask because I do not use the API, but if there is a significant advantage I would consider it.
Help me understand… is the user upset about how the much higher-definition image looks when it's nested in a lower-resolution monitor/screen environment?
I took a full-sized snip from their image and see no degradation in quality.
I do not use ChatGPT. I have a five-year-old Windows desktop app with eight major features, one of which is imaging. For the last two years, I have been implementing OAI tech with some of the features where it made sense.
I guess degradation is in the eye of the beholder.
It has been done in the past!
There was a "prompt improvement" done by GPT, which was dysfunctional for me. It fooled me hard at the beginning and screwed up my testing.
I always add: (Don't change the prompt, send it as it is.) And in my case it must be translated as accurately as possible.
I have not done testing now; it is not worth the time yet. So I do not know what GPT is currently doing with the prompts.
And I can see the pattern in the dragon picture, especially in the clouds.
And yes, I am also interested in comparing API with ChatGPT pictures (I have no API access).
(But maybe we should wait a few days; hopefully nothing stays as it is now.)
Immediately after you receive an image back, you send to the model:
"What was the exact prompt you sent to the model for that image?"
This seems to have better results when the session is fresh and the image was just generated, as opposed to coming back to an image later and asking that question.
Here are two images, one from the API and one from ChatGPT. The prompt I used: "Generate an image of sunset in Mediterranean."
API (gpt-image-2)
ChatGPT
So in this case, no real difference. The thing is, sometimes it's unclear which image generator is under the hood in ChatGPT (that's what I've heard before; I don't know about now, so I'm not going to speculate about it).
Also, in the API, while prompting images you have access to settings as well as advanced settings. In this case I didn't use anything specific.
Also, in ChatGPT, if memory is on, it usually affects outputs in the same session, meaning even with a new prompt the new image can end up resembling the older images from the same session.
Don't know if this makes anything any clearer, but these are my experiences with the API and ChatGPT.
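For reference, a minimal sketch of what that API-side request with explicit settings might look like, assuming the official openai Python SDK; the "gpt-image-2" model name is just the one mentioned in this thread, and the exact parameter values available may differ.

```python
# Minimal sketch of an API image request with explicit settings.
# Assumes the official `openai` Python SDK and OPENAI_API_KEY in the environment;
# the model name is taken from this thread and the parameter values may differ.
import base64
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="gpt-image-2",              # model name as mentioned in the thread
    prompt="Generate an image of sunset in Mediterranean.",
    size="1024x1024",                 # explicit size instead of the default
    quality="high",                   # one of the "advanced settings" knobs
)

# The API returns base64-encoded image data; decode and save it locally.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("sunset.png", "wb") as f:
    f.write(image_bytes)
```

ChatGPT picks these knobs for you behind the scenes, which may be one reason outputs from the two can drift apart.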