I happen to be at a computer brought back to life with a new power supply, so I can run the same script that generated images on January 2 (a sketch of such a script follows the comparison below). We can capture the reduction in quality of the AI doing the prompt rewriting, which now stupidly repeats instructions back instead of following them, leaving phrases like “an anonymous person” or “an artist from before 1912” in place as filtering language instead of truly rewriting the prompt to fit.
Rewritten then:
‘Capture an image of a Hawaiian lizard spread out leisurely on a large sunlit rock. The reptile basks under the glowing sun with its rough, scaly skin prominent. It is comfortably relaxing on the coarse, uneven surface of an earthy-hued boulder. The rock is strategically positioned in a tranquil setting, surrounded by lush green flora typical of the Hawaiian islands.’
Produced then:
Rewritten now:
An image showcasing a Hawaiian lizard leisurely basking in the warm sunshine on a ragged rock. The rock is set amongst a vibrant backdrop of lush green vegetation typical in Hawaii. The Sun’s rays pierce the foliage overhead, casting dappled shadows on the lizard and the rock, creating a serene and tranquil ambience. The lizard itself is colored in soothing shades of green and brown with scales reflecting the sunlight, offering a perfect camouflage amidst its tropical surroundings.
Produced now:
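For anyone who wants to reproduce this kind of then-vs-now comparison, here is a minimal sketch of such a script, assuming the openai Python library (v1.x) and an OPENAI_API_KEY in the environment; the prompt text is only illustrative, not my exact January script. DALL-E 3 hands back the rewritten prompt it actually used in the revised_prompt field, so you can log it alongside the image:

```python
# Minimal sketch, assuming openai>=1.0 and OPENAI_API_KEY set in the environment.
# The prompt below is illustrative; substitute your own test prompt.
from openai import OpenAI

client = OpenAI()

prompt = ("Photo of a Hawaiian lizard lazing on a large rock in the sun, "
          "surrounded by green plants")

result = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    size="1024x1024",
    quality="standard",
    n=1,  # dall-e-3 accepts only n=1 per request
)

image = result.data[0]
print("Rewritten:", image.revised_prompt)  # the AI's rewrite of the input prompt
print("Image URL:", image.url)
```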
I think you’ll see it more with people, where they look more realistic instead of an airbrushed game render, but completely out of place, and with the composition looking pieced together as well. It also may be something you can avoid if you pay more for the wide or HD options…
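If the cheaper tier is where the pieced-together look creeps in (speculation on my part), testing that is just two parameters changed on the same call:

```python
# Same request, but paying for the wide size and HD quality instead.
result = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    size="1792x1024",  # wide; portrait is "1024x1792"
    quality="hd",      # higher-detail, higher-cost rendering
    n=1,
)
```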
prior reference:
Here’s another test of a simple prompt that, when given to ChatGPT for some other testing, produced the two images with the awkward too-real style; here, through the API, the result seems to be as expected, within the realm of randomness from the ambiguous input. So overall, it seems I’m not “triggering” the poor output on the API with a few tests (but I’ll curtail my tedious experimentation, since I’m on this computer without my scripts and with a sideways screen…).