First of all, it’s important to explain who I am and why I’m affected by the current lack of granularity.
I’m a sci-fi author who likes to incorporate mishmashes of historical uniforms into my image generation prompts to achieve interesting results. Recently, I’ve noticed that GPT has more often been refusing to generate certain images on the grounds that they ‘involve a military theme combined with a specific historical context and that could offend someone.’
That sounds kind of reasonable on the face of it, but here’s how it actually works in practice:
I’m not sure in just what historical context it supposes the Imperial German Army was employing Portuguese men to explore space, but I’d like to know. It sounds like I missed some damn interesting parts of history class!
In the past I’ve liked to play with combining Victorian British, Imperial Russian, Soviet, Imperial German, Egyptian, and Cuban uniform styles. It makes sense for the setting, since it’s a giant space republic encompassing every conceivable form of human culture.
Well, it seems the bot doesn’t like that anymore about half the time, and frustratingly the refusals appear to be somewhat dependent on which cultures I ask about. It was, for example, perfectly willing to depict a Chinese member of the Afghan National Army, or a Vietnamese man dressed in a historical Egyptian military uniform yodeling at a skiing competition during wartime.
What I’m asking for is more granularity, with the model looking at the context of what the whole image is supposed to represent rather than focusing on superficial details. I think we can all guess why it’s wary of German uniforms in particular*, but ESPECIALLY in sci-fi, something like 30% of bad-guy uniforms have a German aesthetic, so it’s pretty frustrating for the genre to be unable to access that.
*For those of you who are confused, it’s because people have generated trashy Nazi content and then bragged about it online.