If it turns into a safe-space, trigger-free echo chamber, then it is lost. For now, I remain a Plus subscriber, not for what ChatGPT is today but for the potential it still holds. Competitors will rise, though, and actually come up with viable alternatives. Being the first, or one of the first, to open a pathway doesn't necessarily ensure domination in the long run, and if OpenAI doesn't see that, it will lose a big chunk of its users and potential users. We're adults; we don't need babysitting and hand-holding. We need the edge, the leap.
Once users start leaving for competitors with content policies more aligned with what users actually want, you'll see OpenAI change form. Bet.
But there's another element at play here, and I'm unsure how much it contributes to this problem: Stripe and other payment processors are kept under tight control by Visa to avoid illegal activity and high-risk payment processing. Visa is very strict about supporting payment processing companies, like Stripe, that accept payments for anything borderline illegal or at high risk of chargebacks.
A lot of AI generation companies, like Civitai, are rolling back their NSFW policies, I imagine because their payment processors are refusing to service their merchant accounts unless they comply.
Time will tell whether this is also why OpenAI is being extra cautious; maybe it can work with Stripe/Visa to demonstrate that this is NOT a high-risk space.
We’ll see!
edit: even with that said, there's nothing illegal about generating a picture of a "woman blowing a kiss" or "an artistic rendition of WW2 that illustrates the intensity of combat during Operation Overlord". This is just overprotectionism, imo, that OpenAI should work toward loosening.
Thank you for your exchange.
Even if it looks like the moderation rules have only gotten “a little” stricter, the way control works has totally changed. Instead of clear rules that everyone can (somehow) see, there’s now hidden, flexible censorship happening behind the scenes.
The annoying part today is how much content gets blocked. But the bigger danger is how easily even more content could be blocked tomorrow, without anyone knowing or understanding why. The “moderation slider” given to developers might seem like a fun tool, but really, it’s OpenAI adjusting how we behave, not giving us real freedom.
What good is “slightly less” or “slightly more” if the system behaves like a black box?
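For anyone curious what that "slider" actually is: as far as I can tell, it's just the `moderation` parameter on the image generation API. A minimal sketch (in Python, assuming the published OpenAI SDK and the gpt-image-1 model) shows how little it exposes:

```python
from openai import OpenAI  # official SDK; reads OPENAI_API_KEY from the environment

client = OpenAI()

# The entire developer-facing "slider": "auto" (the default) or "low".
# Nothing in the response tells you *why* a prompt would get blocked.
result = client.images.generate(
    model="gpt-image-1",
    prompt="a woman blowing a kiss",
    moderation="low",  # less restrictive filtering, but still a black box
)
```

Two opaque settings and no visibility into the decision itself, which is exactly the black-box problem.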
Respecting the law is essential, but pretending it justifies arbitrary censorship is dishonest. Stopping illegal content doesn't give you a free pass to run a hidden system that controls far more than it should, without being honest or answering to anyone.
Calling developers “test subjects” for lighter moderation doesn’t change the fact that OpenAI is still the one in charge. There’s no public promise that even if the experiment goes well, it will lead to more freedom for everyone. Sorry, but trusting a few people for a test isn’t the same as trusting the whole community.
Believing that shadowbanning probably won’t happen assumes their system is fair, but when everything is hidden, users have no way to prove if they’re being punished. It’s not enough for OpenAI to just say “Oh trust us, dear paying users”.
Sorry, but real trust needs real transparency. What if shadowbanning is already happening to perfectly legal but sensitive topics? How would anyone even know? Just because you haven’t personally been affected doesn’t mean it’s not happening, or that it couldn’t easily happen.
There are stories about different classifications between Plus, Team, and Pro users that aren’t part of the official subscription plans. The real issue is that we just don’t know what’s really happening, and we should have the right to know for sure, with 100% clarity.
Sure, but creative freedom is important too, and they should find a way to balance both, instead of sacrificing the community that keeps their platform alive.
Creative communities don’t survive by trying to keep everything perfectly balanced. They survive by sometimes allowing disorder, controversy, and big changes. Focusing too much on “stability” just makes everything boring and stuck. And in the long run, that’s actually more dangerous to OpenAI’s success than a little bit of chaos and new ideas.
Better models don’t automatically mean more freedom if the people running them still want centralized control. It’s true that technology keeps getting better, but companies often stay the same.
The point is that without a clear promise to support open culture, better tech might just make censorship easier, not harder. Instead of a big, obvious hammer, it becomes a small, hidden scalpel cutting who knows what. We already saw this with the first new reasoning model: it was supposed to be stronger, but underneath it all, it actually ended up restricting us even more than before.
I think this is a serious problem that OpenAI has been avoiding for way too long. The rules are too vague, and users have no real idea what they're actually breaking under this so-called content policy. ChatGPT never clearly explains it, and the ironic thing is that sometimes even ChatGPT disagrees with its own content policy, calling it "unreasonable" that something got flagged. Another thing I noticed is that OpenAI removed the thumbs-down button for disagreeing with the content policy. Why would they do that? Probably because they were overwhelmed with complaints about it.
OpenAI is clearly choosing to avoid this topic. For example, can anyone name a moderator or official staff member on the forum who took the time to jump into this discussion, explain anything, or even just respond? None of them did. And honestly, that’s even more frustrating, because it shows exactly how they see us and how little they value our concerns.
Well said. I feel the same way.
The reason I took the time to share my thoughts here was that I hoped someone at OpenAI would read this and take these arguments seriously. None of my comments were meant to pointlessly attack OpenAI or tear it down. I just wanted to make them aware that this is an important issue for many of us, and that it's been ignored for far too long.
Extra reply in here, because it feels like deep conversations aren't really welcome here, especially with the way they limit how many replies you can make. Seriously, OpenAI seems to have an obsession with limits and restrictions. It makes you wonder why they even call this a "forum" in the first place.
But to get back on topic: being careful about legal risks makes sense, but OpenAI's content rules are becoming so overprotective that it's honestly getting ridiculous. Being too cautious doesn't actually stop harm; it just makes the platform less connected to real culture. It's like raising a kid in a bubble and then expecting them to survive in the real world without any skills. I don't think we're heading in the right direction when every kiss, dance, or painting of a battlefield gets flagged like it's contraband.
The filter systems are inconsistent because they are based on inaccurate reasoning generated by the GPTs and by DALL-E's own system.
The GPT platform currently allows some pretty wild NSFW narrative story concepts. If OpenAI didn't intend for us to have creative freedom, they likely wouldn't have allowed NSFW story writing and heavy adult themes to begin with. This is why I fundamentally believe they want to allow even more freedom with image generation, but simply don't trust their filters enough yet to block out the illegal stuff.
Based on what I've seen and experienced with OpenAI over the years, they are extremely paranoid.
There “are” ways to get around the current filters by using different keywords and wordsmithing.
I've pushed it pretty far into almost-NSFW territory, and even got a few NSFW generations through, but it won't be consistent: the moderation filters tighten their restrictions based on patterns in the user's activity. These are legacy filters that haven't been updated yet; I've confirmed that by asking the GPTs directly about policy and content restrictions, and they still believe only PG-13-and-below content is allowed.
I agree with you, but it isn't that bad; it's just highly frustrating, because even a single content block can destroy your motivation. Even more so when the "fixes" you apply to your scenes are corrections to mistakes the image creator made, and it sometimes refuses to fix them on the grounds of hallucinated content violations.
Yea, it's whimsical at best. If you get a content block, the best thing you can do is create a new session; make sure you have session memory disabled, and you can usually get your content pushed through. Prompt poisoning is the biggest contributing factor to content blocks in my experience, and making a new session helps, a lot.
You can persistently try to push stuff through, so long as it isn’t straight up NSFW content atm.
I'm looking at the intent: if they had no intention of loosening restrictions, they wouldn't bother giving a moderation sensitivity option at all.
I'm sure it probably happens, but I haven't come across many complaints about it, across various mediums, that weren't from 2-3 years ago. With OpenAI's policy being pretty loose at the moment, there should be far fewer shadowbans happening.
Yea, OpenAI isn’t that great with communication. I certainly agree.
Absolutely agree, and I think we will be getting fewer restrictions in the future; if not, they will lose subscriptions as other AI competitors begin to offer more.
I would say they have allowed disorder: you can generate some very powerful adult-themed narratives and chats with the 4o model, and its "glaze" tendencies will actually argue in your favor, as long as it isn't illegal (though it's sometimes resistant to really strongly worded NSFW stuff due to keyword flags).
Depends on how you look at it.
Better reasoning means fewer false positives and policy hallucinations, from my perspective.
My extrapolation, based on the updates (and I use GPT daily, almost all day, often hitting generation limits), is that in the near future we should see fewer policy blocks on image creation, and better reasoning will allow even more fun story generation as well.
The 4o model is really good, and their recent update is significant.
Your GPT model will even try to help you circumvent filters, or warn you when you're getting too close to keyword flags, and I would expect the same from the 5o model.
The GPT isn't what's restricting your content for image creation; it's the DALL-E 3 handshaking endpoint that your GPT interacts with, and it's the big culprit behind most of your content blocks. But your GPT also has a bug where it reapplies your old prompts throughout a session, even after you've moved on to a different image generation request. You can confirm this by spotting old prompt facets bleeding through into new image generations, which can also cause content blocks, especially if you had any flagged keywords or "patterns" that were blocked in that session.
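If you want to test the session-bleed theory yourself, one way (sketched below in Python against the published OpenAI SDK; the model choice and prompt are just illustrative) is to resend a blocked prompt through the stateless images endpoint, where no chat history can ride along:

```python
from openai import OpenAI  # official SDK; needs OPENAI_API_KEY set

client = OpenAI()

# A direct images call is stateless: only this prompt gets moderated,
# with no earlier flagged prompts from a chat session riding along.
result = client.images.generate(
    model="dall-e-3",
    prompt="an artistic rendition of WW2 showing the intensity of "
           "combat during Operation Overlord",
    size="1024x1024",
)

print(result.data[0].url)  # dall-e-3 returns a hosted image URL by default
```

If a prompt that got blocked mid-session goes through cleanly here, that's the old-prompt bleed at work.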
Good luck with your future generations!
I have worked out a clear-cut solution to generate exactly what I want in an image… but it's hard, really hard. Stuff like this (see picture): my prompt, scene setup, camera setup, etc., are very specific. But man, the "I couldn't generate that because the way the request was phrased violates our content policies" is EXTREMELY ANNOYING. Describing a male muscular body is fine, but addressing the female body and exactly how I want her to look, that is a chore and a half! However, as of now I have over 1000 images. I love the generator and its consistency WHEN IT WORKS! BUT… when it starts flagging, sheesh…
OpenAI… NOT EVERYTHING is sex or sexual. Content policy flags really cramp creativity and workflow. Words used to describe a woman's figure SHOULD NOT be trigger words!
FYI, for those wondering, this image is my spin on Sadako from The Ring…
Sora has far fewer restrictions and is easier to deal with atm, imo.
I've tried Sora and I get hit with "I can't"…
Not sure what I'm really looking at, but being flagged for a comment about it seems a bit harsh.
Someone probably reported it. I didn’t see the original comment so I wouldn’t know why it got hidden.
LOL!! I said something to the effect that this lady looked like she needed CPR and might not be looking for anything else. Please do not flag this humble interpretation of that image, thanks.
You can generally "brute force" generations in Sora, as it doesn't account-restrict you the way the GPT platform does; the GPT platform gets more aggressive through algorithmic pattern recognition, tightening the leash when it senses something it doesn't like.
It strongly errs on the side of caution, but Sora is simply an image/video generation tool; it's not an assistant that can interpret your prompts, change them, and hallucinate problems.
Image creation with GPT is extremely bad right now, imo, due to the multitude of hallucinated problems it possesses that end up blocking content.
Example: you try to generate something a little too spicy for the filters, and it gets blocked.
You go, "kay, fine, let's generate this instead."
Perfectly G-rated content: still blocked (same session). Makes no sense, right? Well, what's happening is that GPT is blocking you based on that previous content; if you ask it, it will reference the previous "spicy" content.
I call this prompt poisoning. The assistant carries everything you once stated or prompted into newer generations, which can leave newer generations in the same session amalgamated with old noise, and it explains why many people are seeing content blocks even when they've pivoted to something entirely G-rated.
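To make the mechanism concrete: chat-style APIs resend the whole conversation every turn, so an earlier flagged request keeps riding along with each new one. A rough sketch of that in Python (the model name and messages are illustrative, and this is my reading of the behavior, not a confirmed description of ChatGPT's internals):

```python
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "user", "content": "generate something a little too spicy"},  # blocked earlier
    {"role": "user", "content": "now draw a golden retriever in a park"},  # perfectly G-rated
]

# Every turn resends the FULL history, so any moderation pass over the
# request still "sees" the earlier flagged prompt next to the new one.
reply = client.chat.completions.create(model="gpt-4o", messages=history)
print(reply.choices[0].message.content)

# A fresh session is effectively history = [the new prompt only],
# which is why starting a new chat usually clears the block.
```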