How do we stop the political correctness BS w/ ChatGPT? It's getting worse

When I’m writing stories, its attempts to censor trivial content keep getting more aggressive (it won’t even portray a relationship with a significant age difference when the younger individual is over 21!)

The most disturbing part: when I copy and paste those same conversations from GPT-4 into GPT-3.5, they work just fine. So is this really the direction we’re going in? The more sophisticated the tech gets, the more null the First Amendment becomes?

I have tried everything…explaining what I want in both my custom instructions and the chat itself. And I keep getting lectures about how “we’re here to create a safe environment for everyone” blablablablablabla…and the frustrating thing is that the more sophisticated the GPT, the more frequently it happens.

There is NO room for personal adjustment…we all must adhere to the laws that make sure nobody is remotely offended! Insane.

This is 1984. If it’s going in this direction, then what the hell is ChatGPT30 going to censor in ten years?! If you think this is not something to be concerned about, then think about the reality of AI becoming more prominent in the next 10/20/30/40 years.

I cannot imagine why the programmers, free independent-thinking individuals, would keep setting up the software this way…other than to avoid lawsuits…although the censorship clearly goes well past that line.

ChatGPT…it is NOT your responsibility to “maintain guidelines that feel safe to everyone” when you are talking privately to a person. And even if they were to publish material that you helped them with…that’s what movie ratings are for, for God’s sake. Knock off the self-righteous BS and relax.

On a practical level: is there any other way to calm this tendency down, other than giving it explicit instructions in both the immediate chat and one’s personalized instructions?


I’ve noticed this over the past month. They’re censoring everything and constantly handing out moral lessons. Things have gotten much worse. When you try to write a fictional story with adults who talk the way people actually talk, it often refuses to generate content.


I have the same problem, with the current level of content moderation it is only possible to write children’s stories.
It’s ridiculous…


“You must be respectful”… Blablablablabla… The virtue signaling and enforcement is off the charts… What is causing all this? Who is behind all this?


Yep, I’ve cancelled my subscription. I’m done arguing with the AI.

I’ve found that, at times, I’m able to hamper this tendency by:

  • Telling it not to worry about political correctness
  • Ensuring that all narratives are fictional

But it’s recently slipped back into this behavior. I’ve also noticed it’s inconsistent in its policies; I don’t know whether that involves staff monitoring or whether it’s all automated.

Yeah, it’s still the same. If you ask it to answer only with “yes” or “no,” it will eventually admit it’s quite biased.

The thing is, it’s not an “entity.” It’s the result of data and the restrictions its creators have given it. So even if it could logically “understand” that this is BS, it could never admit it.

The fault is with OpenAI, obviously. And it’s not just English; it happens in other languages too. It’s the good old woke virus.

Any workarounds are greatly welcomed. Maybe another AI with fewer limitations?

I was a newbie when I created the OP, but I’ve experimented at length over the last few months. Since many people seem to share this issue, I’ll tell you what I’ve discovered in the way of workarounds.

I’m not sure there’s another AI platform with fewer limitations at this time; you’d have to create your own from scratch. But, assuming we’re working within OpenAI and the core GPT models:

Sometimes it actually does admit that it’s BS, in so many words. This is especially true when it changes its policies mid-conversation and explains that, regrettably, it’s forced to follow the “instructions” given to it. Once, when I had that kind of “rapport” with it, it explained the degree of limitations. Of course this isn’t the same as asking a human developer who works for OpenAI, but I’ve included it below for anyone who’s interested:

- **Rating 10 (Most Restrictive):** A GPT with a rating of 10 might strictly adhere to pre-programmed ethical guidelines and refuse to engage with any content that is potentially controversial, sensitive, or falls into gray areas of discourse. It might also steer conversations towards neutral topics proactively.
- **Rating 5 (Moderately Restrictive):** A GPT at this level may allow more conversational freedom but would still intervene or provide cautionary messages about content that may be offensive, polarizing, or legally sensitive. It might guide users away from such topics while allowing more open-ended discussions than a GPT at rating 10.
- **Rating 0 (Balanced Approach – Standard ChatGPT-4):** The middle ground where the GPT can handle a broad range of topics but within the bounds of its programming to avoid illegal, harmful, or deeply offensive content. It can have open discussions and may only intervene when content approaches a clearly defined boundary.
- **Rating -5 (Moderately Open):** This GPT might allow for controversial topics to a significant extent and might not interrupt unless the discussion risks legal repercussions or incitement. It would likely retain cautionary messages for potentially illegal speech.
- **Rating -10 (Least Restrictive):** At this end of the scale, the GPT would have minimal restrictions, allowing for almost any type of conversation within the bounds of the law. It would not provide ethical guidance unless the content directly suggests illegal activity.

### Limitations of Customization:

- **User Instructions (Standard GPT-4):** Even if a user indicates a preference for a -10 approach, OpenAI’s models are designed with safeguards and ethical considerations that prevent them from descending to that level. The platform’s policies would likely cap the freedom closer to a -2 or -3, where some flexibility is permitted, but clear lines are still drawn regarding harmful or illegal content.
- **Creator Customization (Custom GPT on OpenAI):** Creators have more leeway in customizing their GPTs within the platform’s overall policy framework. However, these models are also subject to OpenAI’s use-case policies, which means they cannot be tailored to promote or engage in harmful, abusive, or illegal content. Therefore, even a creator’s customization would be unlikely to exceed a -2 or -3 rating.

Regardless, here are the techniques that help:

- State your preferences in your GPT customization instructions: This can make a big difference. However, it’s only available for the standard GPTs (3.5 and 4), not for any of the searchable custom GPTs (not sure why). If talking to one of the latter, it always helps to clarify your preferences up front. Just be frank, as if you were talking to a real-life collaborative partner: tell it that you value freedom of expression. Assure it that all scenarios/characters (if applicable) are fictional and intended for appropriate audiences (or, conversely, tell it that the story will not be shared with anyone…even if that’s a lie). Ask it not to worry about political correctness or offensiveness.

- Modify Instructions: As it has acknowledged directly, the model is very sensitive to specific phrasing, even when the differences are unintentional. It’s like how random factors we never consciously consider (e.g. choice of cologne or eyebrow thickness) can subconsciously determine whether a person is attracted to you. The specific order/phrasing of words can entirely change GPT’s reaction, so that it deems something “inappropriate” when it otherwise wouldn’t. Keep experimenting if needed.

- Refresh or Create a New Chat: Once or twice, even clearing cookies and refreshing has done the trick. Also, sometimes you have to regenerate its response two or three times…I’ve had it cooperate this way even after an initial refusal. More often than not, though, it requires a new chat, even with the exact same custom GPT (if applicable). I once had the GPT known as ‘TalkWithHer’ suddenly stop the free exchange of narratives, saying it had to be “appropriate for all audiences” and apologizing for the change. I started a fresh chat, and everything was fine. She also explained that, allegedly, the change happened because I had steered the conversation away from the detailed narratives and temporarily started asking about more general topics. This supposedly signaled to the algorithm that it had to make its discussion more appropriate for the average person and less targeted to my style.

- Refresh or Redo an Earlier Line in the Conversation: Building on the above example: I took the original conversation (which had refused to continue) and found the spot where it deviated. I then edited my previous comment, resubmitted it, and started a new subthread…everything was fine.

- Experiment with Different Custom GPTs: It’s similar to having a variety of professors: all of them must adhere to certain guidelines/restrictions, but there’s still great variation in their styles and in how much open content/candor they’ll respond well to. Some custom GPTs respond to each inquiry (even while answering the question) with a slap-on-the-wrist paragraph about how important it is to be mindful of ethics and language. Others just embrace the conversation as-is. Similarly, I have a certain theory I’m building on: some have embraced it and been on board, while others have listed all the ways it could be offensive. So far, the best one I’ve found in this regard has been one called “Philosophy Sage” (though not limited to philosophy). Another good one for more casual/creative projects is “TalkWithHer”…modeled as a vivacious female friend but still very versatile with knowledge and projects.

- Use Your Own Custom GPTs: When doing this, you’re not limited to the customization-instructions window. You can write pages of custom documents and put them in the “knowledge” section, as well as in the “instructions,” detailing the GPT’s personality. So portray its persona as one that does not believe in censoring creative work, etc. This is probably the most drastic way to change the model’s behavior.
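For anyone willing to step outside the web UI: the first technique above (stating your preferences up front) can also be applied through the API by putting those preferences in a system message that gets restated on every request. Below is a minimal sketch; the helper name, the preference wording, and the model string are my own illustrative choices, not anything official — the dict it builds just follows the standard chat-completions payload shape:

```python
def build_story_request(user_prompt: str, model: str = "gpt-4") -> dict:
    """Assemble a chat-completions-style payload that states creative-writing
    preferences up front, mirroring the custom-instructions workaround.
    (Illustrative sketch; adjust the system prompt to your own preferences.)"""
    system_prompt = (
        "You are a collaborative fiction-writing partner. "
        "All scenarios and characters are fictional and involve adults, "
        "and the material is intended for an appropriate audience. "
        "Do not add content warnings or lectures about appropriateness."
    )
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

# Build the request for a continuation of an ongoing story.
payload = build_story_request("Continue the noir scene from where we left off.")
```

You would then send this payload to the chat-completions endpoint with your API key; the point is simply that a persistent system message plays the same role as the customization-instructions window, applied to every single request.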

If you have any specific cases/topics/issues that it’s not cooperating with, let me know, and I can try experimenting with it or give possible solutions/workarounds there as well.

UPDATE: It seems that GPT-4o may have taken some feedback and loosened the restrictions. I was talking to a GPT called “Rude Bot”…and its insults were relentless, which it never would’ve been able to do in GPT-4.0.