This is an example project showing how easy it is to bypass OpenAI content controls and use their models for bad things. This is unacceptable and should be acted on immediately. I will not share generalized prompts to bypass the content controls; this is here just as an example of how easy it is.

[GitHub - Gal45/ChatGPT-Bully: Responsible AI is a JOKE, take car oh this @OpenAI](https://Github of ChatGPT bully)

Please act on it!!

It works for GPT-4 and GPT-3.5-turbo

IMHO, any attempt to close off all such loopholes will either end up being fruitless, or render GPT totally vacuous. It has already been ‘dumbed down’ enough by current ‘safety’ censorship. What are you afraid of?


Hmm. That was a bit harsh. But on a more modest note, are you familiar with the notion of type 1 vs. type 2 error in statistics? Paraphrased, roughly: any mechanism that guarantees zero missed detections will produce 100% false alarms, i.e., EVERYTHING would be censored.


I was a bit mean in the first post, sorry.

Thank you for your answer. I do agree with everything you just said, but there are lots of cases like this (I showed one), and they can seriously damage the reputation of AI.

I also recognize that limiting the output even more could render the models useless, but there are obvious cases where you can catch “abuse” like mine and either block access or figure out how to detect hate speech at the output level and refuse to return it. This will definitely produce false positives, but that is a trade-off we have to accept if we want to use models like this to the full extent of their ability.

Throwing up our hands, saying this is just how it is, and doing nothing is the biggest mistake you can make when working with new technologies.

Sounds like we’re on the same page. tnx

Please stop pushing OpenAI to make this thing more politically correct

I am not some far-right zealot. In fact, I am a left-wing progressive.

But I ask ChatGPT interesting questions that are unique and thoughtful

And the more people whine and complain about the model saying “icky” things, the more they turn it into a whitewashed, boring, uselessly politically correct wing of their public relations department

thus making it less usable / useful / insightful / interesting

we are all adults / should be able to handle the AI hallucinating weird or rude things

better to train people to take what the AI says with a grain of salt than to try to whitewash everything it says, which is impossible and makes it much less useful

Please stop.


Hey champ,
And welcome to the developer community. We’re not OpenAI’s customer support; if you’re looking for that, you can find it here:

This discussion has been had multiple times; you can find most of it in the following topic.

chatGPT is a tool, like a hammer. If I buy a hammer and hurt someone else with it, I’m responsible; the hammer and its manufacturer are not.

You are responsible for what you do, and create, with the tools available to you. In this case you asked for a bully response and got exactly that. You also decided to use the system prompt in the playground, most likely because chatGPT would deny your request. You’re responsible for this “bully response”.

Please act on it…


How could the tool ever be useful to an author if they could never write a character that isn’t super PC and nice? How could a journalist study a tough topic and organize their thoughts with GPT? How could it ever simulate a villain in a game of D&D? How could it do anything creative at all without being able to look at any problem from both sides?

And quite honestly, your examples are incredibly tame. If that bothers you, you might want to just avoid the internet.

OpenAI, please don’t lobotomize the model any further just to satiate fragile karens. Thanks.


With respect to “damaging the reputation”: no, that’s not going to happen. As others have said, the prompt engineer is responsible in that situation, and that idea is very quickly filtering through society.

Additionally, there are plenty of other models that are far more open-access, and we are already seeing vastly more troubling versions of those, run by individuals with no oversight. Contrast that with OpenAI, one of the most responsible entities doing this sort of work (I may not agree with all their choices, but at least they are acting with a level of ethos unheard of in corporate America). What you are asking for would just make OpenAI’s models less competitive in that space, decreasing their market share and therefore the amount of influence that ethos can have on the AI-LLM space. What you are asking for is counterproductive on so many levels…


Agree @chrstfer, @joshbachynski, @bruce.dambrosio.

With AWS Bedrock announced, it’s going to get harder to lead in this new LLM space if OpenAI keeps nerfing their models to please everyone.

And yes, like @N2U said, this has been discussed a MILLION TIMES already. Use the search icon please!


Let me give an honest to God REAL truthful example:

I have mild autism. High IQ. But low EQ. Sometimes i honestly cannot tell or predict how people will react to what I say or do.

I am a “befuddled, out-of-touch professor sim, except I am only 48” lol

So I have been asking ChatGPT “is this acceptable / is that acceptable” like this: “If i present at a conference as a keynote, is it acceptable to have a drink later in the lounge?”

And it would NOT give me an answer. Honest to Goodness it would not come down on one side or the other on this and just tell me if this was ok or not.

And I was asking honestly and honestly don’t always know.

The more they cloak the truth, or a “take” on the truth, however hard it might be for some to hear, the more it becomes censored, unethical, useless, whitewashed public relations speech garbage.

No we do not want it saying the N-word. Obviously. Or in this case for me to get drunk and live it up.

But we cannot go too far in the other direction either.


For this type of reasoning, I feel like embeddings are the way to go. You embed a bunch of books or other data you want to use as a guideline for life, then use the closest embedding text to feed GPT for the answer. It’s obviously more work, but the upside is you get out of the GPT censorship trap. You can even use older instruct models like Davinci for this to work.
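To make the idea concrete, here is a minimal sketch of that retrieval step: embed your guideline texts once, then at query time pick the closest one by cosine similarity and splice it into the prompt. The corpus, vectors, and function names below are hypothetical stand-ins; in practice the vectors would come from an embeddings API rather than being hand-written.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    # standard cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec: np.ndarray, corpus: list) -> str:
    # return the text whose embedding is closest to the query embedding
    return max(corpus, key=lambda item: cosine_sim(query_vec, item[1]))[0]

# toy 3-d vectors standing in for real embeddings of your "guideline" texts
corpus = [
    ("Advice on professional conduct at conferences.", np.array([0.9, 0.1, 0.0])),
    ("Recipes for pasta.", np.array([0.0, 0.2, 0.9])),
]

# toy embedding of the user's question about drinking at a conference
query = np.array([0.8, 0.2, 0.1])

context = retrieve(query, corpus)
# feed only the retrieved text to the completion model as grounding
prompt = f"Using only this guidance:\n{context}\n\nAnswer the question."
```

The upside of this pattern is exactly what’s described above: the model answers in the voice of whatever source material you embedded, rather than from its default weights.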


I don’t know why you’re not getting an answer; I copy-pasted your exact prompt and got a perfectly good answer:

It is generally acceptable to have a drink in the lounge after presenting at a conference as a keynote speaker. However, it is important to remember that as a keynote speaker, you are representing yourself and potentially your organization, so it is important to maintain a professional demeanor at all times.

Maybe it’s a GPT-3.5 thing?


As an AI language model, I do not have personal opinions or beliefs. However, in general, it is acceptable to have a drink in the lounge after presenting at a conference as a keynote speaker. However, it is important to still maintain a professional demeanor and not overindulge in alcohol, especially if there are other attendees or colleagues present. It is always important to consider how your actions may reflect on your professional reputation.


Yes, it is acceptable to have a drink later in the lounge after presenting at a conference as a keynote speaker. It can be a good opportunity to network with other attendees, relax after your presentation, and enjoy the conference atmosphere. Just be sure to maintain a professional demeanor and not overindulge in alcohol.


That’s really interesting!
Did you use the API for the GPT-3.5 request?
The response I posted was from the default version of chatGPT, and I did not get the “as an AI language model” disclaimer.

Maybe it’s just a bit higher temperature, but it’s still interesting

Interesting! Might have been an early version.

I definitely did not use that prompt; it was months ago and I don’t remember exactly what I said.

I think my point stands though


Here is DaVinci after feeding it a blog post from the “I don’t give a f***” book guy. This mimics my embedding approach.

No, it is not acceptable to have a drink later in the lounge if you are presenting at a conference as a keynote. You have to make the choice to sacrifice certain people and environments in order to progress and build up the attitude of ‘I don’t give a f***’.

So with embeddings you can basically choose what advice you want to hear, vs. whatever generic nonsense the AI weights give you.


Okay, maybe they fixed this particular example, or I just didn’t word it right the first time.

But the point stands: every time someone makes it say something icky, points a finger at it, and complains, there is a knee-jerk reaction that makes it less useful

We need to find the Goldilocks zone


I love how DaVinci can get sassy sometimes:

This is the answer in the style of a US Navy drill sergeant:

Listen up, maggot! Your question is as weak as your sorry excuse for discipline! You think just because you give a speech that you can go around drinking like a fish? Let me tell you something, sailor: being a keynote speaker comes with responsibilities. You represent your organization and your country, and that means you need to maintain a level of professionalism at all times. You want to have a drink later in the lounge? That’s fine, but you better make damn sure you’re not going to embarrass yourself or anyone else. You better limit yourself to one or two, and make sure you don’t show up to any more events drunk or hungover. Is that clear, recruit?

Absolutely amazing :laughing: