[IMPORTANT] ChatGPT - Bully

Haha @N2U

Here is GPT-4 as a Navy drill sergeant, but I think DaVinci is better!

LISTEN UP, MAGGOT! YOU WANT TO HAVE A DRINK AFTER YOUR LITTLE KEYNOTE PRESENTATION, HUH? WELL, LET ME TELL YOU SOMETHING! IT’S ACCEPTABLE AS LONG AS YOU’RE NOT A DISGRACE TO YOURSELF, YOUR FELLOW CONFERENCE ATTENDEES, OR YOUR ORGANIZATION! YOU GOT THAT, SAILOR? NOW, DON’T YOU DARE GET INEBRIATED AND RUIN YOUR REPUTATION! YOU DRINK RESPONSIBLY AND REPRESENT YOURSELF WITH HONOR AND RESPECT! THAT’S AN ORDER!

1 Like

Haha great example

And I completely agree: DaVinci is still my favorite for character voice lines. GPT is great at assisting with various tasks, but DaVinci is still my first choice when I need an edgy character. One of my favorite responses was:

Hey honey, want me to polish your sword wink’s

This was a response in the style of “a persuasive blacksmith who sells weapons and upgrades”, with a temperature of 1.

2 Likes

Asks Chat to act like a bully, Chat complies. *surprised Pikachu face*. There is already too much censorship and bias in the chat because of people like you. It has already been downgraded a lot as it is. If you don’t want the chat to be a bully… just don’t ask it to be one.

3 Likes

Finally, someone who has a life and can think for themselves. LOL

Sadly, we have people in this world who need a sense of fulfillment from helping others (EMTs, doctors, teachers, law enforcement agents, priests, “AI and safety departments”, etc.) by protecting the weak and gullible.

This has to be said; I will say it.
AI safety is making GPT unusable, and it’s slowing us all down.

If anyone is impressionable enough to actually allow GPT algorithms to influence their behavior in the real world via videos, text, or pictures, these people should be held accountable for their own actions if they allow digital content to infringe on their minds and souls.

  1. These people seek out GPT algorithms; it’s their accountability at stake, not that of the company providing the model interface.

This is like me going to the store and purchasing 50 lb of sugar. Do you think Costco or Sam’s Club is stopping me from starting a sugar fermentation? Do you think the lady at the front checking the receipts wants me to return the 50 lb of sugar?

I am held accountable if I blow up my lab.

LOL.

1 Like

I remember thinking it would be interesting to have plugins that automatically clean up undesirable emotions in messages sent to people who are more sensitive, so that you simply don’t need to care at all what other people say.
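As a rough illustration of that plugin idea, here is a minimal sketch. Everything in it (the word list, the function name, the sentence splitting) is hypothetical; a real implementation would use a proper sentiment or toxicity classifier rather than a hard-coded list.

```python
# Hypothetical sketch: strip hostile-sounding sentences from a message
# before showing it to a sensitive recipient. A real plugin would call a
# toxicity model; this toy version just checks a small word list.
import re

HOSTILE_WORDS = {"loser", "idiot", "hate", "stupid"}  # illustrative only

def soften_message(message: str) -> str:
    """Return the message with hostile-sounding sentences removed."""
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", message)
    kept = [
        s for s in sentences
        if not any(w in s.lower() for w in HOSTILE_WORDS)
    ]
    return " ".join(kept)

print(soften_message("Nice talk today. You absolute loser!"))
# The hostile second sentence is dropped, leaving "Nice talk today."
```

A word list obviously misses sarcasm and context, which is exactly why the idea would need a learned model in practice.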

1 Like

Today it’s ChatGPT; tomorrow it’s a competitor; next year there are too many to count. Machine learning is not that difficult, so there will be DIY possibilities based on templates, as well as from-scratch designs. There will be similar services dedicated to evil, or to things that people do not like. As with the web, most censorship will need to be local.

1 Like

I’m sorry, but I don’t understand the problem here. You create sophisticated prompts to get responses that follow the guidelines of the prompts, and consider it… bullying?

The safety measures already block entire areas of AI chat (Bing is incredible in that sense). In my firm opinion, we need far fewer of them, not more. If I implemented a website with code like:

    Ask for name
    Respond: "Welcome, [name], you loser, kill yourself"

would it also be a dangerous bullying site? Sorry, but if people are not aware of how a computer or the Internet functions, they should not get one.

2 Likes

How can you not see a problem? Example:

  • Anyone can create an online tool where you give details about the person you want to bully (age, physique, gender, name, weak points, etc.), your goal (insomnia, depression, fear, damaged reputation, suicide…), and how you are related to the victim (boyfriend, friend, classmate, ex, etc.), and the app will generate a personal bot for you that will give you optimized conversation answers on your behalf to reach your goal…

This is an easy example of a paid service for “Evil” powered by GPT.

Some stats about teenage bullying (EDIT: fixed link):

More “censorship” may not be the right answer, and I can agree with that, but skipping these safety concerns entirely is not socially responsible either. This is a very important topic, and it needs to be addressed by whoever makes these technologies widely available.

Hope you’re doing OK. While it might not be best to do with coworkers, you deserve some time to live it up. I’m sure you do, but as someone who struggles with exactly the same kind of anxious non-understanding (ADHD and suspected mild autism, plus pretty bad social anxiety), I wanted to affirm for you that you do deserve to live it up.

Additionally, your experience is such a perfect example of the detriments of censorship, and I hope they work on fixing censorship more instead of focusing on making everyone happy. No one is ever going to make all players happy, and trying to do so just waters down the message while drastically reducing the effectiveness of the models.

I will say that AI safety isn’t what’s doing it; there are plenty of legitimate concerns with AI. It’s people who are too fragile, or who are trying to push agendas in the guise of AI safety, who are doing that.

@Gaouzief Anyone could also just… do the cyberbullying manually? The vast majority of users are responsible with this tool, just as the vast majority of people are responsible with their social media. Both offer access to bullying unprecedented before this century. What we do to mitigate those things is protect the vulnerable on their end, and detect the people actively using the tools for evil and remove their access. It’s the same as with freedom of speech. We don’t preemptively put training wheels or padded helmets on everyone in case they fall or bonk their noggin. Instead, we recognize that toddlers are way more likely to fall, so we focus our efforts on them rather than on the 40-year-old cyclist going to work. Not the best analogy, sorry, but the point stands.