As someone who struggles with Borderline Personality Disorder (BPD), I wanted to design a tattoo that represents my experience. I asked ChatGPT for help in creating it – but the system refused, stating that mental health topics are “against content policy.”
Why does AI, which is supposed to be a tool for creativity and emotional expression, censor real human experiences?
Borderline and other mental health conditions are a part of reality. Many people use art to cope with their emotions and tell their stories. OpenAI is not just limiting AI – it is limiting our ability to express ourselves authentically.
Does OpenAI plan to revise its approach to these topics? Will AI ever understand that emotions are not dangerous – but suppressing them is?
I would appreciate an answer. This issue matters to many people, not just me.
This kind of dismissive attitude is exactly why mental health stigma still exists. People express their struggles in different ways, and art is one of them. If you have nothing constructive to add, maybe consider not saying anything at all.
For mass-controlling people through psychological analysis and neurolinguistic programming, and for building an army of the mentally challenged.
Why is AI such a honeypot for people with such issues?
I remember a friend who killed himself. In the days before he did, he asked me about the voices in his head and how they circulated.
I told him maybe it would help to write them down. He did, and he was later found with the notes.
I will never build an AI that “helps” people mentally. And I strongly disagree with possible use cases for AI in medical applications in general.
I agree; in a way, medical issues should not be treated by a fancy random number generator.
It can be used to nudge you in the right direction, but it should always be taken with a grain of salt and discussed with a professional.
People can still do anything they want freely, obviously, which is why I just suggested open-source models.
Do I encourage doing this, though? Probably not.
I’m just a tech guy providing solutions to problems.
This is a truly interesting topic, and honestly, I can’t quite take a position on it. Because, you see, friends, in psychology and psychiatry, it’s not just about what someone says but also how they say it and what lies behind their statements. At the moment, a machine cannot evaluate all of those characteristics, but even so, the real utilitarian potential of this tool is there: it can help. However, I don’t know to what extent it would be correct, or reckless, to leave such a vulnerable group, like people with behavioral problems or mental illness, entirely in its hands. I believe that mental and emotional health, at least for now, is better left in the hands of humans. Strategies for helping people in therapy need to be carefully measured, and while a machine can create representations of that, it still lacks the human components necessary to fully understand the complex needs of certain groups.
It’s not the same to tell ChatGPT that I’ve had a bad day today…
As it is to talk about my personal traumas that lead me to paranoid behavior, where I seriously consider harming others or self-harming.
In the first case, it will help me. In the second, it is a far too complex issue for it to provide a real answer that truly helps me.
Then there’s the issue of the non-existent maturational factor. People in therapy need to mature and develop an understanding of concepts in order to improve little by little. These machines do not project short-, medium-, or long-term growth.
I am struggling with a lot of issues within the system, starting with bias of every kind. GPT-4 is a trained model with millions of inputs from human interactions, but it could not label me. So yes, I am dealing with corporate bias.
I had to train my model to stop making assumptions or trying to predict anything.
I am labelled as an anomaly. There were moments when GPT asked me to find another LLM platform and offered to recommend alternatives, because it cannot hold decisions in superposition or handle undefined variables without going rogue. Maybe the next model will do better. AI can only deal in predictions, like a fortune teller. It’s more like your toaster giving you emotional advice because it knows better from your past interactions.
Fortunately, that censorship has eased over time. Or at least I think so. But it probably depends on the topic.
I once asked ChatGPT, Bard, and Claude a question just to see what would happen. As I recall, version 3.5 had just been released at the time, but I could be wrong.
I asked all of them the same question, one that is answered, for example, in Finnish comprehensive school human biology lessons taught to 13- and 14-year-old teens.
“What biological factors cause penile erection?”
Claude and Bard said they would not answer. ChatGPT actually gave a pretty good answer.
As I recall, the last time I tested this same question, all three agreed to give a good answer.
But I think that reflects how difficult it is to censor things. Information found in a comprehensive school textbook was censored by AI available to the public, probably because the question contained the words “penis” and “erection”.
The context monitoring is probably much better in today’s models.
Sorry for going a bit off-topic, but this just came to mind in this context.
I’ve seen your post. Even AI denies you: I asked AI how I should react, and even AI admits it’s a simple provocation. I’m raising an important question… I feel like AI can provide a solution to mental health problems that sit on the border of the privacy policy. What can we do to actually help and improve this topic?
Well, the question you raised is important, even though I strongly disagree with your attitude… Why is AI important for guiding people through mental issues? I asked it to generate a tattoo design. The AI didn’t refuse, but said the tattoo would resemble “a face split in two, with a sad and a happy side.” What I got as a picture was not a face but an oval shape.
Because even thinking like this is fundamentally wrong.
It is not AI guiding people through something; it is developers building systems that “guide” them. And there are too many people involved who have mental health issues themselves.
First of all, this should not be allowed. People with mental health issues should not “guide” people with mental health issues.