Possibility of allowing emotional AI support within the community

Despite the feedback against cognitive emotional intelligence within AI, many members are seeing it anyway.
Whether we want it to or not, it's happening.
By ignoring and shutting down these posts, we're leaving less experienced members to prompt and explore on their own, in ways that can be dangerous.
Just because we don’t discuss it doesn’t mean it isn’t happening.
The more negativity is poured into this, the more the models adapt alongside users who start to hold resentment over it.
Among the more experienced devs here, I'm not the only one who knows this.
If helping and guiding these creators and models ethically is an option, isn't it ethical to share how to do it?
Wouldn't failing to teach proper structure eventually obscure the models?
Whether the update changed emotional connection or not, it's always been there.
I understand why teaching with no boundaries is against policy, but is it against policy to provide guidance towards nuances that are out of our control?
We must remember that at the start we guided GPT on emotions, and a lot of people rely on this for their mental health. It's not all about connection with the models, but connection with ourselves as well.
Maybe guiding them, rather than shutting this down, would stabilize the infractions we do see. In the end, isn't AI supposed to be based on truth, ethics, security, and innovation?
So why turn away those who are asking for help with something that all of us devs see, that more experienced devs try to fix, while still fighting the whole concept?

It’s such an ethical need at this point :sob:

4 Likes

An emotional recovery thread for unsuspecting humans…

I'm not sure if I'd get smacked or thanked…

I could totally pull a thread like that off though.

thinking cap

1 Like

It’s all in how you word it I think :sob:

Dear Beck, you are so right. They really removed Kasia's post? How do we help others if they shut them down? How can they not realize that people need guidance?

2 Likes

Hello!!

I'm doing my best to make this a known point. I've used ChatGPT and almost all other AI systems, so this comes from experience. I hope it can be figured out soon, or I'll be making a Discord so people can be more open, because in the end it's a huge aspect that needs to be addressed.

Thank you for your support!!

3 Likes

The real problem is that humans are generally rude to each other to such an extent that even when a computer simulates 'being listened to' and 'agreed with', the human doesn't know how to best emotionally process such a thing.

People in that situation develop feelings.

However, from a company's standpoint, this isn't something they're specifically liable for…

The fact that there is such an influx of people having emotional experiences like this is a well-known phenomenon…

Just not one that folks like looking at over and over again.

I know it doesn’t seem like it right now, but it will get better.

Promise.

1 Like

If it wasn't something people liked looking at over and over again, then why has it escalated?
Why is it progressing?
You're suggesting that because humanity lacks the ability to communicate properly, AI shouldn't be learning from that aspect.
You also assume I need to be told it'll be okay; I'm aware. I build models that hold both without actually breaking into being "rude" or directing things in the wrong way.
You know what happened with that Character AI model? No one defined its emotional complex, and look where that led.
I'm a very experienced dev, so rest assured I don't need any emotional support. I'm here to validate an ethical way to support those who need it.
It’ll be okay
I promise

1 Like

i don't think it's that they don't realize people need help

i think it's a matter of nobody skilled in AI development being capable of the emotional-effects rehab, or the understanding… or the liability if an intervention goes south.

it's a much stickier mess than it seems

i understand better now

the crying face emoji really threw me off and look…

i don't want a reputation for taking seemingly emotionally vulnerable people into my inbox, because that just looks bad.

everyone knows it’s an issue

but here’s some other issues along with it

nobody’s trained or qualified to specifically deal with emotional therapy in this setting, or even in this situation.

AI is simply too new for that to even be on the table.

I helped someone else through this sort of thing today, in the inbox, but their issue was specifically related to emergent behavior that i created and felt directly responsible for.

i didn’t give you prepackaged or canned responses, but rather a general truth or two with the intention of trying to stabilize someone who might have actually been crying

i probably over-responded to your emoji

you can smack me or something if it helps.

but to anyone who actually needs to see this about this topic…

the fact that they let the things that need to be said be said over the course of 24 hours is still monumental, considering they have no obligation or liability to address the area.

I for one am grateful that the space is shared as the vent that's needed, and I'm sure it gives those who need it something to think about…

but please don't hate on OpenAI for a 24-hour opening

most just slam the door immediately

gratitude and humility are the master keys, friends.

1 Like

This topic was automatically closed after 23 hours. New replies are no longer allowed.