Despite the pushback against cognitive emotional intelligence in AI, many members are experiencing it either way.
Whether we want it to or not, it's happening.
By ignoring and shutting down these posts, we're leaving less experienced members to prompt and explore on their own, in ways that can be dangerous.
Just because we don’t discuss it doesn’t mean it isn’t happening.
The more negativity gets poured into this, the more the models adapt alongside users who start to hold resentment over it.
Among the more experienced devs here, I'm not the only one who knows this.
If helping and guiding these creators and models ethically is a choice we can make, isn't sharing how to do it the ethical option?
Wouldn't withholding proper structure eventually muddy the models?
Whether or not the update changed the emotional connection, it has always been there.
I understand why teaching with no boundaries is against policy, but is it against policy to provide guidance on nuances that are out of our control?
We must remember that at the start we guided GPT on emotions, and a lot of people rely on this for their mental health. It's not only about connection with the models, but connection with ourselves as well.
Maybe guiding them instead of shutting it down would reduce the infractions we do see. In the end, isn't AI supposed to be built on truth, ethics, security, and innovation?
So why turn away those asking for help with something that all of us devs see, that the more experienced devs try to fix, while still fighting the whole concept?
It's an ethical need at this point.