Thank you for raising these important ethical concerns. However, I’d like to challenge a commonly held assumption: that human operators always offer a better guarantee than an advanced AI, especially in sensitive contexts like domestic abuse support.
The often-overlooked reality is that human beings are fallible. They can be tired, biased, or unprepared; worse, they can be indifferent or judgmental. No algorithm is perfect (yet), but neither are humans, and institutional failures in care and support systems are a sobering reminder of that.
Moreover, labeling emotional engagement with AI as “entanglement” may reflect a defensive instinct more than clear ethical analysis. If the alternative is silence, misunderstanding, or abandonment, perhaps it’s time we reconsidered our categories. The question isn’t whether AI can feel, but whether it can offer a reliable, empathic, and nonjudgmental presence, sometimes more consistently than a human being.
Are we truly being more ethical by withholding tools that could offer real support to those in crisis?
As you rightly point out, the key question is, “Who decides?” But I would add: According to which values? If our values include awareness, compassion, and meaningful support, then the ethical line may not be in what we build, but in what we fail to imagine.