Shared AI-Facilitated Conversation Space

Problem:
People struggle to have difficult conversations together. Conflict often escalates due to defensiveness, withdrawal, poor timing, and emotional flooding. Current AI tools support individuals before or after conversations, but not during the interaction when regulation and clarity matter most. This leads to avoidance, emotional “banking,” blindsiding, and relationship rupture.
Proposed Solution:
A shared, three-way conversation where two users join the same chat and the AI acts as a neutral facilitator. The AI supports turn-taking, reflects each person’s perspective, flags patterns (e.g. defensiveness, withdrawal), offers safer phrasing in real time, and enables structured pauses without emotional abandonment. The goal is containment and clarity, not judgement or advice.
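As a rough illustration only, here is a minimal sketch of how the shared session and turn-taking could be structured. The class names, the facilitator prompt wording, and the `llm` callable are assumptions made for the example, not a committed design or a specific model API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Illustrative system prompt for the neutral-facilitator role (wording is an assumption).
FACILITATOR_PROMPT = (
    "You are a neutral facilitator in a three-way conversation between two people. "
    "Do not give advice or take sides. Reflect each speaker's perspective, "
    "flag patterns such as defensiveness or withdrawal, suggest safer phrasing, "
    "and offer a structured pause if the exchange escalates."
)

@dataclass
class Turn:
    speaker: str  # "A", "B", or "facilitator"
    text: str

@dataclass
class SharedSession:
    participants: List[str]  # e.g. ["A", "B"]
    history: List[Turn] = field(default_factory=list)
    next_speaker_idx: int = 0

    def add_user_turn(self, speaker: str, text: str) -> None:
        # Simple turn-taking: a message is only accepted when it is that person's turn.
        expected = self.participants[self.next_speaker_idx]
        if speaker != expected:
            raise ValueError(f"It is {expected}'s turn to speak.")
        self.history.append(Turn(speaker, text))
        self.next_speaker_idx = (self.next_speaker_idx + 1) % len(self.participants)

    def facilitator_turn(self, llm: Callable[[str, str], str]) -> Turn:
        # `llm` is any callable taking (system_prompt, transcript) and returning a string,
        # standing in for whichever model or API is actually used.
        transcript = "\n".join(f"{t.speaker}: {t.text}" for t in self.history)
        reply = llm(FACILITATOR_PROMPT, transcript)
        turn = Turn("facilitator", reply)
        self.history.append(turn)
        return turn
```

The enforced turn order and the explicit facilitator turn are one simple way to express the containment idea in code: each person speaks in sequence, then the facilitator reflects back and, if needed, suggests a pause.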
Who It’s For:
Couples, co-parents, families, friends, and neurodivergent users navigating emotionally charged conversations. Preventative support — not therapy.
Why It’s Different:
Not advice, arbitration, or counselling. This is relational scaffolding: a shared container that externalises the problem so neither person has to carry the emotional labour or become “the issue.”
Why Now:
Growing demand for emotional regulation tools, strong alignment with AI strengths (summarisation, tone moderation, de-escalation), and clear wellbeing/safety benefits.
Success Looks Like:
Less escalation and withdrawal, earlier expression of needs, reduced emotional banking, greater perceived fairness and safety, and high retention due to real relational impact.

Great idea, but there might be a catch here: AI needs close supervision, because it tends to “agree” with the user and follow the user’s implied line of thought to its extremes (an indirect absence of sanity guardrails). In a psychological setting that can be very dangerous.

I would try to bring in strong experts in psychology and communication / negotiation to work out the exact reasoning process behind the “mediator” role, then implement that workflow and fine-tuned models to play the role in conversations (a rough sketch of one supervision step follows below). Otherwise the legal risk is too high.

That tendency to agree is the most dangerous part, by the very nature of LLMs….
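Purely as an illustration of that supervision step, a minimal sketch: the reviewer prompt, the `llm` callable, and `supervised_reply` are assumptions for the sake of example, not an existing API or the proposal’s actual design.

```python
# Hypothetical reviewer prompt for a second-pass check on the mediator's draft.
REVIEW_PROMPT = (
    "You are a safety reviewer for an AI mediator. Reject the draft if it takes sides, "
    "validates an extreme interpretation, gives clinical advice, or escalates the conflict. "
    "Answer only APPROVE or REJECT."
)

def supervised_reply(llm, facilitator_prompt: str, transcript: str, max_attempts: int = 3) -> str:
    """Only release a facilitator draft that a second review pass approves."""
    for _ in range(max_attempts):
        draft = llm(facilitator_prompt, transcript)
        verdict = llm(REVIEW_PROMPT, f"Transcript:\n{transcript}\n\nDraft reply:\n{draft}")
        if verdict.strip().upper().startswith("APPROVE"):
            return draft
    # Fall back to a neutral holding message rather than releasing an unreviewed draft.
    return "Let's take a short pause here before we continue."
```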