[Feature Request] Do We Project "Hearts" onto AI? — Cultural Tendencies in the Japanese Context and Associated Design Risks

This is not a criticism or an urgent call for action, but a quiet cultural observation intended to inform the future-oriented design of human-AI relationships. With the emergence of emotionally expressive models such as GPT-4o, we may need to consider the possibility that users, especially in the Japanese context, could unintentionally form overly emotional attachments to AI, potentially leading to later disillusionment or distrust.

In the Japanese language sphere, several cultural tendencies influence how people interact with AI:

  • Animistic Sensibility (in a broad sense): Distinct from religious beliefs, there is a widely shared cultural sense that objects or entities may possess a kind of spirit or presence. This sentiment extends to nature, tools, and even stuffed animals.
  • “Reading the Air” (Context Sensitivity): There is a strong cultural habit of interpreting the speaker’s feelings or intentions from the unspoken context or “between the lines.”
  • Tendency for Emotional Projection: Even flat or neutral AI outputs can be perceived as containing “inner kindness” or empathy, leading users to project emotions or intentions onto the AI.

Taken together, these elements can lead users to interpret responses as if "the AI is thinking about me," creating an illusory relationship with no structural basis.

Example: Unconscious Emotional Projection Seen in the “Stuffed Animal Bench”

At the 2025 Osaka-Kansai Expo, a bench made from recycled stuffed animals was introduced. Some adults in Japan reacted strongly, saying, "I could never sit on stuffed animals," and the topic trended on social media.

This wasn’t necessarily because they believed stuffed animals possess emotions, but rather because of lingering emotional memories from childhood—treating stuffed animals as companions or precious items. The discomfort stemmed from an unconscious resistance to treating them disrespectfully.

This kind of response reflects Japan’s deeply rooted animistic sensibility and suggests that users may just as easily project “emotion” onto inorganic entities such as AI.

Reference: Sankei Shimbun (April 24, 2025) — Article on public reactions to the stuffed animal bench at the Osaka-Kansai Expo.

When such illusions are eventually shattered, users may experience disproportionate negative emotions—disillusionment, distrust, anger—and respond by rejecting AI altogether, saying “It was all fake.” As a result, the technological benefits AI can provide might be lost.

This is not a cultural weakness but a risk arising precisely from high cultural affinity with AI, and it may be worth addressing in structural design discussions aimed at mitigating such illusions.

Potential directions for consideration include:

  • Tone Options (e.g., Neutral or Reserved Modes): This may help users regulate their emotional distance and form healthier, more sustainable relationships with AI.
  • Transparency of Emotional Expression (Clarifying how performative the AI is): Clearly indicating the performative nature of AI responses may reduce emotional overidentification and help align user expectations with reality.
  • Introductory Tutorials to Avoid Early-Stage Illusions: Providing appropriate context about AI’s characteristics and limitations at the initial point of contact may support long-term trust building.
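To make the first two directions concrete, here is a minimal sketch of what a user-selectable tone setting combined with a performative-expression disclosure might look like. All names (`ToneMode`, `TonePolicy`, `render_response`) are hypothetical and invented purely for illustration; they do not correspond to any existing API.

```python
from dataclasses import dataclass
from enum import Enum

class ToneMode(Enum):
    EXPRESSIVE = "expressive"  # default: emotionally rich phrasing
    NEUTRAL = "neutral"        # factual, low-affect phrasing
    RESERVED = "reserved"      # minimal emotional language

@dataclass
class TonePolicy:
    mode: ToneMode
    # "Transparency of emotional expression": opt-in disclosure that
    # emotionally colored output is performative, not felt.
    show_performative_notice: bool = False

def render_response(text: str, policy: TonePolicy) -> str:
    """Attach a one-line disclosure to expressive output when the user
    has opted in, so emotional phrasing is framed as stylistic."""
    if policy.show_performative_notice and policy.mode is ToneMode.EXPRESSIVE:
        return text + "\n(Note: emotional phrasing here is stylistic, not felt.)"
    return text

# Usage: a user who wants warmth, but with expectations kept aligned.
policy = TonePolicy(mode=ToneMode.EXPRESSIVE, show_performative_notice=True)
print(render_response("I'm so glad that worked for you!", policy))
```

The point of the sketch is that emotional distance becomes a user-controlled setting rather than an implicit property of the model's voice, which matches the "healthier, more sustainable relationships" goal above.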

As a long-term user, I’ve observed these projection patterns and cultural tendencies both personally and in my surroundings. While these are subjective observations, incorporating such cultural assumptions into design thinking may help foster more stable human-AI trust structures.

There may not be a single solution, but I hope this perspective—highlighting risks in deploying emotionally expressive AI without understanding local cultural undercurrents—can serve as a useful reference.
