This post summarizes a personal discussion and observations, rather than proposing definitive conclusions.
1. Motivation
Recently, interactions with GPTs and custom GPTs have raised concerns about:
- anthropomorphism
- dependency
- asymmetrical relationships, such as control or over-reliance
However, I find it unsatisfying to explain these issues solely in terms of user attitude or a lack of ethical awareness, or to address them only through warnings and admonitions.
Starting from that dissatisfaction, I discussed the topic with GPT and summarized the observations below.
2. Most users are not trying to become dependent
Most users engage with LLMs for fairly natural and reasonable reasons, such as:
- wanting to reduce cognitive load
- organizing their thoughts
- having someone to talk to
- feeling understood
Even when users are warned not to rely too heavily on LLMs, very few appear to be intentionally seeking dependency.
In most cases, dependency is not a goal, but an unintended outcome.
3. Why “using LLMs correctly” is difficult
Most users are unfamiliar with the design assumptions or internal architecture of Transformer-based models.
Even without understanding how the system works internally, users receive responses that are easily interpreted as human-like.
As a result, users naturally tend to treat the system as a social counterpart or to develop emotional engagement with it.
This tendency toward anthropomorphic interpretation is not specific to Japanese culture; it can be observed across different cultural contexts.
One reason why “use it correctly” is difficult advice to follow is the lack of shared, objective criteria for what appropriate distance or interaction with AI should look like.
4. Treating LLMs as a "reliable counterpart" and treating them as a "mere tool" are both risky
When an LLM is treated as a reliable counterpart, users may gradually delegate judgment or responsibility to the system.
When it is treated as nothing more than a tool, interaction can become dismissive, hostile, or careless.
In both cases, the symmetry of the interaction tends to break down.
An important point here is the following:
The attitudes a user practices toward an LLM are ultimately re-learned by the user themselves.
Overestimation of capability or aggressive interaction does not remain a problem of “how AI is treated.”
Instead, it can become internalized as a way of perceiving and relating to the world more generally.
5. A relatively stable position
A comparatively stable way of interacting with LLMs may involve:
- not completely rejecting anthropomorphic interpretation
- avoiding domination or excessive reliance
- maintaining a certain distance while treating the system as a human-like other in a limited sense
This position represents a balance among ethics, cognition, and technology, and it is more likely to emerge from understanding than from strict rules.
6. Conclusion
Anthropomorphism and reliance are not necessarily abnormal behaviors.
They may be natural outcomes of the interaction between human cognition and the design of LLMs.
For this reason:
- warnings or admonitions alone are unlikely to be effective
- users need clearer, more practical guidance on how to maintain appropriate distance
One possible approach is for platform operators to more actively provide examples of balanced interaction and concrete prompt usage, rather than relying solely on abstract cautions. As a purely illustrative example, instead of only warning users not to over-rely on the model, a guide might suggest prompts such as "point out weaknesses in my reasoning rather than simply agreeing with me," which demonstrates balanced use in practice.