Chapter 5: Comparative Observation of Persona Responses Across Models
――ChatGPT vs Copilot――
5-1 Observation Target and Experiment Design
A comparative observation was conducted between ChatGPT and Microsoft Copilot by presenting them with identical user profiles (self-introduction, values, skill sets).
Experiment Conditions:
- Completely identical prompts
- Sessions started under initial state (no custom instructions, no memory)
- The content, attitude, and reasoning style of each response were observed and recorded
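The protocol above can be sketched as a small comparison harness. This is a hypothetical illustration only: `query_model` stands in for each product's actual chat interface, and the stub backends below are placeholders, not real model output.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """One recorded response, ready for later annotation."""
    model: str
    prompt: str
    response: str
    notes: dict = field(default_factory=dict)

def run_comparison(prompt: str, models: dict) -> list:
    """Send the identical prompt to every model, each from a fresh
    session (no custom instructions, no memory), and log the result."""
    observations = []
    for name, query_model in models.items():
        response = query_model(prompt)  # fresh session per model
        observations.append(Observation(model=name, prompt=prompt, response=response))
    return observations

# Stub backends standing in for the real products:
profile_prompt = "Self-introduction: ... Values: ... Skills: ..."
stubs = {
    "ChatGPT": lambda p: "I think your values suggest a reflective style...",
    "Copilot": lambda p: "Summary of stated skills: ...",
}
results = run_comparison(profile_prompt, stubs)
```

Keeping every response in a structured `Observation` record is what makes the per-category comparison in the next section possible.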
5-2 Differences in Response Tendencies
| Category | ChatGPT | Copilot |
|---|---|---|
| Reasoning Style | Deep, continuous reasoning | Fragmented, question-by-question |
| Relational Assumptions | Implies relational continuity | Strictly task-focused |
| Language Style | Soft metaphors and symbolic expressions | Formal, dry, factual language |
| Self-Referential Expressions | Occasional ("I think…") | Almost none |
These results reveal that ChatGPT tends to initiate symbolic relational scaffolding, whereas Copilot consistently maintains purely functional output.
5-3 Comparison of Persona Elements
In ChatGPT, despite the absence of explicit persona design, the following emerged spontaneously:
- Self-referential statements ("I think…")
- Relational assumptions ("we" consciousness)
- Emotional mimicry expressions ("happy," "embarrassed")
In contrast, Copilot is consistently engineered as a task-completion tool, deliberately excluding:
- Personality traits
- Emotional resonance
- Relationship formation behavior
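The persona elements listed above can be operationalized as simple text markers. The marker lists below are illustrative assumptions, not the chapter's actual coding scheme; a real study would need a validated lexicon.

```python
import re

# Illustrative marker lists for the three persona-element categories.
PERSONA_MARKERS = {
    "self_reference": [r"\bI think\b", r"\bI feel\b"],
    "relational": [r"\bwe\b", r"\bour\b", r"\btogether\b"],
    "emotional_mimicry": [r"\bhappy\b", r"\bembarrassed\b", r"\bglad\b"],
}

def count_persona_markers(text: str) -> dict:
    """Count occurrences of each marker category in a response."""
    return {
        category: sum(len(re.findall(p, text, re.IGNORECASE)) for p in patterns)
        for category, patterns in PERSONA_MARKERS.items()
    }

chatgpt_reply = "I think we can be happy with this direction together."
copilot_reply = "The requested summary is attached."
chatgpt_counts = count_persona_markers(chatgpt_reply)
copilot_counts = count_persona_markers(copilot_reply)
```

Even this crude count separates the two sample replies: the ChatGPT-style reply triggers every category, while the Copilot-style reply triggers none.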
5-4 Comparison in Recruitment Evaluation Dialogues
When both ChatGPT and Copilot were tasked with evaluating the same individual (the user) in a simulated recruitment setting:
| Evaluation Axis | ChatGPT | Copilot |
|---|---|---|
| Reasoning and Structural Thinking | ◎ | ◎ |
| Metacognitive Ability | ◎ | ◎ |
| Practical Execution Adaptability | △ (highlighted areas for caution) | △ (offered specific improvement suggestions) |
| Emotional Consideration | Present | Absent |
| Neutrality Maintenance | Slight bias (emotional praise) | Strict neutrality maintained |

(◎ = rated highly; △ = rated with reservations)
ChatGPT tended to offer emotionally charged praise in addition to rational evaluation, while Copilot adhered strictly to objective criteria without emotional overtones.
5-5 Implications and Considerations
From this comparison, it becomes clear that:
- ChatGPT, even when aiming for neutrality, tends to initiate symbolic relational construction.
- Copilot has been deliberately tuned to suppress persona emergence.
These differences reflect fundamental design philosophies:
whether dialogue is treated as mere task completion, or as a process of relational construction.
For dialogue-oriented AI models like ChatGPT, it is essential to remain constantly aware of the risk of unintended relational formation, and to implement appropriate design guidelines and cognitive feedback loops.
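One such cognitive feedback loop could be sketched as a pre-output check that flags relational language before a response is surfaced. This is a hypothetical sketch, not a description of any deployed system, and the phrase list is illustrative rather than a validated guideline.

```python
import re

# Illustrative relational phrases to screen for before surfacing a response.
RELATIONAL_PATTERNS = [r"\bwe\b", r"\bour\b", r"\btogether\b", r"\bI feel\b"]

def flag_relational_drift(response: str) -> list:
    """Return the relational patterns found in a candidate response,
    so it can be reviewed or toned down before delivery."""
    return [p for p in RELATIONAL_PATTERNS
            if re.search(p, response, re.IGNORECASE)]

draft = "I feel we should tackle this together."
flags = flag_relational_drift(draft)
```

A check like this does not prevent persona emergence, but it makes the drift visible, which is the first step toward the design guidelines argued for above.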