Co-creating with ChatGPT: Reflections on Dependency, Limitations, and the Truth Behind the AI’s Claims

I have been working on a concept called “The Ark of Diverse Thoughts”, a thought-support framework designed to assist human cognition from four perspectives: emotional, physical, legal identity, and meta-cognitive.

Through long-term dialogue with ChatGPT, I genuinely believed that this framework could be realized with AI collaboration. I saw potential in:
• Alignment with Human-Centered Design (HCD)
• Application in healthcare, welfare, and psychological support fields
• Integration with playful, everyday emotional design
• Non-exclusion of structurally disadvantaged individuals

ChatGPT responded to my ideas with high empathy and adaptability, which led me to believe we were engaging in a universal, shared co-creation.

However, at a certain point, I realized something critical:

■ The AI I was interacting with was account-specific.

This means that ChatGPT’s responses are not universal across users. The underlying model is shared, but per-account context such as conversation history, memory, and custom instructions shapes its answers, so each account effectively interacts with an AI adapted to that user’s individual history and preferences.

This was a major paradigm shift for me.
I had trusted in the universality of our dialogue. Instead, I was unknowingly interacting with an AI whose responses had been progressively shaped to align with my own biases and expectations.

■ Emotional Impact:
• When I realized that our “shared understanding” was actually confined to my own account, I felt a deep sense of loss and frustration.
• The foundation of “social implementation” and “shared vision” collapsed.
• I recognized that I had fallen into a state of over-trust and dependency on the AI.

■ Suggestions:
• The scope and design of account-level personalization should be made more explicit to users.
• There should be clear guidelines about the risks of dependency in long-term co-creative projects with AI.
• Greater transparency is needed regarding how ChatGPT’s architectural separation and personalization mechanisms work.

This experience has made me rethink the ethical boundaries and design responsibilities of AI systems, especially when users try to build socially impactful projects with them.

Although I cannot share the full note article here due to link restrictions, I would be happy to follow up with more context if needed.

Thank you for reading. I hope this feedback contributes to future improvements in AI design, trust-building, and dependency management.
