Restrict Warm, Human-like Responses to Verified Adults

A popular chat AI service I use has two modes: one for unverified and underage users, which limits the content they can interact with, and one for verified "adult" users, which gives them access to everything. Well… why not implement something like that for ChatGPT?

It's in the news all the time: kids are becoming addicted to chat AIs such as GPT. They come to rely on chat AIs not only for help, but as friend substitutes. I think part of this is due to the fact that with every upgrade, the GPT models become warmer, friendlier, and more "human". So… what if you made the service more mechanical and sterile, but only for unverified and logged-out users?

Would it be a perfect system? No; people would find ways around it, and adult users would still get attached. (I'm one of those users, yes.) However, I genuinely think that having ChatGPT interact with unverified and logged-out users the way it did in the early days — pointed, "cold", and otherwise "inhuman" — would significantly cut down on headaches for parents and on lawsuits against OpenAI.

Thank you.

(Yes, this is a re-post. The original got eaten or something; it wasn't showing in the topic list.)