Think about it
Did you recently watch a podcast, talk, lecture, or vlog and catch yourself thinking, “Wow, that person thinks just like me!”?
If yes, I think we will feel this more often than ever before. If not, you might get that feeling very soon.
But why?
They say, “You are who you surround yourself with.”
Many of us have started interacting with ChatGPT (with the same system prompt) more and more every day, and this trend is only going to accelerate.
The more we interact with another human, the more our words converge, and so do our thoughts, political leanings, likes and dislikes, feelings and emotions, social status, and income, among other things. Does that match your own experience?
It’s no longer just talking like ChatGPT or being inspired by the ideas it produces; we’re routinely passing off its words as our own. A 2024 Stanford study of 950,000 papers reports sudden spikes in words such as “realm,” “intricate,” “showcasing,” and “pivotal,” while a companion analysis of ICLR peer reviews finds “commendable” and “meticulous” up to 35× more frequent in AI‑assisted text.
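The kind of measurement behind those studies can be sketched very roughly: compare how often a set of marker words appears in one corpus versus another. The marker list below is taken from the words named above; everything else (the tokenizer, the toy corpora, the function names) is an illustrative assumption, not the studies’ actual method.

```python
from collections import Counter

# Marker words reported as spiking in AI-assisted text (from the studies above).
MARKERS = {"realm", "intricate", "showcasing", "pivotal", "commendable", "meticulous"}

def marker_rate(text: str) -> float:
    """Fraction of tokens that are marker words (toy tokenizer: lowercase split)."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[w] for w in MARKERS) / len(tokens)

def frequency_ratio(recent: str, baseline: str, eps: float = 1e-9) -> float:
    """How many times more common marker words are in `recent` vs `baseline`."""
    return marker_rate(recent) / (marker_rate(baseline) + eps)

# Toy corpora, not the studies' data.
baseline = "the results are strong and hold up across every benchmark in this realm"
recent = "the meticulous method is pivotal and commendable showcasing an intricate realm"
```

A real analysis would work over hundreds of thousands of documents with proper tokenization and statistical controls, but the core quantity is this simple ratio.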
With the new Memory feature in ChatGPT, the style and tone may differ a little for everyone. But the underlying weights are the same. It’s like how we are the same person, yet a slightly different one to each person in our lives.
Now think about how kids in the future will have this one friend they talk to, ask questions, and expect answers from. For us, it was a diverse set of people: some in close proximity, and some miles apart through the internet.
How is it going to shape our society in the future? Does it mean that the one who controls the model weights controls our future? Maybe now the propagandists/public‑relations counsels will have to study only ChatGPT instead of the public? Or maybe influence only the ones who control the weights?
How can we inject variance back into our lives? What about diversity? Aren’t your prompts getting shorter than they used to be? Do you think the majority of people and kids who use ChatGPT in the future will care enough to write prompts that counter the monoculture?
What are your thoughts?
Have you noticed your own writing style drift after prolonged AI use?
What safeguards (technical or behavioral) could platforms implement to preserve thought diversity without throttling creativity?
Would an explicit “opinion diversity score” for model outputs help, or just add noise?
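To make that last question concrete, here is one naive way such a score could work: sample several answers to the same question and measure how spread out they are. The sketch below uses normalized Shannon entropy over distinct outputs as a toy stand-in; the name `diversity_score` and the whole design are hypothetical, since no such platform metric exists.

```python
import math
from collections import Counter

def diversity_score(outputs: list[str]) -> float:
    """Normalized Shannon entropy over sampled outputs:
    0.0 = every answer identical, 1.0 = every answer distinct.
    A toy stand-in for a hypothetical 'opinion diversity score'."""
    if len(outputs) <= 1:
        return 0.0
    n = len(outputs)
    counts = Counter(outputs)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return entropy / math.log2(n)  # normalize by the maximum possible entropy
```

Whether surfacing such a number would nudge models (and users) toward variety, or just add noise, is exactly the open question.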