Does using ChatGPT change your vocabulary, too?

Here in the forum, we constantly come across posts that are apparently ‘enhanced’, if not completely written by OpenAI’s language models.

How do we think we can tell? There are some words that the AI uses that we regularly encounter in the responses from ChatGPT.

Stanford researchers have just released a preprint on arXiv with an approach for estimating the fraction of text in a large corpus that is likely to be substantially modified or produced by a large language model (LLM).
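Roughly, that kind of approach models each document's word frequencies as a mixture of a "human" distribution and an "AI" distribution, and estimates the mixture weight by maximum likelihood. A toy sketch of the idea (the two word distributions and counts below are invented for illustration, not taken from the paper):

```python
import math

def estimate_ai_fraction(counts, p_human, p_ai):
    """Grid-search the mixing fraction alpha that maximizes the
    log-likelihood of the observed word counts under the mixture
    alpha * p_ai + (1 - alpha) * p_human."""
    best_alpha, best_ll = 0.0, -math.inf
    for i in range(1001):
        alpha = i / 1000
        ll = sum(
            c * math.log(alpha * p_ai[w] + (1 - alpha) * p_human[w])
            for w, c in counts.items()
        )
        if ll > best_ll:
            best_alpha, best_ll = alpha, ll
    return best_alpha

# Toy setup: "delve" is rare in human text but common in AI text.
p_human = {"delve": 0.01, "other": 0.99}
p_ai = {"delve": 0.20, "other": 0.80}

# Counts generated from a 30% AI mixture: 0.3*0.2 + 0.7*0.01 = 0.067
counts = {"delve": 67, "other": 933}
alpha = estimate_ai_fraction(counts, p_human, p_ai)  # → 0.3
```

With two words this is trivial; the appeal of the corpus-level framing is that it estimates a population fraction without having to classify any single document.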

I definitely did notice this in my own use of the English language, which is not my mother tongue. For example, ‘intricacies’ and ‘meticulous’ have entered my active vocabulary.

Still, these words can only be indicative of a person reading a lot of GPT-generated text; they are no proof. I also notice a dislike of reading text that is supposed to be written by a human but definitely reads like unaltered output from ChatGPT.

What about the other members of the community? Did you notice a change in your own use of language? What’s your opinion on the change in language use driven by AI? And how do you react to text that heavily implies it has been written by an AI?

Link to paper:


Nice. Thanks for sharing.

Upper Echelon just did something on AI and mentioned this. What I found interesting in the video is that in 2023, some of the most commonly used words went UP for that year (likely due to AI-written papers?)…

Having used it to write nearly 1 million words of fiction over the last few weeks, I’ve noticed a lot of AI-isms that I take out. If you read a story where a character “forgets they were breathing,” it’s likely AI-written! Small smile.

ETA: Here’s the video…


I believe using AI has expanded my vocabulary somewhat, although I’m not able to tell you by how much :thinking:

Nearly all my messages on the forum are slightly filtered by AI, including this one, as I think it helps the conversation along when used sparingly. But posts written entirely or in large parts by AI seldom have much to say.


I have already written several manuscripts and unpublished books, along with several published academic texts, using ChatGPT-4. However, I knew instantly when my students utilized ChatGPT, Claude, or any other AI provider. Lol, I have followed this technology since its inception (GPT-3 and before) and know exactly which is which. In addition to using a GPT detector and bypasser, I can gauge someone’s vocabulary strength. ChatGPT often embellishes and ‘repeats’ itself, and some of its blabber doesn’t make sense. For a very long text or script, a human is still needed (the AI has difficulty retaining consistency).
I can even detect the change in many content creators as they begin to adopt AI in their creations. I’m just sad that in the future this AI-infested content will dominate, and people, humans as a whole, will become dumber and be dictated to by AI overlords, including their vocabulary! It’s a grim future…

Yes, using ChatGPT has upgraded my vocabulary to the next level, including in other languages I’m interested in. But ChatGPT-4 is not as flexible in real life as people might say…


I’ll add that I’ve developed a bit of an antipathy towards words that are part of ChatGPT’s default vocabulary and tend to be somewhat overused. It has gone so far that I’ve been engaging in an active “logit bias battle” to avoid them, at least in my own AI-generated outputs. But it’s also made it easier to spot AI-generated output by others.
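For context, the API’s `logit_bias` parameter adds a constant (between -100 and 100) to a token’s logit before sampling; -100 effectively bans the token. A minimal sketch of the underlying mechanics, with a made-up three-word vocabulary and invented logits rather than a real model call:

```python
import math

def biased_softmax(logits, bias):
    """Apply per-token logit biases, then convert logits to probabilities."""
    adjusted = {tok: logit + bias.get(tok, 0.0) for tok, logit in logits.items()}
    peak = max(adjusted.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - peak) for tok, v in adjusted.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Toy logits for three candidate next tokens
logits = {"delve": 2.0, "explore": 1.5, "study": 1.0}

# A -100 bias all but bans "delve", mirroring logit_bias={token_id: -100}
probs = biased_softmax(logits, {"delve": -100.0})
```

With the real API you would map token IDs (obtained e.g. via `tiktoken`) to bias values, rather than strings, and a word can span multiple token IDs, which is part of what makes the “battle” tedious.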


Interesting how we all have different triggers as to what makes us assume there is AI behind a certain piece of text. I definitely wasn’t aware of the breath tidbit @PaulBellow

@N2U are you actually and/or still using your fine-tuned Discourse forum AI? I just imagined how in five years the AI is going to talk like you from five years ago…

@johncain194 I actually didn’t quite get how you arrived at the grim outlook. Earlier you mentioned that GenAI had actually improved your language use personally. Why is it that reading someone else’s AI text makes us look down on it? And I am definitely guilty of this, too!


Because you can actually arrive at that conclusion after reading someone’s book about how AI could overtake humans once chips are introduced and embedded into someone’s brain and blood cells. :rofl: Today, people may laugh at you for that almost surreal, ridiculous idea. But when the government has authority over what you can think and say, and dictates everything, you will be happy to live without worries in the world that AI and the government provide for you, receiving the basic necessities and following their every command.
AI is only a tool, just like a gun. It can be used to find food in the wild, or it can be used to shoot someone. In the hands of the right people, AI can advance humanity; in the hands of evil people, well, good luck with that.
It starts with dictating what you can say, what you can hear, etc., and finally what you can think and feel. Imagine it’s done comprehensively worldwide under a one-system government, with humans “controlled” via an AI-embedded computer chip. If you don’t follow its instructions, well, it can “destroy” or at least interfere with your brain, once brain research has found a way to integrate AI-embedded chips into a human’s brain (Neuralink).


I see you are embarking on a journey to modify the model to your own preferences.

My thought process is that the solution would be a personalized assistant, essentially creating my own individual bubble and reducing the need to think for myself. Because, ultimately, that AI would be a mirror of my own linguistic capabilities, while still being a thing that subtly manipulates my own thoughts.


Interesting thought. I personally would not push it as far as to say I’d want it to reduce my need to think, but more that I’d want an AI that can act as my counterpart and is capable of challenging my thinking and ideas in a productive way. But to your point, yes, I would want it to adopt part of my overall style so that convergence can be better achieved in the process.


My opinion is likely influenced by moderating AI generated spam.

I do understand why anyone, including me, would wish for a really personalized AI assistant. But consider what it means: in real life we adopt the behavior and language of our peers, and here, at the core, we are choosing to be influenced by an AI.

This is apparently also a thing in science. But maybe there it makes actually more sense to base one’s opinion on previous tokens from the training data.


Personally, I noticed it too. But I simply rephrase the message with my mighty human mind and I’m done. :slightly_smiling_face: For certain words, I tell it to use simpler language, or if you want to go for “maximum effort,” ask it to provide synonyms for words you don’t feel like using. Remember that the “ratio” for anything special and unique is 95% human / 5% machine. Just my point of view.


Your point is very valid, and my subconscious has likely been more influenced by AI than my conscious mind is willing to admit. #scary


Have to say it’s like, ah, I am smart. Hehe… it’s such an amazing gift. I do feel myself getting smarter. Consistent breaks are key lol… sorry :laughing: :joy: :sweat_smile: :rofl: Oh, I found this helpful tool today. It could also be helpful if things are a bit off.
“Generate code snippets for prompts one sentence at a time, focusing on clarity and concise instructions. Each prompt should convey a specific task or question clearly and effectively, ensuring ease of understanding and implementation. Limit the length to 3 to 4 sentences per prompt for optimal readability and comprehension. Provide one prompt at a time to facilitate efficient processing and execution.”


I would say, no I don’t notice a change in my own language. But that’s just me.

I speak or write probably 12-13+ hours per day on a normal day, or at least am thinking about what I’d say or write. And while my domain regarding GPT is almost entirely content (pure language) generation, it’s still hard for GPT to affect my style, because its style, structure, syntax, tone, etc. just never gelled with mine in the first place. I’m always wrangling GPT to write how I want it to, meaning my natural style is clearly different from GPT’s.

^ That’s the conscious part of me thinking. Maybe it has affected me unconsciously. My conscious opinion however is that one is oil, the other is water, and they can “mix” but not at a base level.

For me, nope. I still write gibberish lol. But seriously, I think there is a growing bias against AI-learned vocabulary, which I think is unfair if we are going to push AI as a learning aid for students. There’s some tweet I read saying that if they see certain words in an application or something, they’ll immediately reject it. I don’t think that’s good.


You probably need to talk a lot for professional reasons, and GPT is like an advanced note-taking tool and writing assistant?

In contrast, my goal can be summed up as having the model reason the same way I would approach a problem, with the intention of arriving at workable solutions fast.

So, yes, there is some type of similarity. On the other hand, GPT likely doesn’t reason at all, and the cases where the model regurgitates my thoughts are the least helpful messages I can retrieve in a conversation. Because then I get the impression the model is not needed at all.


Anyway, there is one upgrade after using ChatGPT for more than one or two years.

I learn a lot more insults and insulting words to hurl :rofl: :laughing:
Especially at the AI, which always refuses to produce images (even totally harmless ones), always produces errors, or produces something abominable. I wonder how other users can produce such beautiful, quality images, even with the same prompt! I can’t even share the conversation because it always returns an error.

While in reality I am very patient, I get really impatient with OpenAI and its API, which almost always generates errors, outright refuses to follow the prompt, generates something unwarranted, hallucinates answers, etc. And lastly, I cannot even export my own data or load past conversations :rofl:


Lol. I haven’t seen you write gibberish yet.
It’s kind of mind-boggling, though, how much time one can spend getting the model to reply the way it’s supposed to, yet in interactions with other humans it’s still wise to hide that we are using the tool to express our thoughts.


I did use it for a bit; it’s just as bad at following instructions as I am :rofl:

In many ways it wasn’t as effective a writing tool as simply writing the text myself, because while it was able to mimic my writing, it wasn’t able to reason or do any of the other “thinking” that I normally do before I write a response.

But in some ways it was the AI interaction that influenced me the most. It’s not every day you get to look at yourself through a mirror of words, and the experience is quite surreal. I think it has encouraged me to expand my vocabulary.

For anyone out of the loop, I created a tutorial about how to fine-tune GPT to write like yourself using your posts on the forum:

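The gist of the fine-tuning route is turning your own posts into chat-format training examples. A hypothetical sketch of that preprocessing step (the persona string and the `(prompt, reply)` pairing are my own illustration, not the tutorial’s exact code):

```python
import json

def posts_to_jsonl(post_pairs, persona="You write in the style of forum user N2U."):
    """Convert (prompt, reply) pairs into chat-format fine-tuning lines (JSONL)."""
    lines = []
    for prompt, reply in post_pairs:
        example = {
            "messages": [
                {"role": "system", "content": persona},    # fixed persona prompt
                {"role": "user", "content": prompt},       # the post being replied to
                {"role": "assistant", "content": reply},   # your actual forum reply
            ]
        }
        lines.append(json.dumps(example))
    return "\n".join(lines)

# One training example per forum reply you wrote
jsonl = posts_to_jsonl(
    [("Does AI change your vocabulary?", "I believe it has expanded mine somewhat.")]
)
```

The resulting JSONL file would then be uploaded to a fine-tuning job; the exact API calls are covered in the tutorial itself.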

Between friends, work, love, and personal projects, I just keep myself busy. I also often think, and try to write, as if I’m having a conversation with someone.

I use GPT (well, Opus now) as a coding assistant, and I also have GPT incorporated into workflows for low-grade (easy) subject matter content creation. So in a way, like you, I don’t really “converse” with AI… I make it do stuff for me, how I want.