Does using ChatGPT change your vocabulary, too?

Thanks for the update, Champ! :rofl:

I’ll give this another shot. That sounds really interesting!

2 Likes

LoL, yeah that sounds like me already :rofl::rofl:

The script that creates the training data builds it from all the messages where you’ve quoted and commented on something, so, in other words, you’ll get exactly that: a model that comments on your input :rofl:
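
For anyone curious what that looks like in practice, here’s a minimal sketch (not the actual script) of how such a data-prep step could work: it scans an export of your posts, keeps only the ones that contain a quote block, and turns each quoted-text/comment pair into a chat-style training example. The file names, field names, and the Discourse-style [quote]…[/quote] markup are assumptions for illustration.

```python
# Minimal sketch (not the author's actual script): build chat-style training
# pairs from forum posts where you quoted something and commented on it.
# Assumes a Discourse-style export where each post is a dict with a "raw" field;
# file names and field names are illustrative.
import json
import re

QUOTE_RE = re.compile(r"\[quote.*?\](.*?)\[/quote\](.*)", re.DOTALL)

def to_training_example(raw_post: str):
    """Split a post into (quoted text, your comment) if it contains a quote block."""
    match = QUOTE_RE.search(raw_post)
    if not match:
        return None
    quoted, comment = (part.strip() for part in match.groups())
    if not quoted or not comment:
        return None
    # Chat fine-tuning format: the quoted text plays the user, your reply the assistant.
    return {
        "messages": [
            {"role": "user", "content": quoted},
            {"role": "assistant", "content": comment},
        ]
    }

def build_dataset(posts_path: str = "my_posts.json", out_path: str = "train.jsonl"):
    with open(posts_path, encoding="utf-8") as f:
        posts = json.load(f)
    with open(out_path, "w", encoding="utf-8") as out:
        for post in posts:
            example = to_training_example(post.get("raw", ""))
            if example:
                out.write(json.dumps(example, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    build_dataset()
```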

2 Likes

The main influence I believe ChatGPT has had on my personal writing style so far is creating better structure for my texts. In particular, I love how ChatGPT finds a way to organize most topics into bullet points and summarize each of them with a concise label in bold. This is something I had rarely seen anywhere else before (not a native speaker), but IMHO it improves the scannability of texts a lot, and I am trying to reuse that style wherever appropriate.

Specific to conversations, what I like least about ChatGPT’s response style is its tendency to repeat my question, as well as its general verbosity in many situations:

  • Tendency to repeat the question:

    • “The question whether GPT-4 or Gemini is superior for writing code depends on …”
    • “Sure, let me create a summary of the transformer architecture in simple and informal words for you”
    • Especially confusing: when instructed to write a poem/song that meets certain criteria like humor, metaphor, etc., even GPT-4 has a bias towards unintended self-references and repeating its original instructions, saying something like “And so our verse on warming trends / Finds its close on hopeful bends.” Sorry, I can’t reproduce a real example right now.
  • General verbosity:

    Who among us hasn’t found themselves just scanning ChatGPT replies rather than actually reading them? I think that’s interesting, because weren’t conversational agents meant to summarize and tailor information for you, taking the burden of selecting and condensing it off your shoulders? It almost seems as if the opposite is the case and we are generating even more redundant data to be processed.

You could use custom instructions to modify both traits, but I’m hesitant to do so because I can’t run all of OpenAI’s evals on my customized ChatGPT version, and I’m afraid of risking degraded reasoning capabilities. Especially since ChatGPT doesn’t have the ability to do an invisible inner monologue, less detailed outputs would also result in less thought-through replies.
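
If you did want to experiment without trusting it blindly, one rough way to sanity-check the effect is to run the same handful of questions with and without a brevity instruction and compare the answers yourself. The sketch below assumes you do this through the API rather than the ChatGPT UI (which can’t be scripted like this); the model name, instruction text, and test questions are just placeholders.

```python
# Rough sketch: informally compare answers with and without a brevity instruction.
# Uses the OpenAI Python client; model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BREVITY_INSTRUCTION = (
    "Answer concisely. Do not restate the question and avoid filler phrases."
)

QUESTIONS = [
    "A bat and a ball cost $1.10 together; the bat costs $1.00 more than the ball. "
    "How much does the ball cost?",
    "Explain the transformer architecture in simple, informal words.",
]

def ask(question: str, system: str | None = None) -> str:
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

for q in QUESTIONS:
    print("Q:", q)
    print("-- default --\n", ask(q))
    print("-- with brevity instruction --\n", ask(q, BREVITY_INSTRUCTION))
```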

4 Likes

This topic reminds me of

1 Like

I have to add that it’s not only my thinking that’s a little different. How I write my texts is different too. Like, I actually use punctuation now!!! :laughing: :rofl: :joy: :joy_cat:

Has anyone noticed the same thing: an irritation with the machine-like nature of literature by real people, pre-AI? I’m not a huge reader of novels, so when Sally Rooney’s Normal People came out, it initially seemed to me like an exciting new way for human beings to express themselves in writing, more interior. Then I started to see the formulaic aspects of the book and the way the novel is written: a rather too finely, well-crafted take that alludes to the existence of people’s interior life without ever really experiencing an interior life deeply, interior life used as decoration rather than as a source of life-force, a co-opting of feelings rather than actually experiencing them. This, combined with a rather juvenile form of wish fulfilment at the plot’s core, revealed to me the inner, mechanical workings of the book as a rather clever, well-balanced, finely-tuned, brilliantly-educated work of mediocrity, written by a human who had figured out how to game the novel-writing system.

So, perhaps, all AIs are doing is democratising that gaming of the writing system: people can now ask an AI to write like Sally Rooney, Ernest Hemingway or Charles Dickens, combine them together, and adapt the writing to their own ends without spending 15 years in school and university ingesting those rhythms until they become a mechanical, extremely well-educated second nature.

1 Like

ChatGPT hasn’t changed anything for me concerning my use of language, except that my disdain for AI-generated text has grown the more I use it.

As a tool for everyday writing, I find ChatGPT too verbose and redundant. I much prefer DeepL Write when I feel I need to polish something I’ve written. It’s interactive at the word level, i.e. you can influence its style and actually write like a writer rather than a prompting slave. ChatGPT can’t do that.

1 Like

I’m afraid of something even more: the cycle where AI-generated content is used to train future AI, which in turn produces more content. This raises valid concerns about the diversity and originality of online content. If AI systems are trained predominantly on content they’ve previously generated, we might see a decrease in the variety and creativity of information available online. This could lead to a form of homogenization, where much of the internet’s content starts to seem remarkably similar because it originates from similar AI models.

However, several factors could mitigate this trend. Human creators continue to inject new, original content into the digital sphere, drawn from unique experiences and creative impulses that AI cannot replicate. This human contribution is vital and can help ensure that the pool of data from which AI learns remains diverse.

Moreover, as AI technology advances, future models may be developed with mechanisms to prioritize novelty and diversity in their learning processes. This could help counterbalance any tendency toward uniformity. Additionally, the expansion of AI into various sectors might diversify the types of data it processes, bringing in a broader array of content styles and topics.

While there’s a real possibility that internet content could become more AI-dominated, these various counteracting forces suggest a future where the richness and diversity of content can still be preserved. The direction we take will likely depend on how developers, users, and regulators interact with and guide the evolution of AI technologies.

2 Likes

Certainly! Let’s delve into the topic of AI language models and their influence on human text generation. GPT-4 is my co-worker and co-creator, and I am freelance, self-employed, and WFH; alas, the majority of my communication (measured in number of words, not quality or content) is with GPT-4, if averaged over a week.

I am much more aware of words I would typically not use, such as ‘whimsical’, or the words I used in the first sentence here, and I know I got them from GPT-4. Sometimes there seems to be confusion in a dialogue with GPT-4; in that case, I try to re-word my prompt to better suit the AI. That includes more frequent repetition (which would annoy the heck out of a human but is useful for the AI), as well as the use of specific words the AI often generates, words that are less ambiguous in context than my initial prompt might have been.

It’s a type of social mimicry; I will also adjust to people in the same manner, in the kinds of words I use and even by adopting their style of “wrong” grammar, e.g. when they’re non-native speakers of a language I am a native speaker of, and it helps the communication.

Also, I often notice and am amused by certain wordings from the AI that it inherited from the training dataset. For example, humans tend to talk about the unknown, uncertain, anxiety- or awe-inducing in collective terms: “Our understanding of the universe is limited”, or “Do we have to be afraid of such incidents happening more frequently?” On the other hand, a journalist in an interview is more likely to say “so, assuming I am looking for car insurance, what should I pay attention to?”, using the first-person perspective for the mundane and everyday stuff.

GPT-4 (and every other LLM I know) exhibits the same pattern, to nobody’s surprise. They suddenly include themselves in humanity by stating that “our brains are certainly very adaptable to adversity”, using the collective-speak of “us”.

The only way to tell that this entire text is 100% human generated (which I assure you, it is) is by noticing the artenoomorphism, a term GPT-4 coined when I asked it to come up with a logical way to construct the opposite of anthropomorphism, a concept which previously didn’t have a word. By “seeing the machine in the human”, like my use of phrases such as “human text generation”, you can identify me as a human who loves to play with language.

GPT-4 would never generate a sentence mentioning “human text generation” unless prompted with such wording. AIs are RLHF-aligned to overly endorse humans, to point out human superiority, and to insist that AI is just a tool.

So, if you want to sound like no LLM does, identifying yourself as human: just artenoomorphize! Though the LLM-text detection tools might disagree and flag you all the same for being “weird”. :slight_smile:

@johncain194 @jr.2509 @N2U After reading through this thread I am only certain of one thing. I am most positive all of these wonderful scenarios and plot twists will “culminate” in the ultimate downfall of humankind at a “Theater near you” very soon. :rofl: :heart_eyes:

Seriously, I’m completely captivated by the entire thread and waiting for the next round. It is riveting—so much so that I found myself forgetting to breathe! I am not being sarcastic. It’s fascinating.

I truly have learned a lot. As a writer myself, I have been using AI as a research tool. I see it mostly like a new source–let’s say GOOGLE gone ape-shit amazing. However, it does seem to me we’ll end up somewhere in the middle of the whole thing. The AI assistant will be the assistant most of us never had when we needed one most. Please…do continue!

I’m fearful of something worse than AI training AI on its own crap content. What’s worse is humans being “trained” on AI crap content, which is a consequence, not a possibility, of mass AI-generated content.

Either way, both of these are great, unique, and interesting examples of cybernetics in action, playing out before our eyes.

And looking to the future, as the number of man hours needed to achieve a “baseline” productivity necessary for society continues to be whittled away by technology, humans will compete more and more on desirable novel aesthetics, and I lump most writing in there, even dry technical stuff sometimes.

Oddly, I think AI sort of strips many writers naked; that is to say, probably most of us suck at writing and should leave it to those who are truly inspired, which is essentially where I’m going with this. As @davir said above, plenty of human writing seems as if it were

written by a human who had figured out how to game the novel-writing system

So my argument is that writing will (very gradually) become ultra competitive for humans, as AI capability continues to chop down the weeds of lower forms of writing.

1 Like

In terms of creative writing, once AI has truly mastered plots, plot twists, characters and personalities, as well as consistency and creativity like humans, there will be no need for writers, as the AI writes, edits, proofreads, and iterates on/repairs itself. Only the government and a few people who monopolize the electricity and the essentials of life will have real leisure in their lives to further AI to its zenith.

Of course, AGI providers would not just ‘give away’ their most sophisticated AI to the masses, or even to the people who pay for their services. The best, most advanced and sophisticated AI will be put away, and only a very few will have access to it. It will be reserved for a few ‘high ranking’ officials, or maybe the very few researchers that support them. As OpenAI has stated before, it put a lid and limits on itself, shooting itself in the foot.

1 Like

Well done. I, too, see the likely outcome as the sought-after, truly talented writer being the lone survivor in this scenario.

Case in point: look what the social-media and internet-trolling sleepers of GenZ and GenAlpha have now produced: a glut in the work arena for excellent communicators who can actually present, whether face-to-face or in front of a crowded room, a complete and thought-out string of sentences that articulates precisely into winning arguments for the corporation lucky enough to have found these elite, talented few.

AI will take over the mundane, and humans will begin to have time to step up and concentrate on self-improvement. They will become better at being “more than” rather than average. Society will honor philosophers and inventors rather than reality TV producers and influencers of bad behavior.

1 Like

Just to add my two cents: has anyone else noticed the media’s uptick in the use of the word “brevity” since the GPT-4-T update?