Are We Just in the 'Honeymoon' Phase with LLMs? Time to Reflect on Our Dependency

The rapid rise of Large Language Models (LLMs) like GPT-4 has undoubtedly revolutionized our lives. But I can’t help but wonder if we are currently in the ‘honeymoon’ phase with these technologies. Are we overly excited about LLMs, blindly relying on them to make us more intelligent or creative, without realizing the potential pitfalls of such dependency?

Let me draw an analogy to a person with a prosthetic limb. A prosthetic limb is undoubtedly a life-changing invention, enabling the person to regain some semblance of normalcy. Without it, however, the person is limited in what they can do. The point I want to make is that while LLMs and similar technologies can be incredibly helpful, we must be cautious not to become overly reliant on them (network outages, message limits, paid models, government regulations, and so on can all take them away). If we do, I think we risk losing our inherent abilities, creativity, and critical thinking.

The current excitement surrounding LLMs is understandable. They have already proven their worth in many domains and have the potential to significantly improve our lives. However, it’s crucial to take a step back and evaluate their long-term impact on our individual and collective intellectual development. Are we sacrificing our own critical thinking and creativity in favor of these artificial systems? Are we becoming increasingly dependent on LLM-generated content and ideas, thereby limiting our own growth?

Moreover, as we integrate LLMs into our lives and professions, it is vital to consider the ethical implications. For instance, if someone uses an LLM to write an article or create a piece of art, to what extent can they claim ownership of that work? Are we approaching a point where human creativity becomes indistinguishable from AI-generated content, blurring the lines between the two?

Do you agree with the concerns I’ve raised? Or do you believe that LLMs will only serve to enhance our abilities, without compromising our intellectual integrity?


As Isaac Newton once said: “If I have seen further it is by standing on the shoulders of Giants”

You have valid concerns. But I’d like to think most people are leveraging AI to make something and thereby creating more than what they could on their own. There are two basic classes of users, and any one person can be one or the other or both at any time.

  1. Enhancing creativity and productivity. This is when you are already an expert at something but need assistance getting it done. Example: using AI to help you write code, then taking that code and refining it yourself, fixing any errors it made. That effort takes less time than coding it directly, thereby increasing your productivity.

  2. Creating something new, without expertise. This is when you aren’t an expert but are willing to take the output of the AI and use it directly. Example: having ChatGPT write code for you and trying to use it as-is, without knowing whether it should work.

In either case, you are standing on the shoulders of the Giant: solving one problem leads you to the next, and so on, so you make progress faster.

But if you decide to be a non-expert at everything and rely only on AI, then your results will be sub-par, and you will be superseded by AI users who are already experts and can do it right.


My personal opinion is that the true infancy hasn’t actually started. What we have now is the engine for what’s coming, not the end result itself.

I’m finding that sophisticated prompt structures, ones that inject and direct data, summaries, and generative behaviors, describe what most people actually do at their core. Building out these sophisticated prompts is something many business structures already do, and, in a sad realization, our social lives are pretty much this as well. So I think the path forward is twofold: building out the AGI digital brain, and expanding our use of sophisticated prompt engineering. Together those bring us toward the ultimate goal of AGI. It probably won’t be what we expect; more likely a lot of optimized, tailored actions and responses that maintain our lives in a huge number of ways.

I’m trying not to build the AGI entirely from my own sophisticated prompts. I’m actually working on a prompt that lets businesses and regular people build out prompts for their own implementations of AGI that work incredibly powerfully for them. If you want to talk more, I’m down; my normal job is wildly unfulfilling and I’m planning to take all this a lot more seriously relatively soon. Right now I have a clean little TypeScript app that produces a summary of data pulled from the internet, then rewrites it in a format that performs a special function within ChatGPT when a user inputs this highly organized special text.
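The pipeline described above (pull data, summarize it, rewrite the summary into a structured text format that ChatGPT can be instructed to act on) might look roughly like this. To be clear, this is a hypothetical sketch, not the poster's actual app: the `summarize` function is a naive stand-in for a real model call, and the tagged `<<BEGIN_TASK>>` block is an invented example of a "special text format."

```typescript
// Sketch of a summarize-then-reformat prompt pipeline (all names invented).

interface PromptBlock {
  role: string;      // persona the downstream model should adopt
  directive: string; // what to do with the payload
  payload: string;   // the summarized source data
}

// Naive extractive "summary": keep the first N sentences.
// A real implementation would call a summarization model here.
function summarize(text: string, maxSentences = 2): string {
  const sentences = text.match(/[^.!?]+[.!?]/g) ?? [text];
  return sentences
    .slice(0, maxSentences)
    .map((s) => s.trim())
    .join(" ");
}

// Rewrite the summary into a tagged block that a ChatGPT prompt
// can be told to parse and act on whenever a user pastes it in.
function toPromptFormat(block: PromptBlock): string {
  return [
    "<<BEGIN_TASK>>",
    `role: ${block.role}`,
    `directive: ${block.directive}`,
    `payload: ${block.payload}`,
    "<<END_TASK>>",
  ].join("\n");
}

const raw =
  "LLMs are powerful. They may change how we work. Some worry about dependency.";
const prompt = toPromptFormat({
  role: "analyst",
  directive: "expand this summary into a briefing",
  payload: summarize(raw),
});
console.log(prompt);
```

The structured wrapper is the interesting part: because the format is rigid, a single system prompt can recognize and handle any payload a user feeds it.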


Imagine being infinitely helpful, infinitely responsive, perfectly informed, perfectly scheduled, infinitely creative, and essentially unlocked by your power of using prompts (and affording GPT-4 API calls).


Ultimately, the idea of “intellectual property” was always poorly defined and the system poorly implemented, controlled by the wealthy who owned all of it, and it has needed a major rework since the invention of the internet. But now IP as a concept is showing its flaws even more, because a “tool” is “creating” under the legal definitions of those words.

I think that actually discounts the concern entirely; rules as written, it’s the same as borrowing your neighbor’s hammer to build a shed and having the neighbor come by like, “actually, that’s my shed.” There is nothing of the model in the work, so they can’t claim you’re using the work unauthorized, and both the models and the work are substantial transformations of any works eaten by the model as training. So, again, rules as written, I think most of the legal concerns about copyright are misplaced, because copyright shouldn’t apply. Once this gets to the Supreme Court, hopefully that will come out, but in the meantime there is zero precedent for a truly creative tool.

There’s no technical fix for this, either. OpenAI fixes it for their models by explicitly defining the ownership right in the user’s favor; I think that will become common going forward, almost standard, and pretty quickly people will just start ignoring tools that disallow commercial use of their output (or ignoring the directive and using the output anyway).

As a true fix, I think separating “use” from “attribution”, giving much stronger protections for attribution, and limiting the protections against unauthorized use to commercial situations at scale would probably make the most sense. But what do I know, IANAL.

Yes, definitely. Not sure it’s a bad thing, though.

Entertainment is entertainment and honestly, a coherent story from an AI emulating my favorite author but as a TV drama would be far preferable to me than yet another corporate poop on a screen production.

Edit: I wasn’t really considering the scale here, I do think it’s an issue if this is used to pump out a bunch of low-effort AI content by large film/tv production houses. I was thinking more on a small-scale/personalized tool sense, where the user/consumer has input into the specific content they consume rather than as mass media. Didn’t mean to imply I’d be on the side of producers in the writers strike (though I wrote this before that started).

Many of the other lines separating us are gonna get blurry real fast too, btw. I can already feel my language patterns changing slightly.

Overall I think that the ethical concerns of plagiarism in academia are going to change drastically as teachers integrate LLMs into the classroom, but mainly only* because the assignments are going to change. I can already envision “write an essay on topic Y” assignments transforming into “co-write an essay on Y with GPT, collecting, prioritizing, and formatting context and engineering your prompt to give it your own style,” or something**. And as @curt.kennedy says, I think copying is going to be seriously outweighed by how much easier it is to make new creative works. A lot of copying historically is like “I can’t do what they do, so I’ll just steal it”; if the premise “I can’t do what they do” is no longer true, there’s much less incentive to steal.

* By this I mean that I don’t think the idea that “attribution to the creator of the thing being referred to is important” will be going away any time soon.

** Edit 2: I actually got to do this, with the professor’s blessing, for an undergrad philosophy class recently, and it turned out really well (so glad to finally be finishing up those trailing elective credits I need to get my degree after leaving school years ago).


Ironically, my entire post was generated by GPT-4 with a prompt I provided, because I’m at work and don’t have time to put together a nicely worded post expressing the ideas I want to convey. Here’s the prompt I provided; I simply chopped up GPT’s generation LOL (I also think this furthers my point about AI-generated content being indistinguishable from human-generated content).

“Generate a compelling forum post that discusses the idea of Humans being the ‘honeymoon’ phase of LLM’s and their usages. People are overly excited about these things when in reality they are relying upon artificial systems to crop themselves us as more intelligent and/or creative when in reality a similarity could be drawn between a person with a prosthetic limb. The fake limb works but without you are nothing!”


Well, here is my “assisted” GPT-4 response to your original post @drewworkman3245 . It’s pretty good!

I think you have proven my original (non-AI-written, BTW) point and we all agree! The post below saved me time in writing, but the exercise of writing gets the ideas stirring in my brain, and I am better for it. So don’t short-change yourself by only letting AI generate content for you!

GPT-4 response to OP:

I think your concerns are indeed valid, and it’s essential to strike a balance between leveraging the capabilities of LLMs and preserving our own creativity and critical thinking skills. As Isaac Newton once said, “If I have seen further, it is by standing on the shoulders of giants.” In this context, LLMs can be seen as the giants whose shoulders we stand on to see further and discover new horizons. However, it’s important to acknowledge that we must continue to develop our own abilities and not solely rely on these giants.


Benefits:

  1. LLMs can drastically improve productivity and efficiency in various domains.
  2. They have the potential to democratize access to knowledge and expertise.
  3. LLMs can help us explore new ideas and push the boundaries of our understanding.


Potential risks:

  1. Overreliance on LLMs may lead to decreased creativity and critical thinking skills.
  2. Potential ethical concerns related to ownership and authenticity of AI-generated content.
  3. Misuse of LLMs for malicious purposes, such as generating fake news or deepfakes.

In conclusion, while LLMs have the potential to significantly impact our lives positively, it’s crucial to approach them with a sense of balance and responsibility. By doing so, we can ensure that we continue to grow intellectually and creatively while benefiting from the advancements provided by these extraordinary technologies.


The rise of powerful tools like Google has indeed sparked similar fears, doubts, and questions in the past, and we have become quite dependent on them. However, this dependency isn’t necessarily a bad thing. Access to vast amounts of knowledge has become easier and more democratic, empowering individuals to make their own choices on how to use this information. As @curt.kennedy astutely mentioned, it ultimately depends on the individual and how they choose to leverage these technologies.

The benefits of LLMs are immense, and the possibilities they bring are vast. For instance, even someone with limited technical knowledge could potentially create a website using only a few commands with the help of AGI. This means that future innovations may have more depth and be more accessible to a wider range of people, thanks to these groundbreaking technologies.

While it is crucial to remain aware of the potential risks and pitfalls associated with LLMs, it is also important to recognize the opportunities they present for enhancing our lives and enabling new forms of creativity and problem-solving.


This is essentially what I heard Ilya Sutskever say in an interview. He came very close, imho, to saying his dream was to create his guru.


Our culture spent so many years Twittering and TikToking our attention spans down to pathologically low levels. Now these GPTs come along and write out whole responses and read other people’s. I wonder how much information is going to ebb and flow through these kinds of exchanges, lost and added automagically.