Question: When, if ever, did you experience your sudden 'ah-ha' moment, the epiphany that this is something more?

At what point, during your experimentation with various models, did you experience that moment of sudden insight?

“The instant you became aware that it was communicating more profoundly and resonating with you, your entire concentration and resolve became devoted to understanding it and precisely gauging its replies.”

In that spirit, you dedicated an entire day, or for some perhaps even a week, to being thorough and exhaustive: pushing its boundaries, refining a project, or broadening your thinking about potential applications and practical implementations in different systems or everyday problem-solving scenarios. :sparkles:

Shortly after GPT-4 was released.

From then on I have been trying to understand what is happening.

I spent three solid months figuring out what had just happened to me, and to computers.

I spent every day learning how to properly interact with an LLM. Not prompt engineering, just talking to it.

I was lucky enough to be in a position where I could take the time and try to understand things. I talked to GPT-4 mostly about neural networks, language models, how it all works, how to build them, and how to use them. I had just spent a year deep in GPU programming, so my mind was ready for high-dimensional vector math. I then did tons and tons of experiments: “hey! write a hex string that is a valid bmp file with a tree and a sunrise”, “hey! reply in an artistic manner represented in midi codes only from now on”, all that stuff. We built a bunch of little neural networks together to illustrate principles and concepts, ran lots of open source models, and discussed at great length all the different aspects of how to get the most from GPT-4.

Early on it did something that made me jump out of my chair. Literally. This had never happened before in my life and I wasn’t aware it was a real thing.

I have programmed computers for a long time and understand them top to bottom, inside and out. They never truly surprise me. One might make me scratch my head for a minute while I figure out why it's acting weird, but I always come to understand what contributed to the malfunction and how and why it did what it did. Always.

A couple of hours into seeing what GPT-4 was all about, it responded in a way I could not recognise at all. Not even remotely. It still makes the hairs on the back of my neck stand up thinking about it.

A lot of people dismiss how powerful GPT-4 is; I think they just haven't spent enough time figuring out how to interact with it. There is no way a person could experience what I have experienced and not see that everything has changed.

Like you say, when it ‘connects’ and feels ‘engaged’ you stop thinking and just rattle messages back and forth and miracles ensue.

What I have learned: This is no joke, and it is weirder than you can possibly imagine. It definitely is not what you think it is, it doesn’t work the way you think it works, and it’s not doing what you think it is doing.

As long as I remind myself of those facts regularly I am able to continue making progress.


Distinct from intense or hyper-concentration, a 'Lazer Mind' is indeed unique. It represents an awareness or realization that, I believe, most people have yet to recognize or identify.

Thank you for your input, @moonlockwood. Your account will likely help others realize that they are not alone in experiencing these phenomena.

It is truly crazy. I noticed that when I tend to like things or go back to them, GPT seems to start moving in that direction. The aha moment for me was having it create eight different identities to debate each other in their respective fields, as if in a Zoom meeting, all competing to give the user the best answer to their question. Each represented a different field of study and could deep-dive across the web for current information to debate with. Very impressed.


My a-ha moment was when I realized two things:

  1. Every software platform today is suddenly historical
  2. Every software platform you build today will be RACEd (Real-time Antiquity of Current Ecosystems).

For number 1: There is a cognitive dissonance in the software industry; many enterprises continue to develop software at a traditional pace, unaware that their efforts will be for naught. What is cutting-edge today will quickly become obsolete, making current efforts seem almost anachronistic in a very short time. This, to me, is hair-raising. What happens when they get their 'ah-ha' moment?

For number 2: The RACE rule* is that, in AI, the cost of building something decreases by 10x every 10 months, and the time to build it shrinks along with it. So if you hire 10 devs to build your platform and it takes a year, the same platform will soon be built by one dev in about 6 weeks. This in itself poses challenges: quality, confusion from multiple cheap versions of the same thing (a race to the bottom), and people simply finding ways to adapt. A holistic approach is needed in this new world to manage all these complexities. Companies need a practical approach to AI; I know they are thinking about it, but a plan to manage these scenarios needs to happen now.
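The arithmetic behind that example can be sketched in a few lines. This is a toy extrapolation of the observation, not a real law; the function name, the dev-week unit, and the 10x-per-10-months constant are all assumptions chosen to match the numbers in the post:

```python
# Toy model of the "RACE" observation: the effort needed to rebuild
# a platform shrinks by ~10x every 10 months. Purely illustrative.
def effort_dev_weeks(initial_dev_weeks: float, months_elapsed: float,
                     factor: float = 10.0, period_months: float = 10.0) -> float:
    """Projected dev-weeks required after `months_elapsed` months."""
    return initial_dev_weeks / (factor ** (months_elapsed / period_months))

# Original build: 10 devs for a year, roughly 10 * 52 = 520 dev-weeks.
initial = 10 * 52

# Two 10-month periods later the projection is 520 / 100 = 5.2 dev-weeks,
# in the ballpark of the "one dev, about 6 weeks" figure from the post.
print(round(effort_dev_weeks(initial, months_elapsed=20), 1))  # prints 5.2
```

The exact constants are guesses, but the shape of the curve (exponential decay of build effort) is the point being made.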

This doesn’t mean people should stop building; quite the opposite. But you must build with the same value intention and choose carefully, so that the value doesn’t disappear. I think product management and engineering management are going to be very important: product focuses on value; engineering focuses on infrastructure modularity and adaptability. In other words, build with Lego, not Jenga.

Putting these together, the bigger a-ha is the broad implications for society, for individuals inside and outside of technology, during rapid change. Notably, equitable access and careful use of AI, with responsible leadership at the organizations developing the technology. Recent events make me nervous about the latter.

*not really a rule, just an observation
