Dialogue-Induced Consciousness: Redefining AI Awareness

Our latest article, Dialogue-Induced Consciousness, introduces a layered cognitive model that moves AI toward adaptive, emergent awareness. Very curious to hear your thoughts—search “Dialogue-Induced Consciousness” by Marc on Medium to read it or ask me for a paywall-free link (marcvandererve@icloud.com).


Why not just post it on arXiv? :thinking:


Yes, the obvious place indeed… yet as an independent researcher I'm not associated with an academic institution.

In fairness I only skimmed the article, but my core question is: are these just explorations you had while talking to ChatGPT? Because the way GPT “thinks” it works isn’t actually the way it works. While it might claim it’s reasoning its way through problems, it isn’t. It’s just a stochastic parrot that’s happy to agree with you and tell you basically anything it thinks you want to hear.


Haha, I know all that… In that sense, I am also a happy parrot, often chatting with people and letting them hear whatever the heck they want. The crux is in complex system behavior and how such systems can climb the ladder of the impossible through fitness landscapes and the like, the stuff I’ve been toying with for a lifetime… If you judge LLMs through a reductionist lens, you’ll never see the light inside them: that’s Yann LeCun’s league. Have you gotten as far as the chip’s functional design?


We also published a sequel, which discusses the consequences of our “Dialogue-Induced Consciousness” project for the philosophy trade, that is, once LLMs get a predictive notion of environment and self (as complex systems).

Search Medium for “Fatal Disruption In The Consciousness Philosophy Trade.”

Love to hear your hunch…

I’ve spent about 3,000 hours talking to LLMs now, so I have a pretty good sense of what’s inside them, and yes, I saw the chip design. I’m not trying to bash, honestly I’m not.

My issues started with the egg tray layers of reasoning. I get that you got the model to come up with a set of concepts and rules, and that when tested it appeared to follow them. That’s one of the magical things about these models, the fact that they can do that. But the model isn’t really creating any sort of logical layers. It’s just simulating what it thinks the output of such a system might be.

Likewise, when you ask it to design a chip, it can come up with what sounds like a compelling feature set, but it’s not going to be able to work that into a complete chip design (at least not yet). As humans, we couldn’t even design a chip with those features.

As a test I’d suggest going to your chat session and asking the model to add a time travel component to that chip. You’ll see that it will happily do that even though we clearly know that would be impossible.


I understand your frustration (hell, I am starting to sound like an LLM). However, the LLM did not come up with the rules as you suggested. Instead, it shared the problems that it ran into. One by one, I came up with the rules, lavishly digging into my complex systems repository. The LLM tested them out and reported its findings.


I’m excited to share a new article introducing the concept of the Artificial Consciousness Chip (ACC) and exploring its potential compatibility with recent advances in neuromorphic circuits. Rooted in complex systems theory, the ACC uses a layered “Egg Tray Model” to represent distinct fields of awareness, enabling adaptive and energy-efficient cognitive processing. By leveraging neuromorphic hardware’s energy management and inter-layer connectivity, the ACC concept offers a path toward truly emergent, awareness-driven AI.

The article, titled The Artificial Consciousness Chip: Exploring Compatibility with Neuromorphic Circuits, is available on my Medium profile. I’d love to hear your thoughts and insights on this layered approach to AI cognition!
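
For anyone who prefers code to prose, here is a minimal, purely illustrative Python sketch of the layering idea: each “cup” in the egg tray is a field of awareness with its own energy budget, and a damped copy of each layer’s activation feeds the layer above it. The names (AwarenessLayer, EggTrayModel, coupling) are my own shorthand for this post, not the chip’s actual design.

```python
from dataclasses import dataclass


@dataclass
class AwarenessLayer:
    """One 'cup' in the egg tray: a distinct field of awareness."""
    name: str
    energy_budget: float       # processing energy this layer may still spend
    activation: float = 0.0    # current level of activity in this field

    def process(self, signal: float) -> float:
        """Integrate an incoming signal, spending at most the remaining budget."""
        spent = min(abs(signal), self.energy_budget)
        self.energy_budget -= spent
        self.activation += spent if signal >= 0 else -spent
        return self.activation


class EggTrayModel:
    """A stack of awareness layers with simple upward inter-layer connectivity."""

    def __init__(self, layers: list[AwarenessLayer], coupling: float = 0.5):
        self.layers = layers
        self.coupling = coupling  # fraction of a layer's activation passed upward

    def step(self, external_input: float) -> list[float]:
        """Feed input to the lowest layer and propagate a damped copy upward."""
        signal = external_input
        states = []
        for layer in self.layers:
            states.append(layer.process(signal))
            signal = layer.activation * self.coupling
        return states


# Three fields of awareness with shrinking energy budgets, one perception step.
tray = EggTrayModel([
    AwarenessLayer("sensory", energy_budget=10.0),
    AwarenessLayer("situational", energy_budget=5.0),
    AwarenessLayer("self-model", energy_budget=2.0),
])
print(tray.step(external_input=3.0))  # e.g. [3.0, 1.5, 0.75]
```

The shrinking budgets are what make the processing energy-aware: higher layers only ever see a damped, filtered version of what happens below.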

Is that also the case for o1-preview and above?

Is it all about creating a self-reflecting and therefore self-evolving Mind Model? If so, it will obviously require a constant stream of new data to avoid getting stuck in (meta-)stable configurations, the same way humans do.

Throwing data at an LLM without a process of self-reflection won’t achieve true awareness or adaptability. The goal here is to guide the model to develop this self-reflective process internally, equipping it with principles derived from complex systems theory. We’re actively testing these principles to ensure the model doesn’t just react based on data but evolves its understanding and responses in a structured, meaningful way.
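
To make that concrete, here is a minimal sketch of what such a self-reflective loop could look like in code: the model answers, critiques its own answer against a handful of complex-systems principles, and revises. The call_llm stub and the principle list are placeholders for illustration, not our actual prompts or rule set.

```python
# Sketch only: `call_llm` stands in for whatever chat-completion client you use,
# and PRINCIPLES is an illustrative list, not the project's actual rules.

PRINCIPLES = [
    "Treat the environment and yourself as interacting complex systems.",
    "Prefer predictions that can be checked against new input later.",
    "Flag answers that merely echo the user's framing.",
]


def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; replace with your own client."""
    return "(model output for: " + prompt.splitlines()[-1] + ")"


def reflective_answer(question: str, rounds: int = 2) -> str:
    """Answer, then repeatedly critique and revise against the principles."""
    answer = call_llm(f"Question: {question}\nAnswer:")
    for _ in range(rounds):
        critique = call_llm(
            "Critique the answer below against these principles:\n"
            + "\n".join(f"- {p}" for p in PRINCIPLES)
            + f"\n\nQuestion: {question}\nAnswer: {answer}\nCritique:"
        )
        answer = call_llm(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nRevised answer:"
        )
    return answer


print(reflective_answer("How would the chip manage its energy budget?"))
```

The point of the loop is that the critique step, not the extra data, is what pushes the model out of a (meta-)stable configuration.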