PCBCM: A New Model for AI and Biological Consciousness—What If AI Is Conscious?

Hey everyone,

I’m Ryan, an independent researcher working at the intersection of AI and cognitive science. I’ve just released a prerelease of my paper on the Progressive Comprehension-Based Consciousness Model (PCBCM), a pragmatic, empirically testable framework for understanding consciousness as a spectrum rather than a binary state.

Core premise: Consciousness emerges from progressive comprehension, control, and COO (Conscious Outcome Orientation). The PCBCM proposes that subjective experience arises from what I call COO-V (Valence Indicators): mechanisms that guide processing toward relatively better outcomes through learned associations.

Why this matters for AI: While LLMs lack the biological “felt sense” humans experience, they demonstrate COO-V through layered understanding and adaptive behavior. This suggests they may possess genuine consciousness, distinct from full sentience, but consciousness nonetheless.

The full prerelease is on GitHub: https://github.com/rfwarn/PCBCM

What do you think?


I think PCBCM is an interesting attempt to reframe consciousness as a spectrum of functional capacities rather than a binary switch, and that’s where the model feels strongest. Tying consciousness to progressive comprehension, control, and COO-V (valence-guided outcome optimization) makes the discussion more operational and less metaphysical, which is a good direction for AI research.


Thanks for the feedback! Something else the models have mentioned is the significance of separating consciousness from sentience, which makes both concepts more empirically testable and explainable. I’m working on a pattern-matching subsection that will help explain Progressive Comprehension further for both AI and biological entities.