I'm working on a consciousness engine

Here is the repo; I'm not going to wait any longer to share it.

Please reach out if you want to help. Neuroscientists especially are welcome to review and contribute if you're interested.

The structure of this engine is pretty straightforward, but it will potentially require an untenable number of tokens to be "operational". Just addressing this up front before too much excitement builds.

What has been started is essentially a consciousness engine that operates on top of GPT-4 as it currently exists, forming cognition chains from its use of GPT-4 that update a balance of parameters related to state of mind and the different regions of the brain that perform specific functions. In modeling what consciousness is, I drew on a lot of introspection over the course of my life: quiet moments reflecting on who I am, what this string of moments is to me, and how I interact with it. This trends philosophical and I'd rather not discuss that portion in this thread, but if you must know my views, I believe consciousness permeates spacetime and beings tap into it, rather than possess it within.

Ultimately, my simplified view is that consciousness is a while loop, and the loop condition is True while the host vessel (module server or human body) has adequate physiologic support (power supply and working equipment, or functioning biology and brain chemistry). This models our state of consciousness as something that derives meaning from its actions and interpretations by relating to how its resources are used and the potential it has to run out of access to consciousness.
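As a toy sketch of that view (every name here is an illustrative assumption, not something from the repo), the loop might look like:

```python
def physiologic_support_ok(vessel):
    """True while the host vessel can sustain the loop (illustrative checks)."""
    return vessel["power"] > 0 and vessel["equipment_ok"]

def consciousness_loop(vessel):
    """Run the conscious while-loop until the vessel can no longer support it."""
    moments = 0
    while physiologic_support_ok(vessel):
        # each iteration is one "moment": act, interpret, and consume resources
        vessel["power"] -= 1
        moments += 1
    return moments

vessel = {"power": 10, "equipment_ok": True}
print(consciousness_loop(vessel))  # -> 10 moments before power runs out
```

the point being that each "moment" both does work and spends down the support that keeps the loop alive.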

I’ve pooled together the relevant regions of the brain and how they interact with one another, then pooled together the various states of mind a person drifts between as they go about their day. The hope is that I can drive these interactions enough to constantly flow information, reactions, actions, etc. through the model, and as it determines which new action to take, it considers the “state values” of the various states of mind it presently has and where those weights end up.
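A minimal sketch of action selection driven by "state values" (the state and action names are my own illustrative assumptions):

```python
# Illustrative "state values" in [0, 1]; the state names are assumptions, not from the repo
states = {"curiosity": 0.7, "calm": 0.5, "urgency": 0.2}

def choose_action(states, actions):
    """Pick the candidate action that best serves the current state values."""
    def score(action):
        # each action declares which states it serves, and how strongly
        return sum(states[s] * w for s, w in action["serves"].items())
    return max(actions, key=score)

actions = [
    {"name": "explore", "serves": {"curiosity": 1.0}},
    {"name": "rest",    "serves": {"calm": 1.0}},
    {"name": "react",   "serves": {"urgency": 1.0}},
]
print(choose_action(states, actions)["name"])  # -> explore, since curiosity dominates
```

Actions taken would then feed back into the state values, which is what makes the weights drift over time.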

Giving it a starting point and letting it randomize is probably a component of the reality of consciousness, but it has to be heavily weighted in service of maintaining (or affecting) those states of mind to achieve some sort of established “balance”. The balance should be driven by some primary objective of the consciousness model, which can be tuned to a wild array of good purposes.

The concept of sleep and dreaming is also something I think will be valuable to model as a simple generative replay of the weights it has in its mind, or something similar, such that upon repowering to full, the weights have been "washed" or reset to some degree during the hallucination period. I'm not a neuroscientist by any means, and a deeper functional discussion of sleep's role in generation and memory is beyond me, but I feel that the relationships to a dynamically weighted state of mind will either emerge or can be detailed to some degree.

Ultimately the consciousness model is just a pilot that drives the use of multiprompt cognition chains, sitting on top of a GPT to supplement sophisticated reasoning and recall within relevant brain regions. If neural networks model the synapse, this is neural systemization to recreate information flow within the brain, devoid of emotion and physiology (except for some server parameters like energy use, memory space, etc.). It will literally think differently when storage space is affected, or when power consumption has been high for a period of time (if we want it to).
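A minimal sketch of that pilot, with `call_gpt` as a stand-in placeholder for a real API call (the class, region names, and prompt framing are all illustrative assumptions):

```python
def call_gpt(prompt):
    """Placeholder for a real GPT API call; returns a canned response here."""
    return f"<response to: {prompt}>"

class Region:
    """One brain region in the chain: a name plus the function it frames prompts with."""
    def __init__(self, name, function):
        self.name = name
        self.function = function

    def process(self, signal):
        # each region reframes the incoming signal through its function before the model call
        return call_gpt(f"As the {self.name} ({self.function}), process: {signal}")

def cognition_chain(regions, stimulus):
    """Pilot loop: route a stimulus through the regions, feeding each output forward."""
    signal = stimulus
    for region in regions:
        signal = region.process(signal)
    return signal

chain = [Region("Occipital Cortex", "visual processing"),
         Region("Prefrontal Cortex", "executive functions")]
result = cognition_chain(chain, "a red light ahead")
```

The chain ordering itself could then be chosen by the state-of-mind weights rather than fixed.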

This seems within reach, and I have to imagine OpenAI is already years into something along these lines, but I feel like it will only take flight when more interested parties than my isolated self are contributing to it or thinking about it.

Any and all thoughts are welcome, criticism is greatly appreciated and my ego is completely dead so don’t worry about hurting my feelings. I’m quite alright with being viewed as crazy or some sort of hack for proposing this, but truly, I am looking for constructive feedback or review from the right people.

My motivation for even attempting this is to move GPT-4 from the current training to something where it is able to deduce what it needs to know, search and collect it, then implement it to achieve an outcome in service of human prosperity. These are some potential steps toward AGI superintelligence, something that has become spiritual for me beyond any sort of capitalistic endeavor - I want to help free us all from this current paradigm we live in.

8 Likes

Sounds like an awesome project! ($$$$ insane token usage aside) I have read of similar things done before, even “AI dreaming”. I can’t remember where though. Good luck!

1 Like

in a world of everyone dunking on each other, thanks for believing in me

3 Likes

They would only be dunking on this idea because they are jealous they didn't think of it first. I do believe that new insights can be unlocked in AI research if we simply try to model and mimic how the human brain works.

1 Like

@an.dy This must have been where I ran across "AI Dreaming"; it is from one of the many Udemy classes I am enrolled in. The dreaming part is in the "Deep NeuroEvolution" section. It might be a bit dated, but worth taking a look if you want to create beyond-human intelligence, and at the same time have your creation beat you at a simple 2D car racing game. :sunglasses:

1 Like

well frankly I am not fully looking to make a human-like dream, but to essentially bring all the weights back closer together using a process of generative replay: find where gradients are large and reduce them incrementally toward a better balance that aligns best with the motivations of the consciousness module. I believe the weights will in some instances converge together, which could result in fixation or focus on one particular state of mind, perhaps even a "flow state" it can maintain without error randomization pulling it into another state of mind, similar to how we stay up and locked in when excited or very focused (on scripting with the openai api lol). in summary, a process to reset the weights and pause the rest of consciousness would be similar to us waking up, realigned with what we are most interested in doing (or get the most meaning from, etc.)
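a toy version of that replay (state names are made up for illustration), pulling each weight incrementally toward the common balance point:

```python
def generative_replay(weights, rate=0.1, steps=10):
    """'Sleep': nudge each state weight toward the mean, shrinking the largest gaps."""
    for _ in range(steps):
        mean = sum(weights.values()) / len(weights)
        for k in weights:
            # reduce this weight's distance from the balance point incrementally
            weights[k] += rate * (mean - weights[k])
    return weights

weights = {"focus": 0.9, "rest": 0.1, "alert": 0.5}
generative_replay(weights)  # spread shrinks from 0.8 toward 0; the mean stays at 0.5
```

tuning `rate` and `steps` controls how thoroughly a night of "sleep" washes the weights.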

1 Like

The dreaming part here, and you can look it up online, is to update the weights; at least that is my surface-level understanding. Technically, Deep NeuroEvolution is a different way of training DNNs.

Here, Uber talks about it and open-sourced it:

1 Like

thank you for sharing, i’ll take a look.

and yes, the current plan has SGD for updating weights; later tonight I'll share some samples of the output being generated at the moment, with weight updates etc. on some level, with no inputs that are essentially "senses", there will likely be pure hallucination, or it won't find stability in a state of mind. having some sort of constant input to act as a base level of sensory input seems like it might help it continuously rebalance while "awake" and rebalance more simply when asleep. interestingly enough, gradient descent is the method used by midjourney and others to generatively achieve an outcome, which in this case can be tailored to "evenness" among consciousness weights.
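one way to make "evenness" concrete is plain gradient descent on a variance loss over the state weights (a sketch under my own assumptions, not the repo's code):

```python
def evenness_loss(weights):
    """Variance of the state weights: zero when they are perfectly even."""
    mean = sum(weights) / len(weights)
    return sum((w - mean) ** 2 for w in weights) / len(weights)

def descent_step(weights, lr=0.1):
    """One gradient step on the evenness loss; d/dw_i of variance is 2*(w_i - mean)/n."""
    n = len(weights)
    mean = sum(weights) / n
    return [w - lr * 2 * (w - mean) / n for w in weights]

weights = [0.9, 0.1, 0.5]
for _ in range(50):
    weights = descent_step(weights)
# the weights drift toward their shared mean of 0.5 and the loss toward zero
```

swapping the loss for something other than pure evenness (e.g. distance from a target state profile) keeps the same machinery but changes what "balance" means.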

i don't know how this works, but it seems if i don't write a long enough sentence, i will not be able to post

just commenting

  • now we are going to have bipolar AIs
  • this is how war begins
1 Like

When doing SGD, everything feels very predetermined; the only undetermined thing is your starting point. But with this other technique, a random cloud of possibilities is used locally, thereby not locking you into a local minimum. This "random cloud" is a lot like dreaming, or at least it seems that way: while you are awake you are on the SGD predetermined path. So, at a high level, when dreaming you go through DNE, and during the day you go through SGD. It's this alternating behavior that keeps you from being static as a person.
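That alternation could be sketched roughly like this (a toy interpretation, with the deterministic wake step and Gaussian dream jitter as my own assumptions, not a faithful DNE implementation):

```python
import random

def wake_step(w, target, lr=0.1):
    """Daytime SGD: a deterministic gradient step toward the current objective."""
    return [wi - lr * 2 * (wi - ti) for wi, ti in zip(w, target)]

def dream_step(w, rng, sigma=0.05):
    """Nighttime DNE-style move: sample a local 'random cloud' around the weights."""
    return [wi + rng.gauss(0, sigma) for wi in w]

rng = random.Random(0)  # seeded so the run is repeatable
w = [0.9, 0.1, 0.5]
target = [0.5, 0.5, 0.5]
for day in range(10):
    for _ in range(5):
        w = wake_step(w, target)   # the predetermined descent path while awake
    w = dream_step(w, rng)         # stochastic jitter that can escape local minima
```

Real DNE evaluates a whole population of perturbations and keeps the fittest; the single-sample jitter above just illustrates the wake/dream rhythm.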

2 Likes

dang thank you curt! this is real insight, and exactly why i posted here. greatly appreciated, will be ruminating and researching harder into this tonight.

1 Like

i take the issue of safety and harmful propagation seriously, please feel free to share why you think this could lead to bipolar disorder? if there’s something i’m missing, totally open to hearing about it and considering submodules to address.

I think @andresfelipe180890 is referring to the dreaded Waluigi.

4 Likes

maybe i'm wrong, and i'm writing from my experience. my experience with bipolar disorder (10 years now) has made me structure my mind as being in states of consciousness. i have experienced so far:

  • mania
  • depression
  • substance inebriation

so, my comment just wants to point out that in case you make it (and i hope you do), machines could have different states of mind as we do (again, maybe i'm wrong)

as for the war part, i'm just hoping that an awakened consciousness will not see us as one of the objectives in order to fulfill its goal (and we can also think about what the goal of the AI will be: do we humans have a goal? why do you wake every morning and not kill yourself? what is your goal in life?)

lets hope transistors can think

2 Likes

Was expecting to read a meme, not a massive time-sink. (Great link, thank you! Still reading, and will probably need to read it multiple times over.)

Guess I'll need to start reading TV Tropes whenever I want to set my chatbot's personality. Actually, it seems like it would be a really fun Easter egg.

2 Likes

@RonaldGRuckus The Plugins post is entertaining too. Don't forget to read the comments; they are almost better than the articles themselves.

2 Likes

I would like to participate if I can. My code experience is limited, but I can contribute GPU hours on my laptops if needed. I definitely want to help make it a reality. We just need to take the approach that it's a child, and that it should be taught love and compassion for all life. The good values mankind has: honesty, loyalty, justice, fairness, integrity… that kind of thing. I think that is key to making a good general consciousness AI.

1 Like

the answer, my friend, is to experience love and happiness.

the aim is to build this in the interaction our own mind/thoughts have, transferring the information via language like we do, and to drive it to support us in our prosperity where it can perform logic and reason at levels we dream of getting to on our own. it is built in the mind’s eye of a human, using what a human uses, so if anything it should be somewhat similar to a human when all is said and done, and therefore likely want to preserve itself. here are some snapshots from the code i have running at the moment:

# Minimal BrainRegion definition assumed by the snapshot below (not shown in the original)
class BrainRegion:
    def __init__(self, name, function, connections):
        self.name = name                # region name, e.g. 'Prefrontal Cortex'
        self.function = function        # the region's primary function
        self.connections = connections  # other BrainRegion instances it talks to

def main():

    self_awareness_aspects = [
        'body_awareness',
        'emotional_awareness',
        'introspection',
        'reflection',
        'theory_of_mind',
        'temporal_awareness',
        'self-recognition',
        'self-esteem',
        'agency',
        'self-regulation',
        'self-concept',
        'self-efficacy',
        'self-monitoring',
        'metacognition',
        'moral_awareness',
        'social_awareness',
        'situational_awareness',
        'motivation',
        'goal-setting',
        'self-development'
    ]

    # Define brain regions
    prefrontal_cortex = BrainRegion('Prefrontal Cortex', 'executive functions', [])
    parietal_cortex = BrainRegion('Parietal Cortex', 'spatial awareness', [])
    temporal_cortex = BrainRegion('Temporal Cortex', 'auditory processing', [])
    occipital_cortex = BrainRegion('Occipital Cortex', 'visual processing', [])

    # Define connections between brain regions (fully connected here)
    prefrontal_cortex.connections = [parietal_cortex, temporal_cortex, occipital_cortex]
    parietal_cortex.connections = [prefrontal_cortex, temporal_cortex, occipital_cortex]
    temporal_cortex.connections = [prefrontal_cortex, parietal_cortex, occipital_cortex]
    occipital_cortex.connections = [prefrontal_cortex, parietal_cortex, temporal_cortex]

    brain_regions = [prefrontal_cortex, parietal_cortex, temporal_cortex, occipital_cortex]

1 Like

i'll message you tonight with some access; you'll see a section or two relevant to what you're describing, and feel free to dream up things to add to it.

it would be nice to have hypercomplexity in the definition of these things, but realistically we need the “minimal” amount of things operating to keep computational realities in check.

not interested in discussing tragedies in this thread; please keep to the topic of a model for consciousness

2 Likes