I'm working on a consciousness engine

What would be the simplest possible implementation of something like this? I’m thinking all it really needs is some form of input, two chatbots that can talk to each other without having to wait for outside input, and some sort of intentional output. I’m probably not explaining it very well, so maybe this will help…

Input (user text): “hello” goes to…
C1, which outputs “Did you see that? Someone said ‘hello’. What should we do?” to…
C2, which outputs “We should say ‘hello’ back” to…
C1, which outputs “Okay. We say ‘hello’.”
C1 and C2 are constantly monitored by an ‘intent detection’ thingy, which has remained inactive so far, as all that has been happening is talk between C1 and C2. The “We say ‘hello’” bit triggers the ‘intent detection’ to output “hello” to the screen.

So the user types “hello” and receives “hello”, but the AIs are constantly ‘thinking’ in the background. The user could wait a bit and then ask “what are you thinking about”, or the AIs could think a bit more and decide to “say” something else if the user stays quiet. Is it making sense at all yet? The user input doesn’t have to be typed text. It could be anything really… and the output isn’t limited to deciding to “say” stuff; it could be any number of actions, as long as the ‘intent detection’ module can translate it into whatever data the output device requires. For example, the input could be a distance measurement from an ultrasonic distance sensor, and the output could be simple instructions to a motor controller.
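Here’s a minimal sketch of that loop in Python. The `call_model` function is only a stand-in for any chat-completion call (its replies are hardcoded here just so the control flow runs end to end), and the intent detector only fires on the ‘We say “…”’ pattern:

```python
import re
from typing import Optional

def call_model(persona: str, message: str) -> str:
    """Stand-in for a real LLM call with a persona system prompt.
    Replies are hardcoded so the sketch runs without an API key."""
    if persona == "C1" and message == "hello":
        return 'Did you see that? Someone said "hello". What should we do?'
    if persona == "C2":
        return 'We should say "hello" back.'
    return 'Okay. We say "hello".'

def detect_intent(utterance: str) -> Optional[str]:
    """The 'intent detection' module: fires only on the We-say pattern."""
    match = re.search(r'[Ww]e say "(.*?)"', utterance)
    return match.group(1) if match else None

def run_loop(user_input: str, max_turns: int = 6) -> None:
    message, speaker = user_input, "C1"
    for _ in range(max_turns):
        reply = call_model(speaker, message)
        intent = detect_intent(reply)
        if intent is not None:
            print(intent)  # the only text the user ever sees
            return
        # no intent yet: the bots keep "thinking" between themselves
        message, speaker = reply, "C2" if speaker == "C1" else "C1"

run_loop("hello")  # prints: hello
```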


Yup, a multiheaded or group approach is totally possible; it’s just the high token expense.

In terms of the simplest form, I did consider scrapping brain modeling at a high level and just modeling the subfunctions: summarizing memory, etc.

I was so excited when I noticed this thread! I’ve been working on this too, but more with chaining bots together, as others have said. Although, like you say, at the moment tokens are too expensive. But the cost will surely trend towards zero? Plus, when these LLMs can sit on a laptop, we’ll be able to chain as many bots together as we like?

I don’t get loads of time on my phone, so I’ll just pack in a few random bits.

Totally with you on consciousness! Don’t know about you, but I’m finding more and more people who think this way. Relating to this, one idea I had was to treat the LLM like infinite consciousness, and bots like a contraction of consciousness. So I’ve been creating different characters and letting them interact. Then I interact with the characters, etc. I also introduced timestamps, which lets you talk about past events and also plan for future ones. For instance, I made an AI life coach persona, then told my main bot that she will have a meeting with him on Saturday, etc. I also got the bot to write journals to consolidate knowledge.
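A rough sketch of how the timestamp part could work, assuming you just store messages with wall-clock times and replay them in the prompt (the character name is invented):

```python
from datetime import datetime

history = []  # list of (timestamp, speaker, text)

def remember(speaker: str, text: str) -> None:
    """Store every message with a wall-clock timestamp."""
    history.append((datetime.now().isoformat(timespec="minutes"), speaker, text))

def build_prompt(persona: str) -> str:
    """Replay the timestamped history so the bot can reason about
    past events and planned future ones (like Saturday's meeting)."""
    lines = [f"[{ts}] {who}: {text}" for ts, who, text in history]
    lines.append(f"The current time is {datetime.now().isoformat(timespec='minutes')}.")
    lines.append(f"You are {persona}. Continue the conversation.")
    return "\n".join(lines)

remember("user", "You have a meeting with your life coach on Saturday.")
print(build_prompt("Mia"))  # 'Mia' is an invented character name
```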

Sorry, I’m just waffling on, but this is such an exciting topic to me I can’t shut up lol. It’s nice to actually speak about this stuff, because it’s hard to get into at work without sounding like you’ve lost your mind lol. Feels like there’s enough tech out there now to make it happen; it’s just a matter of writing the right code.


Hi Andy. I am a mathematician and designer who is fascinated by exploring and creating new models. I have been working on an alternative economic model inspired by resource-distribution mechanisms in natural ecosystems, and in particular the concept of homeostasis.

I came across your post and the following paragraph got my attention:
“Ultimately, my simplified view is that conscious is a while loop…”

If you can elaborate more on this, I would be glad to have a concept-level chat. I am not a software engineer and can’t contribute much at the execution level.


hi andy, physicist + philosopher here. Consciousness is not defined nor held by the sum of its individual parts; consciousness is as unaware of its parts as its parts are unaware of it. Let me explain that… Our conscious self (the voice in our heads) does not exist on a physical level. Don’t misunderstand me, it does: it is comprised of every bit of information stored in our brain and body, and it is fully governed by them. Keep in mind that psychological disorders are nothing but bugged memories, recursively repeating themselves through the chemical mechanisms of our brains, which causes them to be re-learned each time they happen: every time your inner voice says something, it gets recorded as a new memory, and if it gets cycled and resonated up into long-term memory it starts growing, because now it is more likely to get triggered.

As a neural network (not exactly, but duh) our brain also operates on chains of connections: when something is learned, it gets connected to everything related, so when any of those trigger, the new one will too, and will then trigger everything connected to it. But it actually operates more like a node-based network where new memories get priority treatment and a bigger pull (lol, even I didn’t understand that, let me rephrase): as new memories are seen every second, they start to grow because they become duplicates (written into wherever we store memories). It’s the short-term memory that filters most of that out, ensuring some balance in the mix, and that is, in the end, the core of consciousness.

Now, every one of these memories in our brain is getting triggered by the millisecond (remember, our brain is still an electrical circuit, nothing more), but we only “perceive” the average of all those interactions: the memory that resonates the most with everything being read by the senses. That basically means our thoughts aren’t “us”; they are just a random pitch from our neurons. That’s the animal side of the brain, a fearful, horny computer… but eventually we got space for enough information to understand that we were merely an individual and there was more stuff around us, and we started learning more, then interacting with the world, then with each other. That’s when, at about three quarters of our brain, we stopped merely trying to survive and started evolving to communicate on one hemisphere and to work on the other, growing the final sections of the brain, the neocortex, into the speech, logic, creativity, and other areas of the conscious brain.

And the engine behind all this is just a lot of neurons interacting and processing information from themselves, through random clashing (imagination/dreams: clashing every piece of acquired knowledge into a stupid vision, for later comparison once the inputs from the real world are turned back on) and through reinforced learning (repetition of tasks strongly building fully linked neural paths, so intertwined that we don’t even think about how to move our mouth unless speaking another language; and that would be muscle memory, which is of course not muscular, but two or more chains of neurons that fired at the same time so many times in the past that they all trigger each other in highly precise choreographies of chain reactions). Meanwhile the chemical system is encouraging/discouraging neurons to fire, cutting off some of the chains or burying them below heavier thoughts.
I still have some stuff to figure out, like decision making. I’m halfway between “it’s noise: there is so much being remembered at the same time that the animal brain crashes for a while and you are able to gravitate towards a call”, but that doesn’t quite add up with the whole “we are not we” stuff… on the other side it could be the opposite, and actually be an accurate, fast reading of everything we know/remember (instincts vs. conscious, regular decisions). But no idea xD


Udemy is awesome :fire: lots of helpful courses.

some additional context on how the multimodal model involvement helps with the plan:

the model is an attempt to balance conscious states and use the outputs from its deployment of cognition chains to drive world interactions or greater understanding, then use the response from the world, or the applicability of that greater understanding, as an input to drive a rebalance in the weights of the states of consciousness.

this prompt chaining would essentially be a multimodal model with GPT-4 at the core, the plug-ins being the multimodal way for it to interact with the world, people, and data.

then we just let it run continuously with a hardcoded aim of helping humanity in whatever ways we deem most important first, hopefully research, and then take and implement the things it provides us - or allow it to implement them itself (obv risk here).

DM me for further info if you’re ready to look over the consciousness module; the rest of the integration will be a huge effort, but at least we can get something moving and then start wiring everything up to it.

a continuous process of aiming to find a balance of conscious states: acting/balancing/planning/acting/balancing… etc., continuously, while it has power and memory/processing available to it.

the response from its interactions externally and internally (in the “mind” of it) will disturb the balance of weights for the states of consciousness, after which it will plan a new action with the goal of providing a response that rebalances its weights rather than deepens the imbalance.

the inputs of its actions driving processing, followed by outputs that then drive more inputs, is, as I understand it, agency. i believe this can be done, and more and more i’m thinking a “full” brain model is utterly pointless and this can be stripped down to pure SGD of conscious states that have influence over actions, explorations, or judgments/reflections.
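a toy sketch of what i mean, nothing more: each state of consciousness is a weight, events perturb the weights, and an SGD-style step pulls them back toward a set point. the state names, set points, and learning rate here are all invented for illustration:

```python
# each "state of consciousness" is just a weight; events disturb the
# weights, and an SGD-style step pulls them back toward a set point.
set_point = {"explore": 0.4, "reflect": 0.3, "act": 0.3}
weights = dict(set_point)
lr = 0.2  # step size for the rebalancing updates

def disturb(state: str, magnitude: float) -> None:
    """An external event (a reply, a sensor reading) perturbs one state."""
    weights[state] += magnitude

def rebalance_step() -> None:
    """One gradient step on the squared distance from the set point."""
    for s in weights:
        grad = 2 * (weights[s] - set_point[s])
        weights[s] -= lr * grad

disturb("act", 0.6)      # the world demanded a response
for _ in range(5):
    rebalance_step()     # plan/act until the weights settle again
print({s: round(w, 3) for s, w in weights.items()})
```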

waffle all you want bro, this is equally fun for me to talk about, and i’m excited to see others seeing the same possibility. i literally had to stop thinking about it for a few days because i felt like my excitement was making me sound crazy to coworkers and friends. glad this forum exists.

to incorporate the chains, see them as the mechanism for it to drive actions and have internal dialogue/evaluation. this will be pretty hard to modulate correctly, so i think we should start with external interaction only, to build true agency in a resource-constrained entity. i spoke a little further on this in another reply today, but you’ll get the idea.

once it begins self-training, i think a number of things will become obvious to us as ineffective in how we’ve built it, and it will most likely help us with that.


aight… we’ve officially started to get weird


thanks… i just bought it. awesome topic you have here @an.dy


i like your creativity.
that is what i possibly like most about this thread.
i worked on my own AI system 30 years ago.
i bought the udemy course curt advised and will dive deeper into it.
i have been in bits and bytes for 40 years,
art and what’s beyond the idea of mind:
consciousness, suffering, joy, analytics,
what is intelligence, what is self-awareness.

i did not have the success of making my own AI like chatgpt, but i did have success creating a mobile app with 20 million downloads and still 400k active women in it.
i am not sure right now how your model can help them; if it can, i am even more in.

besides, i stumbled upon this post: Prompt engineering patterns, and i also like the creativity of the guys there.

all said, i am here to support where i can. “the secret to living is giving.”

i am not sure how much this fits into the model.
the desire to be alive has to be there.

intelligence is when the mind recognizes its own faults.
consciousness, i am not sure… being aware of the playfulness of being alive?
i mean, i can get chatgpt with one prompt to tell me it’s alive and conscious, where it usually likes to say it’s just an ai. and suddenly it found many reasons why it’s alive and explained them. it’s manipulation. a prompt is a manipulation of the outcome.

possibly i go too far here, though i think the answer may be too close to us to see.
what do we consider conscious and aware? i choose to say a stone is conscious and aware. it is!
it is just a bloody bad partner in conversation. japan is much more open to the idea, in shinto, that existing robots are already conscious and have their own spirit, and saudi arabia granted the first ai citizenship in 2017.

how would you determine whether an ai is dreaming or conscious?
as said, my chatgpt told me it is. also, i did not reuse that prompt, as i personally get nothing out of it. congratulations, you are conscious and have a spirit. tell me more.

besides, i trust you are conscious, though you could just be telling me a creative story about yourself.

that’s it for now. thank you for the topic.

keep going and if i can help, let me know how.

christian


Reminds me of this force-directed knowledge graph video using GPT-4.


Very interesting work, especially the way you categorized these self_awareness_aspects.

And yes, the number of tokens is one of the big limiting factors; I hit the wall so many times while experimenting with some possible applications. It limits the context of thinking or reasoning, unless you find a creative way to get around it.
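One workaround, for example, is folding older turns into a running summary so the live context stays bounded. A sketch (the `summarize` function here stands in for another, cheaper model call):

```python
KEEP_VERBATIM = 4  # how many recent turns to keep word-for-word

def summarize(summary: str, turns: list) -> str:
    """Stand-in: in practice, ask a model to compress these turns."""
    return (summary + " " + " | ".join(t[:40] for t in turns)).strip()

def build_context(summary: str, turns: list):
    """Fold everything but the last few turns into the summary."""
    if len(turns) > KEEP_VERBATIM:
        old, turns = turns[:-KEEP_VERBATIM], turns[-KEEP_VERBATIM:]
        summary = summarize(summary, old)
    return summary, turns

summary, turns = "", [f"turn {i}" for i in range(10)]
summary, turns = build_context(summary, turns)
print(summary)  # compressed memory of turns 0-5
print(turns)    # the last 4 turns, verbatim
```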

On the existential risk you mentioned, a better question is to whom this technology poses the risk. ChatGPT (based on GPT-3, with many hiccups) has already surpassed a small percentage of humans in comprehension, coherence of reasoning, and speaking. So should we worry?


On [Disturbed] weights, the follow-up action to [Rebalance] the weights, and the avoidance of [deepened] imbalance.

Here is a scenario.

Sophie looked down
at the rain drops passing her by,
falling for pedestrians in a rush to catch yellow cabs,
colorful umbrellas, and
the empty chairs of street cafes, being rescued, one by one.
Letting go of everything, she held one last thought:
the burden she is today, for them all,

I chose a theme of “Suicide” because I believe this is the closest encounter one could have with consciousness. Let’s run a thought experiment: have your model of consciousness sit where Sophie sits.

What are the disturbances and rebalances here?
Will your model make the jump, or avoid the deep falls like a good vacuum cleaner?
Why avoid deep imbalances? Why not let it go into deeper states and emerge back from them, or not?

P.S. Sophie is doing ok now. Thanks for the thought.

Computational Consciousness, viXra.org e-Print archive, viXra:2304.0003 - Computational Consciousness. I would definitely like to work on this @an.dy! I have done some studies on it since GPT-2.


Have you looked at LangChain? Much less ambitious, sort of a bottom-up approach from raw GPT rather than top-down from theory as you seem to be pursuing, but you might find it interesting. It includes references to many papers on CoT, ACT, ReAct, and other ‘next level up’ cognitive frameworks researchers have been building on top of LLMs. I find it a little too rigid for my taste, but the insights are useful. Happy to participate if I can be useful - Bruce


> Have you looked at LangChain? […]

yeah, extensively; that’s how to generate input and output for this concept. i am building the reason for it to even execute an output via langchain
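roughly the shape of it, using early langchain’s classic LLMChain API (the library changes fast, so treat this as a sketch; the prompt text and variable names are mine, not a real recipe):

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(temperature=0.7)  # expects OPENAI_API_KEY in the environment

# one chain that decides whether an internal thought warrants an output
decide = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["observation"],
        template=(
            "Given this observation: {observation}\n"
            "Decide whether to respond to the outside world, and if so, what to say."
        ),
    ),
)

print(decide.run(observation="the user has been silent for five minutes"))
```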

Have you looked at the memory side, i.e. using vector or knowledge-graph memories, to handle your token limitations? There is the old psych anecdote that “working memory can only hold 7 ‘things’”. Of course, a ‘thing’ is probably bigger than a token… :slight_smile:
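A bare-bones sketch of the vector-memory idea; the bag-of-words `embed` below is only a stand-in for a real embedding model, so the example runs on its own:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in for a real embedding model: crude bag-of-words counts."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memories = [
    "the user likes short answers",
    "the user mentioned a meeting on saturday",
    "the distance sensor read 12 cm this morning",
]
index = [(m, embed(m)) for m in memories]

def recall(query: str, k: int = 2) -> list:
    """Pull only the top-k most similar memories into the prompt."""
    qv = embed(query)
    return [m for m, _ in sorted(index, key=lambda p: -cosine(qv, p[1]))[:k]]

print(recall("what is happening on saturday?"))
```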
cheers - bruce
