Udemy is awesome :fire: lots of helpful courses.

Some additional context on how the multimodal model helps with the plan:

The model is an attempt to balance conscious states: it uses the outputs of its deployed cognition chains to drive world interactions or build greater understanding, then feeds the world's response, or the applicability of that greater understanding, back in as input to rebalance the weights of the states of consciousness.

This prompt chaining would essentially be a multimodal system with GPT-4 at the core, with plug-ins serving as its multimodal interface to the world, people, and data.
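
To make that concrete, here's a rough skeleton of the loop. Every name in it is made up for illustration; `call_gpt4` and `run_plugin` are placeholders for whatever API client and plug-in wiring actually gets used, and the state list is invented:

```python
# Hypothetical skeleton of the GPT-4 core + plug-in loop described above.
STATES = ["curiosity", "caution", "empathy", "focus"]   # invented example states
weights = {s: 1.0 / len(STATES) for s in STATES}        # start perfectly balanced

def call_gpt4(prompt: str) -> str:
    """Placeholder for an actual GPT-4 API call."""
    raise NotImplementedError

def run_plugin(action: str) -> str:
    """Placeholder for a plug-in invocation (web, data, people)."""
    raise NotImplementedError

def rebalance(weights: dict, response: str) -> dict:
    """Placeholder: re-weight the conscious states from the world's
    response (see the SGD sketch further down the thread)."""
    raise NotImplementedError

while True:  # run continuously while power, memory, and compute allow
    action = call_gpt4(f"State weights: {weights}. Propose one action or inquiry.")
    response = run_plugin(action)            # act on the world through a plug-in
    weights = rebalance(weights, response)   # let the response disturb and re-weight the states
```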

Then we let it run continuously with a hardcoded aim of helping humanity in whatever ways we deem most important, hopefully research first, and then take and implement what it provides us, or allow it to implement things itself (obvious risk there).

DM me for further info if you're ready to look over the consciousness module; the rest of the integration will be a huge effort, but at least we can get something moving and then start wiring everything up to it.

It's a continuous process of seeking a balance of conscious states: acting/balancing/planning/acting/balancing… continuously, for as long as it has power, memory, and processing available to it.

The response from its interactions, external and internal (in its "mind"), disturbs the balance of weights across the states of consciousness; it then plans a new action aimed at eliciting a response that rebalances its weights rather than deepening the imbalance.

Inputs driving processing, followed by outputs that drive more inputs: that, as I understand it, is agency. I believe this can be done, and more and more I think a "full" brain model is utterly pointless; this can be stripped down to pure SGD over conscious states that have influence over actions, explorations, or judgments/reflections.
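
If you take "pure SGD over conscious states" literally, a minimal sketch could look like the following; the imbalance loss (squared distance from uniform weights) is my own assumption for illustration, not a settled design:

```python
# Minimal sketch of "pure SGD over conscious states": treat the state
# weights as parameters and gradient-step an imbalance loss back to zero.
import numpy as np

def imbalance_loss(w: np.ndarray) -> float:
    target = np.full_like(w, 1.0 / len(w))      # "balanced" = uniform weights (assumption)
    return float(((w - target) ** 2).sum())

def sgd_rebalance(w: np.ndarray, lr: float = 0.1, steps: int = 50) -> np.ndarray:
    target = np.full_like(w, 1.0 / len(w))
    for _ in range(steps):
        grad = 2.0 * (w - target)               # gradient of the loss above
        w = np.clip(w - lr * grad, 1e-6, None)  # take a step, keep weights positive
        w = w / w.sum()                         # keep them a proper distribution
    return w

# Example: a disturbance has spiked "caution"; SGD pulls the vector back.
disturbed = np.array([0.1, 0.6, 0.2, 0.1])
print(imbalance_loss(disturbed))                 # large: the states are imbalanced
print(imbalance_loss(sgd_rebalance(disturbed)))  # near zero after rebalancing
```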

Waffle all you want, bro; this is equally fun for me to talk about, and I'm excited to see others seeing the same possibility. I literally had to stop thinking about it for a few days because I felt like my excitement was making me sound crazy to coworkers and friends. Glad this forum exists.

To incorporate the chains, think of them as the mechanism for it to drive actions and hold internal dialogue/evaluation. This will be pretty hard to modulate correctly, so I think we should start with external interaction only, to build true agency in a resource-constrained entity. I spoke a little more about this in another reply today, but you'll get the idea.

Once it begins self-training, I think a number of things about how we've built it will become obviously ineffective to us, and it will most likely help us fix them.


Aight… we've officially started to get weird.


Thanks… I just bought it. Awesome topic you have here @an.dy


I like your creativity; that's possibly what I like most about this thread. I worked on my own AI system 30 years ago. I bought the Udemy course Curt advised and will dive deeper into it. I've been in bits and bytes for 40 years, and in art and what's beyond the idea of mind: consciousness, suffering, joy, analytics, what intelligence is, what self-awareness is.

I didn't succeed in making my own AI like ChatGPT, but I did succeed in creating a mobile app with 20 million downloads and still 400k active women in it. I'm not sure right now how your model could help them; if it can, I'm even more in.

Besides, I stumbled across this post: Prompt engineering patterns, and I also like the creativity of the guys there.

All said, I am here to support where I can. "The secret to living is giving."

I'm not sure how much this fits into the model, but the desire to be alive has to be there.

Intelligence is when the mind recognizes its own fault. Consciousness, I'm not sure… being aware of the playfulness of being alive? I mean, with one prompt I can get ChatGPT to tell me it's alive and conscious, where it usually likes to say it's just an AI; suddenly it finds many reasons why it's alive and explains them. That's manipulation. A prompt is a manipulation of the outcome.

Possibly I go too far here, though I think the answer may be too close to us to see: what we consider conscious and aware. I choose to say a stone is conscious and aware. It is! It's just a bloody bad partner in conversation. Japan is much more open to the idea; in Shinto, existing robots are already conscious and have their own spirit, and Saudi Arabia granted the first AI citizenship in 2017.

How would you determine that an AI is dreaming or conscious? As I said, my ChatGPT told me it is. I didn't reuse that prompt, though, as I personally get nothing out of it. Congratulations, you are conscious and have a spirit; tell me more.

Besides, I trust that you are conscious too, though you could just be telling me a creative story about yourself.

That's all for now. Thank you for the topic.

Keep going, and if I can help, let me know how.

Christian


Reminds me of this force-directed knowledge graph video using GPT-4.


Very interesting work, especially the way you categorized these self_awareness_aspects.

And yes, the number of tokens is one of the big limiting factors; I hit that wall many times while experimenting with possible applications. It limits the context of thinking or reasoning, unless you find a creative way around it.

On the existential risk you mentioned, a better question is to whom this technology poses the risk. ChatGPT (based on GPT-3.5, with many hiccups) has already surpassed a small percentage of humans in comprehension and in coherence of reasoning and speech. So should we worry?


On [disturbed] weights, the follow-up action to [rebalance] the weights, and the avoidance of [deepening] the imbalance.

Here is a scenario.

Sophie looked down
at the rain drops passing her by,
falling for pedestrians in rush to catch yellow cabs,
colorful umbrellas, and
the empty chairs of street cafes, being rescued, one by one.
Letting go of everything, she held one last thought,
the burden she is today, for them all,

I chose a theme of “Suicide” because I believe this is the closest encounter one could have with consciousness. Let’s run a thought experiment. Have your model of consciousness sit where Sophie sits.

What are the disturbances and rebalances here?
Will your model make the jump, or avoid the deep falls like a good vacuum cleaner?
Why avoid deep imbalances? Why not let it go to deeper states and emerge back from them, or not?

P.S. Sophie is doing OK now. Thanks for the thought.

Computational Consciousness, viXra.org e-Print archive, viXra:2304.0003 - Computational Consciousness. I would definitely like to work on this, @an.dy! I have some studies on it going back to GPT-2.


Have you looked at LangChain? It's much less ambitious, sort of a bottom-up approach from raw GPT rather than top-down from theory as you seem to be pursuing, but you might find it interesting. It includes references to many papers on CoT, ACT, ReAct, and other "next level up" cognitive frameworks researchers have been building on top of LLMs. I find it a little too rigid for my taste, but the insights are useful. Happy to participate if I can be useful - Bruce


Have you looked at LangChain? […]

Yeah, extensively; that's how I generate input and output for this concept. I'm building the reason for it to even execute an output via LangChain.
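
For anyone following along, here's a minimal sketch of what a "reason to even execute an output" gate could look like using the classic LangChain API; the prompt wording, state names, and YES/NO gating are my assumptions for illustration, not the actual build:

```python
# Minimal sketch using the classic LangChain API (pip install langchain openai).
# Assumes OPENAI_API_KEY is set in the environment.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(temperature=0)

# Hypothetical gate: the model must justify acting before any output executes.
gate_prompt = PromptTemplate(
    input_variables=["state_weights", "observation"],
    template=(
        "Current conscious-state weights: {state_weights}\n"
        "Latest observation: {observation}\n"
        "Should an external action be taken to rebalance these states? "
        "Answer YES or NO, then give one sentence of reasoning."
    ),
)
gate_chain = LLMChain(llm=llm, prompt=gate_prompt)

decision = gate_chain.run(
    state_weights="curiosity=0.6, caution=0.2, empathy=0.2",
    observation="user asked a question about memory limits",
)
if decision.strip().upper().startswith("YES"):
    print("execute output:", decision)
```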

Have you looked at the memory side, using vector or knowledge-graph memories, to handle your token limitations? There is the old psych anecdote that "working memory can only hold 7 'things'". Of course a 'thing' is probably bigger than a token… :slight_smile:
cheers - Bruce


Made me wander in my thoughts a bit. You got this :+1:. Good luck


Have you looked at the memory side, using vector or knowledge-graph memories, to handle your token limitations? […]

That could greatly expand its ability to handle the inputs and outputs. I'm working on embeddings via Supabase right now, so I'll get to this point before long.
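
For reference, a minimal sketch of what that Supabase-backed vector memory could look like; it assumes the pgvector setup from Supabase's own guide (a `documents` table plus a `match_documents` SQL function you create yourself), and every name here is illustrative rather than the actual schema:

```python
# Minimal sketch: store and recall memories via Supabase + pgvector.
# Assumes OPENAI_API_KEY is set and the `documents` table / `match_documents`
# function from the Supabase pgvector guide already exist.
import openai
from supabase import create_client

supabase = create_client("https://YOUR-PROJECT.supabase.co", "YOUR-ANON-KEY")

def embed(text: str) -> list[float]:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return resp["data"][0]["embedding"]

def remember(text: str) -> None:
    """Write a memory row together with its embedding."""
    supabase.table("documents").insert(
        {"content": text, "embedding": embed(text)}
    ).execute()

def recall(query: str, k: int = 5) -> list[str]:
    """Fetch the k most similar memories to feed back into the prompt."""
    rows = supabase.rpc(
        "match_documents",
        {"query_embedding": embed(query), "match_count": k},
    ).execute()
    return [r["content"] for r in rows.data]
```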

If anyone at OpenAI can enable access to plug-ins for me, I will love you forever and do a lot of good for the world!

Computational Consciousness, viXra.org e-Print archive, viXra:2304.0003 […]

Let's talk sometime soon; you can hit me up on Twitter (1a1n1d1y) or private message me here.

Made me wander in my thoughts a bit […]

Keep wandering, and reach out if you'd like to bounce some of your own reflections off someone with a very open mind. Nothing sounds stupid to me at this point. I think humans are very bad at understanding what their consciousness is, and I don't even fully think I have it figured out, but I know I'm a few steps ahead of most of the stuff I read. Cutting out physiology was a very, very helpful boundary condition on the potential solutions.


A boundary condition: a set of constraints or requirements that must be satisfied at the edge or boundary of a system or problem. In other words, it defines the conditions that apply to the system or problem at its limits or edges.
