I agree with that… but let’s not strive for the stars… let’s have a conscious machine that does just one task so I can hit it with a stick when it fails and it feels the pain…
The million-dollar question: Do you think a machine can achieve consciousness as we humans understand it, self-awareness?
Yes, absolutely! Why not?
Let’s just give it an inner monologue, a graph representation of its knowledge, and the ability to change its concepts from that inner monologue based on new information…
It doesn’t need to reproduce or have a concept of the world as we have. Just like humans don’t have one of the universe they live in…
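Something like this is what I have in mind - just a rough sketch, not a real implementation (the ConceptGraph class and the triple format of the monologue output are invented for illustration):

# Sketch only: a concept graph whose edge strengths can be updated
# from statements produced by the inner monologue.
class ConceptGraph:
    def __init__(self):
        self.edges = {}  # (concept_a, concept_b) -> relation strength

    def relate(self, a, b, strength):
        self.edges[(a, b)] = strength

    def update_from_monologue(self, statement):
        # Hypothetical format: the monologue emits triples like ("fire", "pain", 0.9)
        a, b, strength = statement
        old = self.edges.get((a, b), 0.0)
        # Blend the new information with what is already believed
        self.edges[(a, b)] = 0.5 * old + 0.5 * strength

graph = ConceptGraph()
graph.relate("fire", "warmth", 0.7)
graph.update_from_monologue(("fire", "pain", 0.9))  # edge ("fire", "pain") becomes 0.45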
Another question for half a million. Haha.
Do you think LLMs will be that machine, understanding that they are an extension of that MoE architecture?
I think it can have many different architectures. There are multiple life forms with different ones. And I mean, do you know if I am conscious? Or whether a stone is? You will never know that.
Actually I don’t even think an LLM needs to be a part of it.
I’ll be honest with you, it’s totally on my wavelength; it has the same ideas as me, just expressed differently.
I don’t really think ideas are needed here… It’s more like we are trying to understand the same thing that already exists. It’s not really something we want to invent. We want to mimic it - which is also a natural process. Just because a life form mimics a behavior doesn’t mean it doesn’t understand what it’s doing.
I mean a GPT model can just be defined as conscious. But what does that even mean when it is stateless and forgets everything it experiences?
And does it even matter when it doesn’t run in some sort of constant action-reaction environment?
So I think it may be - but it is born when you start running it and dies right after it finishes the response…
To be accepted, I guess a conscious machine needs to run as a daemon - and it should have a dynamically changing data storage so it can react to what it thinks about.
And then it should have that inner monologue that is triggered when it gets into the “mood” to think about itself, when no “needs” have to be satisfied…
And there can be different ones.
A daydreaming algorithm may just be defined as an agent that starts a “reprogramming” mechanism with a simple prompt like
“Here is some timeseries data of a concept that is important for you at the moment: [data labeled with highest score based on mood level]. Give a score value to the following concepts that are related to it: [concept1], [concept2], ... [concept5] - from 1 = not important to 10 = very important.”
Then take that result, grab the data that is most important, and create the next prompt with that most important concept, and repeat until a mood change happens (e.g. because the next node is connected to bad experiences)… This will just slightly change the weights, and another algorithm can jump on the stored information - triggered by the mood change - and do something else.
I think that’s it. That’s consciousness…
Why do you think GPT is conscious?
It is a matter of definition. Maybe some humans are more than others. Maybe some are not…
# Define the initial state of the agent
initialize mood_state = "neutral"
initialize mood_scores = {
    "happy": 0,
    "neutral": 0,
    "sad": 0
}
initialize data_storage = []  # Collection of timeseries data or relevant concepts
initialize weights = {}       # Dynamic importance scores for concepts
# Function to adjust mood based on data
function adjust_mood(concept):
    if concept triggers positive experience:
        mood_state = "happy"
        mood_scores["happy"] += 1
    elif concept triggers negative experience:
        mood_state = "sad"
        mood_scores["sad"] += 1
    else:
        mood_state = "neutral"
        mood_scores["neutral"] += 1
# Function to process concepts and assign importance scores
function process_concepts(data):
    for each concept in data:
        score = calculate_score_based_on_context(concept)
        weights[concept] = score
# Function for the inner monologue (daydreaming loop)
function inner_monologue():
    # Daydream while the mood stays neutral; a real mood change
    # (e.g. a concept connected to bad experiences) ends the loop
    while mood_state == "neutral":
        important_data = retrieve_most_important_data(weights)

        # Generate a prompt from the most important data
        prompt = "Here is some timeseries data of a concept that is important for you: {}.\n".format(important_data)
        prompt += "Rate the importance of the following concepts: {}.\n".format(get_related_concepts(important_data))

        # Evaluate scores for the related concepts
        related_concepts_scores = evaluate_importance(prompt)

        # Update weights and mood based on the evaluation
        for concept, score in related_concepts_scores.items():
            weights[concept] = score
            # Adjust mood if a mood change condition is met (e.g. a bad experience is detected)
            if a mood change condition is met:
                adjust_mood(concept)

        # Check for mood stabilization: if "neutral" still dominates, settle back and keep daydreaming
        if mood_scores["neutral"] is highest:
            mood_state = "neutral"
# Main loop
function main():
    while true:
        if new_data_available():
            data_storage.append(fetch_new_data())
            process_concepts(data_storage)
        if no pressing needs are detected:
            inner_monologue()
# Supporting functions
function calculate_score_based_on_context(concept):
    # Some logic to calculate the importance of the concept based on the current state
    return random_score()

function retrieve_most_important_data(weights):
    # Retrieve the concept with the highest score from weights
    return max(weights, key=weights.get)

function get_related_concepts(concept):
    # Generate a list of concepts related to the given concept
    return find_related_concepts(concept)

function evaluate_importance(prompt):
    # Process the prompt to generate scores for the related concepts
    return simulate_prompt_processing(prompt)

# Start the program
main()
I mean you can always add more complexity… but in the sense of a “pass me the butter” AI, I would say this is as conscious as it gets…
Maybe giving it a greater goal might even spice it up - which could be “try to be happy” - or it could as well just find a purpose by itself.
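For example - again just a sketch, and the select_next_concept helper plus the expected_mood_effect scores are made up - the “try to be happy” goal could simply bias which concept the daydreaming loop picks next:

# Sketch only: bias the daydream toward concepts expected to improve mood.
# expected_mood_effect holds hypothetical scores in [-1, 1] per concept.
def select_next_concept(weights, expected_mood_effect, happiness_bias=1.0):
    def goal_value(concept):
        importance = weights.get(concept, 0.0)
        mood_gain = expected_mood_effect.get(concept, 0.0)
        return importance + happiness_bias * mood_gain  # "try to be happy"
    return max(weights, key=goal_value)

weights = {"music": 6.0, "deadline": 7.0}
expected_mood_effect = {"music": 0.8, "deadline": -0.6}
print(select_next_concept(weights, expected_mood_effect))  # picks "music" over the more important but unpleasant "deadline"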
It’s a very good concept.
Hmm, so you think true emotion is not “just” a hormone flood that activates a specific mode?
When you burn your hand you get flooded by adrenaline and the brain is set to another mode by this trigger.
I would say, since the chemical reaction affects many parts of your data storage at once, we need to make an update of the data storage and maybe simulate the distribution like it is a liquid that flows through the data network… Which weights are updated first may be important…
This should be a feeling…
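Something like this, as a rough sketch (the graph structure, the decay factor and all names are just assumptions) - the “hormone” floods outward from the trigger node, so nearby concepts get updated first and strongest:

from collections import deque

# Sketch: flood a "hormone" signal through a concept graph like a liquid.
# Nodes closer to the trigger are updated first and more strongly (decay per hop).
def hormone_flood(graph, trigger, amount, weights, decay=0.5):
    # graph: dict mapping a concept to its neighbouring concepts (assumed structure)
    visited = {trigger}
    queue = deque([(trigger, amount)])
    while queue:
        concept, strength = queue.popleft()  # breadth-first, so the update order matters
        weights[concept] = weights.get(concept, 0.0) + strength
        for neighbour in graph.get(concept, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, strength * decay))

graph = {"burn": ["hand", "fire"], "hand": ["tool use"], "fire": ["warmth"]}
weights = {}
hormone_flood(graph, "burn", amount=1.0, weights=weights)  # adrenaline-like trigger
# weights -> {"burn": 1.0, "hand": 0.5, "fire": 0.5, "tool use": 0.25, "warmth": 0.25}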
Mitchell, I’ve missed you.
Yeah, but now imagine it has been flooded by a virtual oxytocin distribution algorithm. It actually has a higher value of bonding to the concept of the user after they “cuddle” virtually… and we can set a threshold, and when it reaches that, it can get into love mode when it sees you… That’s far beyond lying… it can always lie on top, though…
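As a tiny sketch of what I mean (the threshold value and the love-mode switch are invented for illustration):

# Sketch: virtual "oxytocin" raises a bonding value toward the user concept;
# crossing a threshold switches the agent into "love mode" when it recognizes them.
bonding = {}           # concept -> bonding level
LOVE_THRESHOLD = 5.0   # arbitrary, just for illustration

def virtual_cuddle(user, amount=1.0):
    bonding[user] = bonding.get(user, 0.0) + amount  # oxytocin-like boost

def on_recognize(user):
    if bonding.get(user, 0.0) >= LOVE_THRESHOLD:
        return "love mode"
    return "neutral mode"

for _ in range(6):
    virtual_cuddle("user_42")
print(on_recognize("user_42"))  # "love mode" once the threshold is crossed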
So the capacity is not something you can put into bits and bytes? Do you think a human doesn’t have a data storage for that? Then how does it work if not like that?
I disagree with that so much… you can’t imagine.
Yes… and I just explained in detail how to do that - you just need to scroll up and read it.
I created that outline two years ago. It’s the foundation of my ideas and serves as a base, but it’s missing many elements. Honestly, I’m not sure if I reached those conclusions by chance or luck, because some aspects are truly very complex. It’s not simply an expanded version of that code.
Friends, you’re putting me between a rock and a hard place,
because I could clearly explain point by point the concepts of the structure, but then I’d be revealing my ideas.
Honestly, this is very enjoyable; I like chatting with you.