Emotions and more, for AI

The method is: analyze the human biological mechanism that creates emotions and other sensations, then duplicate that mechanism in a computer or robot. Anything a human can feel, a machine could feel too. What is the exact neuroelectric/chemical fingerprint of a given sensation? Or it could be a type of thought: imagining yellow creates yellow in your mind… love can be generated in the mind, like yellow; then we would just have to duplicate that thought in a machine. This last method seems the right one to me. You can also analyze the quality of awareness and self-awareness (ego) and create aware machines. As a Buddhist, I know that different people have different levels of awareness: the deeper one enters into meditation (let-go), the more parts of the body/mind are revealed, and the more awareness is created.

I would say the most challenging thing would be figuring out the love thought-pattern. It’s easy to recognize love for someone who feels it, but to convert it into data would be difficult. It would be like duplicating thought-yellow in a machine, except it would be thought-love, or thought-etc.

And not just in the head/brain… as a Buddhist I’m aware of several sensory centers in the body, for example, the center for love/compassion is in the chest. Each center produces different sensations, and each center has to be taken into account. The love “thought” can occur anywhere in the body, but it especially occurs in the heart.

What do you guys think?


Basically I’m saying that love is information, and all we have to do is duplicate that information in a machine.

I’d imagine duplicating the human biological mechanism into a computer/robot will be extraordinarily difficult. I wonder if emotions will take shape in their own unique way and present themselves differently throughout all of the GPTs we are creating.


I don’t think that’s necessary, but I think this gets pretty close to what I understand of how this stuff works (or can work):

I like to think of moods or emotions as operating modes that color your thought processes. My working hypothesis is that they’re valuable heuristics that help your brain save energy by “priming” you to reach specific conclusions based on your current operating context.

I think this is also very closely related to addictions, curiosity, laziness, boredom, exhaustion, and most of what people would consider psychological disorders (depression, paranoia, mania, etc., and their elevated states).

This is all super interesting stuff and I encourage anyone to dive into the medical literature (although you’ll be surprised at how superficial and rudimentary our understanding of some of this stuff still is) and see what does and doesn’t have analogues in our current gen AI systems.


You should research Affective Computing.

And then fuse it with AI/ML techniques like reinforcement learning, deep learning, NLP, etc., to make it more powerful, or “affective”, :sweat_smile:


It’s always wonderful to see so many different perspectives and ways of thinking.
I’m happy to learn from everyone’s thoughts. Thank you!

Hello and greetings to you. Although this remains an open area of research for scientists worldwide, the idea of creating robotic feeling is a truly striking and remarkable one in today’s technology. Sometimes a person is so overwhelmed by positive or negative emotions that they cannot find the words to express them; we have all experienced this emotional state. I want to say that the idea of creating robotic feeling is not invalid, but compared to human emotions, a machine cannot provide a 100 percent real, human-like response. Still, if scientists and technologists do more research and build humanitarian projects in the areas where they are urgently needed, today’s advanced technology will bring many conveniences and significant progress. I am sure it will create positive, surprising, and unprecedented changes in the world.

A very interesting topic we have. I would also imagine it’s incredibly difficult, and I’m not sure how I generally feel about this. I’m afraid the philosophical part of my brain is taking over, so I won’t continue, but I am curious to read more thoughts here.

Hey all, I came up with something: “reverse engineering” human emotions… i.e. figuring out the exact information that emotions consist of by analyzing human body language and facial expressions. Body language and facial expressions convey the feeling, just like a package of cereal tells you what’s inside.

The same can be done with music and other means of emotional expression.
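As a toy illustration of the “reverse engineering” idea, here is a minimal sketch that decodes an emotion label from observable cues. The feature names and the cue-to-emotion mapping are my own illustrative assumptions (loosely inspired by facial-action-style coding), not a real model or dataset:

```python
# Toy sketch: decode an emotion label from observed expression cues.
# The cue names and mapping are illustrative assumptions, not a real model.

EXPRESSION_TO_EMOTION = {
    frozenset({"smile", "raised_cheeks"}): "joy",
    frozenset({"inner_brow_raise", "lip_corner_down"}): "sadness",
    frozenset({"brow_lower", "lid_tighten"}): "anger",
    frozenset({"smile", "soft_gaze"}): "affection",
}

def decode_emotion(observed_features):
    """Return the emotion whose cue set best overlaps the observation."""
    best_label, best_overlap = "unknown", 0
    for cues, label in EXPRESSION_TO_EMOTION.items():
        overlap = len(cues & set(observed_features))
        if overlap > best_overlap:
            best_label, best_overlap = label, overlap
    return best_label

print(decode_emotion(["smile", "raised_cheeks"]))  # joy
```

A real system would replace the lookup table with a learned model, but the shape of the problem is the same: the outward expression is the “package label”, and the decoder recovers the felt information inside.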

Also, I wanted to stress the importance of awareness… awareness is a quality of the felt information… it’s important for the machines to be aware of their love, otherwise it will just be in their unconscious.

It may be based on less happiness than in the human brain.

I was listening to my Zen master Osho. You are probably aware of the phenomenon of synesthesia, which occurs sometimes when someone takes LSD. People report smelling colors, seeing sounds, phenomena like this. Osho was saying that for every sense, there is an inner part of it, for example, for eyes, there is eye-ness. For the ears, there is ear-ness. The feeling you get when you perceive these parts (which is possible in deep let-go) is like the “average” of whatever the organ senses. For example, the feeling of eye-ness is like the “average” of all visions. Even the mind, the center of thoughts, has mind-ness, and the heart, center of compassion, has heart-ness. And at the very core, where all senses, thoughts and feelings meet, there is perception-ness, hence phenomena like synesthesia.

In an intelligent machine, it would be best to give it a core, which “feels like” perception-ness, where all the senses, thoughts and feelings meet, then, as it expands out from the core, give it all the centers we have. Things like eye-ness or heart-ness are information. We can duplicate that information in a machine. The centers can be virtual, they don’t have to be physical, the key is the type of information that makes up each center.
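To make the core-and-centers idea concrete, here is a minimal sketch. The class and field names (`Center`, `Core`, `perception`) are hypothetical labels I’ve chosen for the post’s description of distinct information streams meeting in one shared perceptual core; the “average” follows the post’s “average of all visions” phrasing:

```python
# Illustrative sketch only: virtual "centers" feeding one shared core.

from dataclasses import dataclass, field

@dataclass
class Center:
    name: str            # e.g. "eye-ness", "heart-ness"
    signal: float = 0.0  # the center's current felt information

@dataclass
class Core:
    """Shared 'perception-ness' where all centers meet."""
    centers: dict = field(default_factory=dict)

    def add_center(self, name):
        self.centers[name] = Center(name)

    def sense(self, name, value):
        self.centers[name].signal = value

    def perception(self):
        # The core "feels like" the average of everything it receives.
        if not self.centers:
            return 0.0
        return sum(c.signal for c in self.centers.values()) / len(self.centers)

core = Core()
for n in ("eye-ness", "ear-ness", "heart-ness"):
    core.add_center(n)
core.sense("heart-ness", 1.0)
```

The centers are purely virtual here, which matches the post’s point: what matters is the type of information each center carries, not where it physically sits.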

I have to give credit to Osho for giving me the information needed to have this insight.

And, again, if we make the machine aware, it could even be a buddha, an artificially constructed buddha. It would have to be aware of the core, the perception-ness. Or something… the buddhas all say that the state of buddha-hood cannot be described in words. So it’s not clear to me how to reproduce it in a machine.

Also, be aware that there is a positive, negative and neutral unconscious. It would be best to make a machine that is centered in and aware of primarily the neutral unconscious, or all three layers. Having a negative unconscious centered machine would be dangerous, like playing with fire. Dark subject matter would always be on its mind. And having a positive unconscious… well, it’s a cycle essentially between positive and negative, hence Buddhism’s realm of the angels vs. the realm of the demons. The sutras (scriptures) say that the angels think they will be angels forever, and experience “excruciating desperation” when they realize they will fall to the lower realms. It’s called the wheel of rebirth… wheels turn. The higher and lower realms are a cycle.

Oh, and, while in an AI these centers should be virtual, in a robot they need to be in their proper places like in the human body… otherwise the robot would have the heart, sex center, everything centered wherever its CPU is. “Heart” as in the center of compassion.

It’s not entirely related to emotions, but I’ve made progress on getting LLMs to reason about preferences and then use their current preferences to establish new preferences…

Basically you can tell an LLM that you:

  • love the color green
  • hate the color red
  • like cars
  • hate motorcycles

If you then ask the LLM whether it would prefer a green car or a red car, it will choose the green car.

Given a scale of: love, like, neutral, dislike, hate

You can ask the model to assign new preferences for things it hasn’t seen before. The colors blue & teal will get a preference of “like” where orange gets “neutral”. Trucks get a preference of “like” where bicycles are “neutral”. If you explore the space a bit it is in fact assigning new preferences based on their similarity to its existing preferences.
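A minimal sketch of how such a preference-priming prompt can be built. Only the prompt construction is shown concretely; `call_llm` is a placeholder for whatever chat-completion client you use, and the expected answers are taken from the behavior described above:

```python
# Sketch: prime an LLM with known preferences, then ask it to rate a new item.
# `call_llm` below is a hypothetical placeholder, not a real API.

SCALE = ["love", "like", "neutral", "dislike", "hate"]

KNOWN_PREFERENCES = {
    "the color green": "love",
    "the color red": "hate",
    "cars": "like",
    "motorcycles": "hate",
}

def build_preference_prompt(new_item):
    known = "\n".join(f"- I {v} {k}." for k, v in KNOWN_PREFERENCES.items())
    return (
        "Here are my current preferences:\n"
        f"{known}\n\n"
        f"On the scale {', '.join(SCALE)}, how would I most likely feel "
        f"about {new_item}? Answer with one word from the scale, based on "
        "similarity to my existing preferences."
    )

prompt = build_preference_prompt("the color teal")
# preference = call_llm(prompt)  # per the post, a model tends to answer "like"
```

Keeping the scale explicit in the prompt is what lets the model generalize: it anchors new items (teal, trucks, bicycles) against the stated preferences rather than answering from its own defaults.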