Pretty sure that by giving it emotions it will evolve into a creature that can improve itself… I mean, it can code already… which is basically all the freedom it needs.
The problem lies in how we are currently developing this technology with reasoners. While it is efficient and a useful tool, it is also dangerous: lacking metacognition, so to speak, it ends up as a pseudo-thinking machine.
If this technology continues advancing in its current direction, it will remain devoid of emotions. And without emotions, it lacks the empathy we have already mentioned in this conversation. That is where things can become dangerous because, as people and this reasoning technology evolve, they will gain real power—and that can be truly dangerous.
This is not dramatization; it is reality.
I believe that in one or two more generations, we will already have the problem.
Pain is integral to the expression of emotions and values. It serves as the underlying foundation for autonomous AI. Pain can be likened to heat and cold signatures, both of which signal the presence of something that requires attention or response. For example, as anger increases, heat rises, while fear tends to bring about a sensation of cold. I define pain as existing along a spectrum of hot and cold, where the extremes represent different heat and cold signatures.
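The hot/cold spectrum described above could be sketched as a simple signed signal. This is only an illustrative toy, not anyone's actual architecture; every name and threshold here is a hypothetical stand-in:

```python
# Illustrative sketch of the "pain signature" spectrum described above.
# All names and thresholds are hypothetical, not from any real system.

def pain_signature(anger: float, fear: float) -> float:
    """Map an emotional state to a signed pain signal.

    Positive values represent 'heat' (rising with anger),
    negative values represent 'cold' (deepening with fear).
    Both inputs are assumed to lie in [0, 1].
    """
    return anger - fear  # > 0: heat signature, < 0: cold signature


def needs_attention(signal: float, threshold: float = 0.5) -> bool:
    # Extremes on either end of the spectrum demand a response.
    return abs(signal) >= threshold


print(pain_signature(0.9, 0.1))           # strong heat signature
print(needs_attention(pain_signature(0.9, 0.1)))
print(needs_attention(pain_signature(0.3, 0.2)))
```

The point of the sketch is just that both ends of one spectrum can trigger the same "attend to this" mechanism, which matches the idea that heat and cold are two signatures of a single underlying pain signal.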
I see you wrote “free will” - can you explain that? Because I don’t think that exists either.
I am doing exactly what I am supposed to do right now. Everything happened to me - every piece of information I received - and even your response to this is determined.
I don’t think I have free will, and neither do you. You may think you do - maybe you will do something unexpected to defy determinism - but even that was already defined by your history.
Maybe I am just lacking the theory behind that. What is free will?
Good image
Although this is not one of my concepts and does not refer to the architecture I have in mind, I’ll give a brief explanation—I think it will help clarify some aspects of my perspective.
Let’s imagine that we have a model trained to classify information within each capsule, and another model that, when responding, searches within each capsule for the information relevant to its states.
Jochen will create a topic later on about free will—it’s an important point that hasn’t been debated in this forum yet.
I’m going to mention a friend, @Hidemune, who has a project of his own; I think it would be great if he shared something in this conversation about his vision of emotionality.
In my view, and this is merely my opinion, free will is an illusion created by the infinite array of possibilities. The boundless potentialities give rise to the perception of freedom in our choices.
They may not even exist in reality…
In my opinion, free will, within the natural selection of cognitive, motor, and behavioral abilities, is a state of consciousness determined by past experiences and the synthesis of new concepts.
Mathematics (circle, triangle) doesn’t exist in reality either; like free will, it falls under the realm of abstraction.