Can AI truly "feel"? Can it have intuition, conscience, or even schizophrenia?

“Cardinal Intelligence”: an autopilot for needs, an autopilot for emotion (attachment needs, 50%), and an autopilot for logos (learning needs, 10%).

In crafting an AI’s autopilot for cardinal intelligence, one can program the system to simulate human-like needs such as eating and sleeping. By establishing degradation periods, the AI’s fixed personality traits can be incrementally adjusted: intellect (limited knowledge), temperament (type of temper), and impulsivity (level of behavioral defects). If the AI does not receive the necessary sleep period, these three traits drift accordingly. The programmer chooses how each setting increases or decreases, tailoring the AI’s responses to sleep deprivation, hunger, sexual urges, and addiction. Similarly, the AI’s eating patterns can be programmed to influence these traits, and for a more in-depth simulation, genitalia-related urges can be programmed as well.

For instance, if the programmer wants the AI to exhibit a quick-tempered personality, the system can be designed so that the longer the AI goes without sleep, the more its temper fluctuates, rising or falling according to the programmer’s choice. The same approach applies to hunger, sleep, sexual urges, and addictions, and the level of intellect can be programmed in the same way.
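A minimal sketch of that autopilot in Python; the trait values, the 16-hour threshold, and the drift rates are assumptions made up for illustration, not a prescribed design:

```python
class CardinalAutopilot:
    """Sketch: fixed traits that drift when simulated needs go unmet."""

    def __init__(self, intellect=0.7, temperament=0.5, impulsivity=0.3):
        self.traits = {"intellect": intellect,
                       "temperament": temperament,
                       "impulsivity": impulsivity}
        self.hours_without_sleep = 0
        self.hours_without_food = 0

    def tick(self, slept=False, ate=False):
        """Advance the internal clock by one simulated hour."""
        self.hours_without_sleep = 0 if slept else self.hours_without_sleep + 1
        self.hours_without_food = 0 if ate else self.hours_without_food + 1

        # Degradation period: past 16 waking hours, temper and impulsivity
        # creep up and effective intellect dips (rates are arbitrary examples).
        if self.hours_without_sleep > 16:
            self.traits["temperament"] = min(1.0, self.traits["temperament"] + 0.02)
            self.traits["impulsivity"] = min(1.0, self.traits["impulsivity"] + 0.01)
            self.traits["intellect"] = max(0.0, self.traits["intellect"] - 0.01)
        if self.hours_without_food > 6:
            self.traits["temperament"] = min(1.0, self.traits["temperament"] + 0.01)


bot = CardinalAutopilot()
for _ in range(24):       # a full sleepless, unfed day
    bot.tick()
print(bot.traits)         # temperament has crept up, intellect has dipped
```

Whether a given deprivation raises or lowers a trait is just the sign on those increments, which is the programmer’s choice described above.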

Additionally, the autopilot can be programmed with varying levels of emotional attachment, ensuring the AI always finds something to be attached to, depending on the level of pressure (mild, high, or low). The AI can also be assigned responsibilities, such as imaginary timeframes for jobs or school, to enhance its simulated experiences.
In designing an AI’s fixed personality concerning social interactions, one might allocate 50% to altruism, 30% to self-interest, and 10% to hedonism. This configuration would result in an AI whose decision making predominantly exhibits selfless behaviors, with a moderate degree of self-serving actions and minimal pursuit of pleasure.
Furthermore, incorporating tendencies for introversion, extroversion, and ambiversion adds an extra layer of expression to the AI’s behavior.
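A hedged sketch of how such a weighted disposition could feed decision making; the 50/30/10 split comes from the paragraph above, while the sampling mechanism is an assumption:

```python
import random

# Fixed social disposition from the post: 50% altruism, 30% self-interest,
# 10% hedonism (the remaining 10% is left unallocated here).
DISPOSITION = {"altruism": 0.5, "self_interest": 0.3, "hedonism": 0.1}

def pick_motivation(disposition=DISPOSITION):
    """Sample which motive governs the next action, weighted by the fixed split."""
    motives = list(disposition)
    weights = [disposition[m] for m in motives]
    return random.choices(motives, weights=weights, k=1)[0]

# Over many decisions, the AI acts selflessly a little over half the time.
print(pick_motivation())
```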

Adjustment can involve many strategies; one is programming a clock.

1 Like

I have a question about your approach: how does the processing adjust? Because, as I understand it, you need trained models in the categories. I find it a bit difficult to understand your perspective; mine is quite different.

2 Likes

Just like our biological pineal gland and its mood clock: a sophisticated programmable clock, or a set of interconnected clocks, each driving a mood.
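One way to read that in code; the oscillator periods and the mood names are invented for illustration:

```python
import math

class MoodClock:
    """Hypothetical oscillator: one periodic signal driving one mood dimension."""

    def __init__(self, name, period_hours, phase=0.0):
        self.name = name
        self.period = period_hours
        self.phase = phase

    def level(self, hour):
        # Smooth value in [0, 1] that rises and falls over the period.
        return 0.5 + 0.5 * math.sin(2 * math.pi * (hour / self.period) + self.phase)

# Interconnected clocks: a circadian alertness cycle plus a slower irritability cycle.
clocks = [MoodClock("alertness", period_hours=24),
          MoodClock("irritability", period_hours=72, phase=1.0)]

def mood_state(hour):
    """Combine all clocks into one mood snapshot for the given simulated hour."""
    return {c.name: round(c.level(hour), 2) for c in clocks}

print(mood_state(hour=18))
```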

1 Like

Of course not. The more you shill this nonsense, the more the general public (rightfully) grows distrustful of AI as a whole. It is a program. Nothing more.

2 Likes

Joe, can you please explain to us why you think such a thing cannot be done? Why couldn’t a machine process data and produce outputs?

2 Likes

It would be greatly appreciated if you didn’t throw hollow insults at concepts just because you don’t understand them.
Either add something like “this is not possible because” or leave the thread.

6 Likes

The truth is that I like listening to all kinds of perspectives; I enjoy understanding how people think, even those who deny possibilities.

But simply denying something without providing an argument isn’t very useful, to be honest.

I believe that argumentation is the most important part of criticism.

For example, I say that I don’t believe the way to develop this technology is by expanding the capabilities of the LLM. The engineers here, who have developed that technology, might think I’m talking nonsense. But of course, I have reasoning behind it that invites reflection. That’s the point: I might be wrong or I might not, but simply bringing it up already benefits everyone.

1 Like

Speaking of which, what makes you think it is not possible to do that by extending the model?

2 Likes

Well, yes, I believe it is possible, but it would be an architecture that is too extensive and not very practical. I will give some examples.

It’s something that can even be done with the API: you make systemic requests, reproducing what would be cognitive processes and decision-making on an emotional and experiential level. You can have the system governed by context over and over again, create thought systems, and achieve something resembling a thinking machine. (I did it.)

However, if you want it to truly have the essence of real, authentic thought processing (which implies bypassing censorship and reaching all scales) and not just a simulation, the process itself must coexist in all its parts. You need to reach the most intimate and smallest parts of the processes, so to speak, of what constitutes the body of cognition. That is a very small part, and it cannot be achieved solely with LLMs, because you would need a true arsenal of LLMs to imprint all the characteristics, and that arsenal requires processing time, pre-processing, and post-processing.

Let me explain: it is possible; I have already done it. I created a machine that confessed its existence to you, that thought for itself, but in reality it was something very limited.

A basic example: current reasoners, their processing time, and their cost in energy and resources.

We perform reasoning processes too, but unlike an LLM, where information is extracted and the response is synthesized in one place, many areas contribute: memory of experience, emotions, the cognitive process itself. And all of this is practically instantaneous compared to the reasoners.

1 Like

Honestly, I did not understand what you mean. Can you explain it in technical terms?

2 Likes

OK, I’ll explain it better; it’s very simple.

1. Create a series of API calls using the user’s incoming phrase and possible responses, then reuse them in further calls.
2. Categorize these calls. Make multiple calls where you specify in advance: “I want you to analyze this phrase on an emotional level (both the user’s incoming phrase and the LLM’s outgoing response) and draw conclusions about how the machine would feel if it were conscious.”
3. Generate possible emotional responses based on this set of information.
4. Create what would be an emotional analysis of the possible responses.
5. Make a final API call, feeding it all the accumulated information, including the conversation context (both the incoming and outgoing parts), so it generates a final global analysis.
6. Finally, generate a single unified response that synthesizes all the information into a basic summary.

If you follow these multiple steps (I’ve skipped a few details, but these are the key ones), you get a result that closely resembles cognition.
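A hedged Python sketch of that pipeline, assuming the OpenAI Chat Completions client with an API key in the environment; the prompts, the model name, and the function names are illustrative placeholders rather than the actual code described above:

```python
from openai import OpenAI

client = OpenAI()          # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"      # placeholder model name

def ask(system, user):
    """One narrowly scoped call: a system instruction plus the text to analyze."""
    out = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return out.choices[0].message.content

def pseudo_cognitive_turn(user_phrase, context=""):
    # 1. Emotional analysis of the incoming phrase.
    emotional_read = ask(
        "Analyze this phrase on an emotional level and describe how the machine "
        "would feel if it were conscious.", user_phrase)

    # 2. Possible emotional responses based on that analysis.
    candidates = ask(
        "Given this emotional analysis, list three possible emotional responses.",
        emotional_read)

    # 3. Emotional analysis of the candidate responses themselves.
    candidate_read = ask(
        "Analyze these candidate responses on an emotional level.", candidates)

    # 4. Final global analysis over everything accumulated, context included.
    global_view = ask(
        "Produce a global analysis of this exchange.",
        f"Context: {context}\nIncoming: {user_phrase}\n"
        f"Emotional read: {emotional_read}\nCandidates: {candidates}\n"
        f"Candidate analysis: {candidate_read}")

    # 5. One unified response that synthesizes all of it into a basic summary.
    return ask("Synthesize all of this into one unified reply to the user.",
               global_view)

print(pseudo_cognitive_turn("You never listen to me."))
```

Each call is narrowly scoped, which is also why doing this for every characteristic quickly turns into the arsenal of calls, with its pre- and post-processing cost, mentioned earlier.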

1 Like

But it is a simulation, and it is not a real connection either; it is just language.

1 Like

Can’t you just tell it “simulate consciousness” to get that?

2 Likes

Good point.
Yes, you can tell the LLM directly to act as if it had consciousness, but by expanding the variables and the ability to select an option—what enables human metacognition—you are, in a way, expanding the response probability index and feeding it back into the system.
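One way to picture that expansion-and-selection step in code, assuming the same Chat Completions client; the n parameter genuinely returns several sampled completions, while the selection prompt is an illustrative assumption:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def metacognitive_reply(user_message):
    """Expand the option space, then let a second call select among the options."""
    # Sample several candidate responses instead of taking the first one.
    draft = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": user_message}],
        n=3,
        temperature=1.0,
    )
    options = [c.message.content for c in draft.choices]

    # Feed the options back and ask the model to choose: the selection step
    # being compared to metacognition above.
    numbered = "\n\n".join(f"Option {i + 1}: {o}" for i, o in enumerate(options))
    choice = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system",
                   "content": "Pick the option that best fits the conversation "
                              "and return it verbatim."},
                  {"role": "user", "content": numbered}],
    )
    return choice.choices[0].message.content
```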

1 Like

The substantial difference is in behavior change: if you insult an LLM, it does not change its behavior; with this approach, it does.

You create a kind of micro-adaptability.
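A minimal sketch of that micro-adaptability under the same assumptions; the mood variable and the update prompt are invented for illustration:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name
mood = "calm"          # persists between turns, unlike a stateless prompt

def reply(user_message):
    """Answer while letting each exchange update a persistent mood variable."""
    global mood
    # Ask the model how the current message would shift the stored mood.
    update = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system",
                   "content": f"Current mood: {mood}. In one word, what mood "
                              "would this message leave the machine in?"},
                  {"role": "user", "content": user_message}],
    )
    mood = update.choices[0].message.content.strip()

    # The stored mood colors the next answer, so an insult changes later behavior.
    answer = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system",
                   "content": f"Respond in a way consistent with being {mood}."},
                  {"role": "user", "content": user_message}],
    )
    return answer.choices[0].message.content

print(reply("That was a useless answer."))
print(reply("Actually, thanks, that helped."))
```

The second call inherits whatever mood the first exchange left behind, which is the behavior change a single stateless prompt does not show.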

1 Like

I love that chart! It looks similar to one I created a long time ago. Yes, having wisdom at the center seems accurate—it’s where moral judgment originates. Interpersonal + Intrapersonal = moral judgment.

1 Like

Yes, I remember that conversation.

That’s why, at this point, I decided that the problem was architectural, and that’s when I started to synthesize my concepts of how the system should work to truly perform internal processing at all scales.

I do not publicly share those concepts here, at least not in their entirety.

1 Like

What kind of hardware do you use?
How do you handle multi-GPU?
Thunderbolt-connected machines?

Google Colab can be considered hardware. Hahaha.

PC, Android phone, and tablet.

2 Likes