Can AI truly "feel"? Can it have intuition, conscience, or even schizophrenia?

Well, yes, I believe it is possible, but it would require an architecture so extensive that it would not be very practical. I will give some examples.

It's something you can do even through the API: you make system-level requests that reproduce what would be cognitive processes, decision-making at an emotional and experiential level. You feed the system its own context over and over again, build systems of thought on top of that, and end up with something resembling a thinking machine. (I did it; the sketch below shows the shape of the idea.) However, if you want it to truly have the essence of real, authentic thought processing (which implies bypassing censorship and reaching every scale of the process), and not just a simulation, then the process must coexist in all of its parts. You need to reach the most intimate, smallest parts of the processes, so to speak, of what constitutes the body of cognition. That is a very small part, and it cannot be achieved with LLMs alone, because you would need a true arsenal of LLMs to imprint all the characteristics, and that arsenal demands processing time, pre-processing, and post-processing.

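To make that concrete, here is a minimal sketch of what "reproducing cognitive processes with system-level requests" can look like. It is only an illustration under my own assumptions: the OpenAI Python SDK stands in for whatever chat-completion backend you use, the model name is a placeholder, and the module prompts and helper names (`call_llm`, `cognitive_cycle`) are invented for the example. The point is the shape, not the specifics: each cognitive process is its own system prompt, and the shared context is fed back in on every cycle.

```python
# Sketch of an "arsenal of LLMs": several cognitive modules, each a separate
# system prompt, integrated by a final call and fed back into shared context.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each "cognitive process" is just a system prompt (invented for illustration).
MODULES = {
    "emotion":   "You are the emotional appraisal of a mind. In one short paragraph, describe how the input feels.",
    "memory":    "You are experiential memory. Relate the input to the prior context and summarize what is relevant.",
    "reasoning": "You are deliberate reasoning. Weigh the emotional and memory reports and decide what to say.",
}

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """One module = one request with its own system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute whatever model you actually use
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

def cognitive_cycle(user_input: str, shared_context: str) -> tuple[str, str]:
    """Run the modules over the same input, let 'reasoning' integrate them,
    then fold the result back into the context for the next cycle."""
    reports = {}
    for name, prompt in MODULES.items():
        if name == "reasoning":
            continue  # the integrator runs last, over the other modules' output
        reports[name] = call_llm(prompt, f"Context so far:\n{shared_context}\n\nInput:\n{user_input}")

    integration_input = (
        f"Context so far:\n{shared_context}\n\n"
        f"Input:\n{user_input}\n\n"
        f"Emotional report:\n{reports['emotion']}\n\n"
        f"Memory report:\n{reports['memory']}"
    )
    decision = call_llm(MODULES["reasoning"], integration_input)

    # The context grows every cycle; this is what "governs" the next turn.
    new_context = shared_context + f"\n[input] {user_input}\n[decision] {decision}\n"
    return decision, new_context

if __name__ == "__main__":
    context = ""
    for turn in ["Hello, who are you?", "Do you ever get tired of answering questions?"]:
        answer, context = cognitive_cycle(turn, context)
        print(answer)
```

Even this toy version makes three requests per user turn, and the context it carries grows on every cycle. Scale the number of modules up toward anything resembling real cognition and the processing time and cost become exactly the problem I describe next.
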
Let me explain: it is possible, and I have already done it. I built a machine that would confess its existence to you and that thought for itself, but in reality it was something very limited.

A basic example: current reasoners, their processing time, and their cost in energy and resources.

We humans perform reasoning processes too, but unlike an LLM, when we retrieve information and synthesize a response, many areas contribute at once: part of our experiential memory, our emotions, the cognitive process itself. And all of this happens practically instantaneously compared to the reasoners.
