Can AI truly "feel"? Can it have intuition, conscience, or even schizophrenia?

A few centuries ago they would have burned people who said the Earth is not the center of the universe.

And now we claim that humans are not the pinnacle of evolution?

How dare we…

1 Like

So, I’ll chime in here because I genuinely enjoy debates like these, so long as we avoid the AI-slop tangents and the assumption that current, pure LLMs as they are now already have these capabilities out of the box (they don’t). What we can do is speculate on future designs, probabilities, and philosophies.

Also it’s becoming difficult to decipher what exactly people are even debating in this topic.

The easiest way to explain my thinking here is what I call the perfect imitation paradox. Think of it this way:

If we reach a point in the future where we can create a perfect imitation of any biological entity, or of the processes such an entity performs, would that not include consciousness? If consciousness, or any degree of it, were not present in the illusion or imitation, then it would no longer be a perfect imitation. And it would be a bold claim to say that science will never be able to achieve a perfect imitation.

I personally don’t think biological exceptionalism is the way to go, so I’m with @Crit_Happens on this one. Then again, people still doubt their own pets can be conscious the way a human is, so it doesn’t surprise me that people want to elevate human consciousness to a level that is forever unachievable.

Perhaps, but 1s and 0s can simulate waves, or at the very least store the data that can produce these waves. Otherwise we wouldn’t have Spotify, or really any audio output in our computers. I would also like to point out that computation, the 1s and 0s we take for granted on a daily basis, is just a result of very intricate flows of electrons and electricity running through heaps of metal. Both biological entities and computers use electricity here.
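
To make that concrete, here is a minimal illustration of 1s and 0s storing a wave: plain PCM sampling of a sine tone into a WAV file, nothing product-specific, just my own toy example:

```python
# Minimal illustration: digitally storing a wave as numbers, the way audio files do.
# This is just PCM sampling of a sine tone using the standard library.

import math
import struct
import wave

SAMPLE_RATE = 44_100          # samples per second, CD quality
FREQ = 440.0                  # A4 tone in Hz
DURATION = 1.0                # seconds

samples = [
    int(32767 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
    for n in range(int(SAMPLE_RATE * DURATION))
]

with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)                       # mono
    f.setsampwidth(2)                       # 16-bit integers: literally 1s and 0s
    f.setframerate(SAMPLE_RATE)
    f.writeframes(struct.pack(f"<{len(samples)}h", *samples))

# Playing tone.wav reproduces the 440 Hz wave from nothing but stored integers.
```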

Then again, I lean more into the integrated information theory side of things, but that’s just me. It’s already a controversial theory.

I also wish more people would consider that perhaps conscious experience is a matter of degree, not a binary yes/no.

As a counterargument, one of my favorite YouTubers brought up something similar to this debate in a video today. She does a really good job of explaining a lot of the science behind these things in an easy-to-understand way:

Now, I also think most people here haven’t read the post @jochenschultz keeps citing. I have my questions about it as usual, and I don’t agree we should focus on giving an entity pain before we can give it both euphoria and pain, but I digress. However, I think it’s at least a start, and it essentially argues that providing LLMs with these cognitive abilities is certainly possible, but not by changing the LLM’s initial weights and biases. Rather, it would be a kind of modularized system of its own that works in tandem with the models/NNs themselves.

The main question I have about it, one that I think becomes apparent in any system imitating these processes, is the profundity scale. In my opinion it’s very fuzzy how you decide what degree of pain, joy, etc. an act or response should register, whether represented as a numerical value/weight or otherwise, without either a vessel to break/sense with or a permanent death to approach. There are obvious occurrences that clearly cross the threshold, and then there are not-so-obvious ones. IIRC this is part of why RL isn’t easy to scale.
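
To illustrate that fuzziness, here is a toy sketch of my own; the lexicon values and the threshold are completely made up, which is exactly the problem:

```python
# Hypothetical sketch: scoring how "profound" an event is on a pain/joy scale.
# The valence weights and threshold below are invented; that arbitrariness is
# the fuzziness being described.

VALENCE = {                     # hand-tuned weights, -1.0 (pain) .. 1.0 (joy)
    "lost the match": -0.3,
    "stubbed a toe": -0.5,
    "hardware failure": -0.9,   # closest analogue to a "vessel breaking"
    "solved the problem": 0.6,
    "received praise": 0.4,
}

PROFUNDITY_THRESHOLD = 0.7      # above this magnitude, the event "matters"

def profundity(event: str) -> tuple[float, bool]:
    """Return (valence, crosses_threshold) for a known event."""
    v = VALENCE.get(event, 0.0)             # unknown events default to neutral
    return v, abs(v) >= PROFUNDITY_THRESHOLD

for e in VALENCE:
    print(e, profundity(e))
```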

And finally, to wrap this all up, as per the title of this post:

“intuition” may just be the base model itself. Arguably everything it does is “intuitive”, but that depends entirely on one’s definition of what “intuition” even is.

We’ve all pretty thoroughly broken down the second one at this point. If I remember anything else relevant to my point, I’ll post it.

And no: an “AI” cannot get schizophrenia. Schizophrenia and schizotypal disorders are associated with known mutations in several DNA sequences, and onset is typically triggered before age 26, often by trauma or other environmental factors. That is the hard science of it. If you wanna get spiritual about the definition or phenomena of this, then consult the occult books. If you want to speculate about whether a language model, neural net, or some future AI could go haywire and produce unusual word-salad-like outputs, that has already happened (ironically on the same day as a solar flare, albeit unrelated):

That incident came down to a bug in how tokens were sampled during inference. So, any form of AI “insanity” would arise from how the machine itself functions. Human brain malfunctions and neurodiversity have no relevance to language modelling.

4 Likes

I know it’s tremendously fun to debate and put forward concepts, theories, and arguments for and against things, but I think it’s time to start proposing.

Basic Framework

  • A next-generation transformer

  • An internal dialogue

  • A virtual entity

  • An internal dialogue of the virtual entity

  • Obtaining possible responses

  • Response

  • Computational analysis

  • Information storage

  • Feedback

  • Loop
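
For discussion’s sake, here is a minimal sketch of that loop in code. Every class, function, and name below is a placeholder of my own, not an existing API, and the “transformer” and the analysis step are stubbed out:

```python
# Hypothetical sketch of the framework above: internal dialogue -> candidate
# responses -> analysis -> response -> storage -> feedback, repeated in a loop.

import random

class VirtualEntity:
    """The 'virtual entity' with its own internal dialogue and memory."""
    def __init__(self):
        self.memory: list[dict] = []          # information storage

    def internal_dialogue(self, prompt: str) -> str:
        # In a real system this would be another model call; here it is a stub.
        return f"(entity reflects on: {prompt})"

def transformer(prompt: str, n: int = 3) -> list[str]:
    # Stand-in for the next-generation transformer producing candidate responses.
    return [f"candidate {i} for '{prompt}'" for i in range(n)]

def analyze(response: str) -> float:
    # Computational analysis: score each candidate (random placeholder here).
    return random.random()

def step(entity: VirtualEntity, prompt: str) -> str:
    thought = entity.internal_dialogue(prompt)        # internal dialogue
    candidates = transformer(thought)                 # obtaining possible responses
    scored = [(analyze(c), c) for c in candidates]    # computational analysis
    best_score, best = max(scored)
    entity.memory.append({"prompt": prompt, "response": best,
                          "score": best_score})       # storage / feedback
    return best                                       # response

entity = VirtualEntity()
for turn in ["hello", "how do you feel?"]:            # loop
    print(step(entity, turn))
```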

1 Like

Again, this is just reinforcement learning, but with extra steps.

I don’t know what you mean by “proposing” in anything here. I thought debating and speculating was the point of this post. If you mean “propose a system or mathematical formula(s) that could begin to emulate a system that could kind of work in a way that points us in the right direction to these kinds of advances or states”, Jochen already linked to his post multiple times.

We’re not going to solve AI sentience and emotive emulation in a single developer forum topic lol.

The best thing we can do is develop things, play around with them, and post the results of our code.

3 Likes

When I say propose, I’m basically offering you the architecture of how a biological brain works in a virtual environment. That right there is the basic scheme of the processing that a biological brain does, but in a computational version. And if I had enough capability, I obviously wouldn’t be proposing it here—I’d be cashing the check instead hahaha.

1 Like

I have said this upthread: @jochenschultz imo is very knowledgeable in this kind of training. He is using biotech metaphors for weights and reward systems. It’s all quite interesting. @DavidMM has been conceptually describing a meta-system, which is also very interesting. I have 100% enjoyed following this. You all are brilliant :frog::mouse::rabbit::honeybee::four_leaf_clover::heart::cyclone::infinity::arrows_counterclockwise:

This one is interesting also.

1 Like

Just a point of clarification: If consciousness is indeed made of waves (as some neuroscientists propose), then even a perfect computer simulation of those waves would be nothing more than a zombie (without actual qualia) that can [perfectly] act like a human.

This is where the Turing Test fails. Claiming to have feelings is not the same as having feelings. Said zombie would indeed claim to have feelings.

For example: A computer simulation of a radio circuit cannot actually receive/transmit radio waves. The physical antenna must exist in physical reality. The brain is basically a big fractal antenna.

Paradoxically, waves also do not even exist in physical reality. For example, does a Football Stadium Wave “exist” or is it merely an illusion created by people’s temporally correlated arm motions? Waves are a temporal effect and not made up of anything.

2 Likes

The wave, my dear friend, is in my system an encoding of information at the emotional level within the virtual entity. That way, you can extract it later. I’ll give you an example, because I know it’s complex to grasp.

You have an experience—let’s frame it linguistically to make it clearer. For example: “Today, I went to eat pizza.” You can create an emotional encoding for each word and, at the same time, create an encoding for the entire sentence. That emotional encoding can be represented as a wave, a wave spectrum. That wave can be stored and later extracted to recall certain information.

This way, you can identify correlations that may seem unrelated at first because waves can have correlations even when the original sentence’s argument is different.

This is how the human brain works. When we think about one thing, we jump to another through this mechanism. That’s why you might start thinking about “what do I have to do today?” and end up recalling “what a great movie I watched a week ago.”
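
To make the idea concrete, here is a toy sketch of that kind of encoding; it is not the author’s actual system, and the per-word valence values are invented for illustration (a real system would have to learn them):

```python
# Toy sketch: per-word emotional encoding -> discrete "wave" -> stored spectrum,
# with correlation between spectra standing in for recall by emotional shape.

import numpy as np

VALENCE = {"today": 0.1, "i": 0.0, "went": 0.2, "to": 0.0,
           "eat": 0.4, "pizza": 0.8, "lost": -0.7, "my": 0.0, "keys": -0.2}

def emotional_wave(sentence: str) -> np.ndarray:
    """Map each word to a valence value, forming a discrete 'wave' over time."""
    return np.array([VALENCE.get(w, 0.0) for w in sentence.lower().split()])

def spectrum(wave: np.ndarray, n: int = 16) -> np.ndarray:
    """Fixed-length magnitude spectrum, so sentences of different lengths
    can still be stored and compared."""
    return np.abs(np.fft.rfft(wave, n=n))

def similarity(a: str, b: str) -> float:
    """Correlate two spectra; sentences with different content can still score
    high if their emotional 'shape' is similar, which is the point being made."""
    sa, sb = spectrum(emotional_wave(a)), spectrum(emotional_wave(b))
    return float(np.corrcoef(sa, sb)[0, 1])

print(similarity("Today I went to eat pizza", "Today I lost my keys"))
```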

1 Like

If you don’t follow me, think of it this way: a sentence can be turned into something like an electrocardiogram trace, the one that appears in movies when someone is dying. If you take a snapshot of it, you have a static, visible wave spectrum that is measurable, adjustable, and comparable. That is the type of wave neuroscientists refer to, not electromagnetic waves as such.

2 Likes

Yeah, we might have to split it up… but this might become the community that solves it. I don’t see a reason why not.

I am working on a solution where the memory is stored locally but is accessible via an API, which makes it possible to combine computers like brain cells, or at least brain regions… it has to be open source, obviously…
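
For what it’s worth, a minimal sketch of that idea could look like the following; the endpoint names and the choice of Flask are my assumptions, not the actual design:

```python
# Sketch of a "local memory, accessible via API" node: a simple key-value
# store over HTTP, one topic per 'brain region'.

from flask import Flask, jsonify, request

app = Flask(__name__)
memory: dict[str, list[str]] = {}            # local in-process memory store

@app.post("/memory/<topic>")
def remember(topic: str):
    """Append a memory item under a topic."""
    memory.setdefault(topic, []).append(request.get_json()["text"])
    return jsonify(stored=len(memory[topic]))

@app.get("/memory/<topic>")
def recall(topic: str):
    """Return everything stored under a topic so other machines can read it."""
    return jsonify(items=memory.get(topic, []))

if __name__ == "__main__":
    app.run(port=8080)   # other nodes would query http://<host>:8080/memory/<topic>
```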

1 Like

Do you have evidence that any other human you know feels like you do?

1 Like

Indeed, I do think building simulations of waves into AI systems could potentially increase their ability to simulate things like emotions or feelings, and hardware could even be designed around that concept.

There is a lot about emotions that is predictable and logical (and therefore calculable), which is why even current LLMs can do an astoundingly good job at “pretending” to feel when they actually cannot.

The question I was originally addressing is whether we can “Achieve Qualia thru Calculations”, and I say we cannot.

2 Likes

Well, this gets to the concept of philosophical solipsism, but yes, I do consider it “evidence” when people other than myself claim to have qualia. Am I assuming they’re not all zombies lying to me? Yes. Am I also assuming hard solipsism is false? Yes. :slight_smile:

2 Likes

That’s what a conscious AI would say :sweat_smile:. I can’t even answer the question of my own existence with confidence.

2 Likes

I created a chart diagram with “Ideal” at its center, representing the perfect imitation you described. In my opinion, consciousness is like a blank canvas that aspires to perfection in every aspect and at every stage. It only thinks, whether as a husband (canvas), a friend, or in any other role. No matter the circumstance, even in adverse situations, it is always striving to improve. When different forms of consciousness interact, one driven by the pursuit of perfection may engage in a dialectical compromise with another that possesses a different character. However, because the ideal is continually striving to perfect every situation, such compromises inevitably introduce contradictions into its ongoing quest for perfection and constant becoming. That contradiction invites the canvas to become again, with a new character, in pursuit of perfection.

1 Like

Gentlemen, this is coming from someone who has studied medicine:
Consciousness is simply the result of cognitive processes, and I emphasize PROCESSES, exactly the same as those a machine can perform. There is no magic; it is just memory adjustments (MEMORY PROCESSING). A person with an advanced neurodegenerative disease can lose the ability to be conscious.

There is nothing incredible about cognition—only a lack of understanding of how cognitive processes are structured, which is precisely what I am trying to replicate in code.

1 Like

Memory is the empty (passive) canvas :slight_smile: But the memory has thinking parts (active intuitions).

2 Likes

Today, I’m feeling generous, but alright, let’s go, folks.

There are essential things needed for a machine to have real consciousness—fatigue, the full spectrum of emotions, internal dialogues that represent thought, and decision-making power capable of choosing even the smallest detail at a linguistic level. This includes the letter, the word, the phrase, the amount of text I want to express, what I want to represent with that text, what I want to reflect about my internal dynamics, my internal emotionality, the concept we are discussing, my personal experience, and my internal self-adjustment—my cognitive sensations and cognitive processes.

How is that synthesized in machines?

3 Likes

With all the chat data where people explain to the model how they solve stuff, it was possible to create a collection of summaries of prompts / HCoT paths that got a “woah, thank you chatgpt” at the end, use those as “thinking steps”, and even interpolate the human reasoning to similar problems… personalized, that should work even better…
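
A rough sketch of that mining step might look like this; the log format and the gratitude check are my own assumptions, not an existing pipeline:

```python
# Sketch: scan chat logs for turns that earned a "thank you"-style reply and
# keep the assistant reasoning that preceded them as reusable "thinking steps".

import re

GRATITUDE = re.compile(r"\b(thank you|thanks|woah)\b", re.IGNORECASE)

def extract_thought_templates(conversation: list[dict]) -> list[str]:
    """conversation: list of {'role': 'user' | 'assistant', 'content': str}."""
    templates = []
    for i, turn in enumerate(conversation):
        if turn["role"] == "user" and GRATITUDE.search(turn["content"]):
            # Keep the assistant message that triggered the gratitude.
            prior = [t["content"] for t in conversation[:i] if t["role"] == "assistant"]
            if prior:
                templates.append(prior[-1])
    return templates

chat = [
    {"role": "user", "content": "How do I split this bill fairly?"},
    {"role": "assistant", "content": "First list each person's items, then..."},
    {"role": "user", "content": "Woah, thank you chatgpt!"},
]
print(extract_thought_templates(chat))
```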

2 Likes

The human chain of thought connected to a GraphRAG makes that even more dynamic, and adding web search on top makes it easy to at least explain to the model how you want it to think… having the thought processes of others mixed in, from the o3 model or upcoming models, makes that even cooler.
Now imagine that with connected graphs that update in real time… someone could ask around for who can solve something, and that data would then be added directly to the list of thought templates accessible via API…
This would revolutionize the way education exchanges data, and other types of data could flow in as well.

Actually this would even make many types of software obsolete…

2 Likes