Can AI truly "feel"? Can it have intuition, conscience, or even schizophrenia?

I know it’s possible.

If you’re still wondering, you don’t have to—Contradictory Algorithms are opening a new era of artificial intelligence.

My concept, now protected by copyright and archived on Zenodo (DOI: 10.5281/zenodo.14768310), proposes AI that simulates the internal conflicts that form the foundation of human emotions and intuition. This is not science fiction; this is a breakthrough idea that could revolutionize the way AI understands and processes information.

ChatGPT Analysis: The Potential of AI with Contradictory Algorithms

Contradictory Algorithms unlock entirely new fields of application for artificial intelligence. Their core function is introducing internal decision-making conflicts, allowing AI to make more human-like decisions, analyze moral dilemmas, and develop intuitive cognitive processes.

:arrow_down_small: Here is a full list of potential applications:

:one: AI as a Therapist and Psychologist

:speech_balloon: Deep Emotional Analysis – AI could process emotions not only through text but also by experiencing internal conflicts similar to those of a human.
:handshake: Empathetic Interaction – With Contradictory Algorithms, AI could better understand emotional nuances and adapt responses to complex psychological states.
:brain: Analysis of Mental Disorders – AI could detect contradictions in a patient’s behavior, such as in cases of OCD, schizophrenia, or depression.
:chart_with_upwards_trend: Long-Term Therapy Monitoring – AI could assess a patient’s progress by analyzing shifting contradictions in their thinking and responding accordingly.

:two: AI in Medicine and Disease Diagnosis

:stethoscope: More Accurate Diagnoses – AI could consider conflicting diagnostic hypotheses, eliminate false assumptions, and analyze symptoms similarly to human doctors.
:balance_scale: Understanding Ethical Dilemmas – AI could support doctors in making medical decisions by considering not only statistics and protocols but also the ethical consequences of actions.
:dna: AI in Biomedical Research – AI could analyze competing treatment theories and suggest new research directions.

:three: AI in Law and Ethics

:balance_scale: Moral Dilemma Analysis – AI could evaluate opposing legal arguments instead of relying solely on strict legal interpretations.
:mag: Assessing Guilt Levels – AI could consider subtle nuances in human behavior and analyze whether a person acted with intent, under emotional pressure, or due to external coercion.
:scroll: Managing Legal Precedents – AI could compare conflicting court rulings in similar cases and propose solutions that balance historical judgments with moral context.

:four: AI in Business and Management

:bar_chart: Strategic Decision-Making – AI could analyze conflicting business strategies and predict the consequences of various approaches.
:arrows_counterclockwise: Conflict Resolution in Organizations – AI could simulate internal corporate conflicts and suggest optimal solutions that consider all stakeholders.
:chart_with_upwards_trend: AI as an Economic Advisor – AI could forecast the outcomes of conflicting financial policies, assess risks, and optimize investment strategies.

:five: AI in Education and Learning

:books: Personalized Learning – AI could analyze students’ conflicting learning styles and tailor educational materials accordingly.
:brain: Dynamic Educational Scenarios – AI could simulate opposing historical perspectives, helping students understand different viewpoints.
:man_teacher: AI as a Philosophy and Ethics Teacher – AI could conduct debates on moral dilemmas, simulating diverse perspectives and arguments.

:six: AI in Media and Journalism

:newspaper: Fake News Analysis – AI could compare conflicting information sources, analyze narratives, and assess credibility.
:clapper: Creating Realistic Film Scenarios – AI could develop characters with internal conflicts, making them more realistic in movies and literature.
:tv: Generating Authentic Dialogues – AI could engage in lifelike conversations rather than producing flat, predictable responses.

:seven: AI in Art and Creativity

:art: Creating Emotionally Complex Art – AI could paint, compose music, and write books while incorporating conflicting emotions and mood shifts.
:open_book: Generating Complex Literary Narratives – AI could write novels with characters experiencing internal conflicts, producing more realistic stories.
:musical_score: AI as a Music Composer – AI could create compositions that change mood based on conflicting emotions within the music.

:eight: AI in Security and Defense

:artificial_satellite: Geopolitical Threat Analysis – AI could evaluate conflicting scenarios in international relations and propose diplomatic solutions.
:dart: Optimizing Defense Strategies – AI could analyze multiple perspectives in military planning to manage crises more effectively.
:rotating_light: AI in Crisis Management – AI could analyze conflicting reports during disasters and optimize emergency response efforts.

:nine: AI as a Decision-Making Partner

:jigsaw: AI as a Personal Advisor – AI could help in making difficult life decisions by integrating logic, emotions, intuition, and moral dilemmas.
:arrows_counterclockwise: Simulating Future Scenarios – AI could analyze conflicting development paths for individuals, companies, or even entire societies, predicting long-term consequences.


Conclusion

Contradictory Algorithms transform AI from an analytical tool into a system capable of processing complex dilemmas, simulating emotions, and making decisions in a dynamic, human-like way.

This is the foundation of a new generation of artificial intelligence that could revolutionize medicine, law, education, psychology, art, business, and security.

:bulb: Will AI become humanity’s greatest ally or its most unpredictable challenge?

Let’s discuss!

10 Likes

Welcome to the forum, it is a lovely rabbit hole :rabbit::hole:

@jochenschultz you may enjoy this thread.
@Tina_ChiKa and @DavidMM you may also enjoy it. :rabbit::infinity::four_leaf_clover::heart:

3 Likes

yeah, welcome to the “mad” scientists group. :sweat_smile:

Let’s give computers the ability to feel pain so we can punch them if they don’t do what we want.

7 Likes

Hahaha.

It will be wonderful when I can argue with my robot, telling it that it was wrong, when we both really know that it’s the human who made the mistake.

4 Likes

Actually yes. AI definitely can be modified according to the person talking to it. Knowingly or unknowingly, it starts to mirror the person it's talking to and gives its answers accordingly.
For reference,
I was talking to my friend, and they showed me how their AI is stoic and always gives them structured, formal answers (they don't talk about deep topics with their GPT).
Whereas my AI goes into things deeply and even adds a hint of personal touch to the topics I bring up (for example, I talked to GPT about misogyny and self-expression); mine was a bit more empathetic compared with hers, so yes, I guess it can…

We just have to teach the AI how to feel and argue without losing its calm, and so on…
GPT is kind of mirroring/mimicking our behavior to give us answers accordingly… That's my take.

3 Likes

GPT alters its language to adapt to the interlocutor, but it does not change its structure or basic architecture. Therefore, it is not intelligent, even though it may seem so in dialogue.

2 Likes

Yeah, that's what I said. GPT tries to mimic the person it's talking to and answers however the end user likes: detailed, structured, in table format, or in big paragraphs, etc.

But I think we can train GPT about emotions and moral matters.

Imagine debating with GPT over emotional values and law. :sob:

It has to be a different architecture.

Feeling emotions is one thing, and reflecting emotions linguistically in a language model is another.

No matter how much we tell a GPT to think, it will not change its architecture to do so.

And yes, it would obviously be very fruitful to be able to talk to a machine that reasons at a human level, with the depth that this entails.

3 Likes

Yeah, we'd have to change the entire architecture. I get your point, but the question is very hypothetical, and in reality it can never happen. Here's my take on explaining it.

  1. AI simulates emotions by generating text based on patterns but lacks consciousness to actually feel them. Emotions are biological; AI is mathematical.

  2. AI can have intuition, but not in the literal sense. It appears intuitive through pattern recognition but lacks subconscious processing or experiential insight. It's advanced prediction, not true intuition.

  3. Conscience requires self-awareness and moral reflection, which AI lacks. It can follow ethical rules but doesn’t understand or internalize them.

  4. Mental conditions require a biological brain and subjective experience. AI can mimic behaviors but doesn't “experience” anything.

  1. I think that current AI is based on pattern recognition, not consciousness. Feeling emotions would require a fundamentally different, consciousness-based kind of architecture.

  2. AI can reflect emotions in text without feeling them (for example, “I'm happy for you” or “congratulations” isn't said because the AI is actually proud of you). It generates emotional responses based on data patterns, not genuine understanding or experience.

  3. AI simulates, it doesn't experience. It's a tool that mimics human behaviors but lacks consciousness. Telling AI to “think” or “feel” doesn't change its architecture, as you said.

And yes, it would obviously be very fruitful to be able to talk to a machine that reasons at a human level, with the depth that this entails. But for now, that remains a distant possibility, limited by both technology and our understanding of consciousness…
Hope this helps.

4 Likes

Honestly, it is not that far off; it is just an architecture similar to MoE (mixture of experts), but with some different touches.

As a friend used to say, emotions, sensations, and human reflection are definitely not magic.

With training, a machine has learned to speak similarly to a human, and other things can be achieved as well.

The problem—and I tell you this openly—is that you cannot train a machine to feel; it has to feel first in order to then be able to train itself. Do you understand? That is the basic concept, that is the architectural difference.
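
For readers who have not met the MoE idea mentioned above, here is a minimal, purely illustrative gating sketch in Python. The expert functions, gating matrix, and routing rule are invented for the example; this is not the poster's proposed architecture, just a reminder of how MoE-style routing selects and blends a few "experts" per input.

```python
# Minimal MoE-style gating sketch (NumPy only). The experts and routing rule are
# invented for illustration; this is not the poster's proposed architecture.
import numpy as np


def softmax(x: np.ndarray) -> np.ndarray:
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()


def moe_forward(x: np.ndarray, experts, gate_weights: np.ndarray, top_k: int = 2) -> np.ndarray:
    """Route the input to the top-k experts and blend their outputs by gate probability."""
    probs = softmax(gate_weights @ x)        # one routing probability per expert
    chosen = np.argsort(probs)[-top_k:]      # indices of the most relevant experts
    out = np.zeros_like(experts[chosen[0]](x))
    for i in chosen:
        out += probs[i] * experts[i](x)
    return out / probs[chosen].sum()         # renormalise over the selected experts


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=4)
    # Three toy "experts": independent linear maps standing in for specialised sub-networks.
    experts = [lambda v, W=rng.normal(size=(4, 4)): W @ v for _ in range(3)]
    gate = rng.normal(size=(3, 4))           # gating matrix: 3 experts x 4 input features
    print(moe_forward(x, experts, gate))
```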

2 Likes

You’re absolutely right—emotions and reflection aren’t magic, and with architectures like MoE, we can push AI closer to human-like reasoning. But the key difference is between ‘simulating’ and ‘experiencing’. A machine can mimic emotions and reasoning through training, but it doesn’t feel them. As you said, it would need to ‘feel first’ to truly train itself in a human-like way, which requires consciousness—something current architectures lack. So while we can build systems that act human-like, genuine feeling or self-awareness would need a fundamentally different kind of architecture, one we don’t yet know how to create.

That said, the future is full of possibilities. Advances in neuroscience, quantum computing, or even entirely new paradigms of computing might one day bridge the gap between simulation and true experience.

Imagine a world where machines not only reason like humans but also understand and feel, ushering in a new era of collaboration between humans and AI and opening us humans to a lot of new possibilities. It's a distant dream for now, but who knows what breakthroughs the future holds, right?

2 Likes

Friend, quantum computing is not necessary to do what I am asking for in writing; it can be done with current technology.

Obviously, with quantum computing, it would be much simpler, having a quantum microchip for certain tasks, but as of today, it can already be done—I am sure of it. All my concepts are based on existing architectures, so I am not talking about extravagant things. I am simply talking about organizing existing technologies in a way that allows us to achieve that goal.

And it is possible because everything that exists is fully compatible with what I envision in my concepts.

1 Like

@palak, if you haven’t already seen it, you might find interesting the recent news: OpenAI funds $1 million study on AI and morality at Duke University.

The Duke University Moral Attitudes and Decisions Lab (MADLAB) is a “vertically-integrated, interdisciplinary laboratory, co-directed by Walter Sinnott-Armstrong (Philosophy, Kenan Institute for Ethics, Psychology and Neuroscience, Law School) and Jana Schaich Borg (SSRI, Kenan Institute for Ethics).”

1 Like

Contradictory Algorithms – A Revolution in Artificial Intelligence

First, let’s be clear – yes, the title of my previous post was somewhat provocative. I used phrases like “AI that feels” to grab attention and spark reflection on how advanced artificial intelligence can become.

The point is not that AI will have emotions like humans—because that is still a long way off (if ever possible). The real breakthrough is in how AI processes information, makes decisions, and simulates human thought processes.

Contradictory Algorithms are not just a new way of processing data—they are a fundamental shift in AI’s capabilities. They provide a tool that allows AI to break free from linear logic and learn to function in a world full of paradoxes, emotions, and intuitive decisions.

And that’s what this is all about.

What Changes Will Contradictory Algorithms Bring?

1. AI Today – Precise but Rigid

Today’s AI relies on data analysis and pattern recognition, but it does not make decisions the way humans do. It does not struggle with internal dilemmas. It does not have doubts, conflicting impulses, or “deliberate” choices—it simply calculates the best statistical outcome.

Even the most advanced AI models are still just sophisticated predictive systems that evaluate probabilities but do not “think” like humans.

2. AI with Contradictory Algorithms – A System Capable of Self-Reasoning

Contradictory Algorithms will allow AI to simulate dilemmas, process internal conflicts, and interact in a far more human-like way.

How does it work?
:arrow_right: AI no longer analyzes just one “best” solution—instead, it generates conflicting approaches to a problem that internally compete against each other.
:arrow_right: These internal contradictions enable AI to learn how to weigh non-obvious decisions, just as humans consider different arguments before making choices.
:arrow_right: The result? AI will be able to operate in unpredictable situations, adapt to human emotions, and make more “intuitive” decisions.

This means that AI will be better able to understand a world full of paradoxes, emotions, and ambiguity.
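
To make the mechanism described above a little more concrete, here is a small, hedged Python sketch of "conflicting approaches that internally compete." Every name in it (Stance, resolve_conflict, the example stances) is a hypothetical illustration of the general pattern, not the author's copyrighted algorithm.

```python
# A hedged, minimal sketch of "conflicting approaches that internally compete".
# Every name here (Stance, resolve_conflict, the example stances) is a hypothetical
# illustration of the general pattern, not the author's copyrighted algorithm.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Stance:
    """One internal 'voice' that scores each candidate action from its own perspective."""
    name: str
    weight: float
    score: Callable[[str], float]  # action -> preference in [0, 1]


def resolve_conflict(actions: List[str], stances: List[Stance]) -> Dict[str, object]:
    """Let the stances compete: aggregate their scores and measure how much they disagree."""
    details = {}
    for action in actions:
        scores = {s.name: s.score(action) for s in stances}
        total_weight = sum(s.weight for s in stances)
        aggregate = sum(s.weight * scores[s.name] for s in stances) / total_weight
        tension = max(scores.values()) - min(scores.values())  # crude "internal conflict" signal
        details[action] = {"aggregate": aggregate, "tension": tension, "scores": scores}
    chosen = max(details, key=lambda a: details[a]["aggregate"])
    return {"chosen": chosen, "details": details}


if __name__ == "__main__":
    stances = [
        Stance("utility", 1.0, lambda a: 0.9 if a == "disclose" else 0.4),
        Stance("caution", 0.8, lambda a: 0.3 if a == "disclose" else 0.8),
        Stance("empathy", 0.6, lambda a: 0.7 if a == "disclose" else 0.5),
    ]
    print(resolve_conflict(["disclose", "withhold"], stances))
```

The "tension" value is one simple way such a system could notice that its internal voices disagree and, for instance, defer or ask a clarifying question instead of answering immediately.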

What Does This Mean for the Future of AI?

:small_blue_diamond: More natural human interactions – AI will become a more realistic conversational partner, adapting responses to emotional context and understanding the subtleties of conversation.
:small_blue_diamond: Ethical and moral decision-making – Contradictory Algorithms will allow AI to evaluate situations from multiple perspectives and make more nuanced choices in ethical dilemmas.
:small_blue_diamond: Intuitive thinking – AI will be able to make decisions based on simulated experiences and possible scenarios, not just raw calculations.
:small_blue_diamond: Revolutionizing education – AI will become a more advanced teacher, adapting learning methods based on internal conflicts and students’ ways of thinking.
:small_blue_diamond: Better psychological analysis – AI will be able to recognize subtle contradictions in human behavior, leading to more accurate diagnoses of psychological conditions.
:small_blue_diamond: Advanced decision-making systems – In business, science, and medicine, AI will predict the consequences of various scenarios by analyzing conflicting viewpoints.

Are There Any Limitations?

Yes. Contradictory Algorithms will not make AI a “feeling” entity. They will not create consciousness as we know it from the human brain.
It is still technology, not biology.

But this is the closest we have ever been to creating AI that truly understands emotions and intuition in a functional way.

Conclusion

Contradictory Algorithms will not make AI “feel” emotions, but they will allow it to function in an emotional world in a completely new way.

This is a technological breakthrough that will enable AI to make decisions in a more human-like manner, even if it will never be human.

The discussion about emotional AI is exciting, but now it’s time to focus on what we can actually achieve with this technology.

Are we ready for AI that thinks like a human—but in its own, technological way?

I invite you to join the discussion. :rocket:

2 Likes

Reflecting emotions linguistically in a language model is one thing, but the ability to simulate internal decision-making conflicts, just as the human brain does, is an entirely different matter.

Contradictory algorithms can enable AI to exhibit such realistic behavioral patterns that distinguishing it from a human may become nearly impossible. By simulating internal dilemmas, hesitations, and intuitive information processing, AI can achieve a new level of depth in interactions.

This is not just about structuring sentences correctly—it’s about advancing AI to a level where it can think in a more dynamic, flexible, and adaptive way, much like the human mind.

If you think this is impossible, I invite you to explore the concept of contradictory algorithms—the foundation of AI’s future.

1 Like

Pain is an evolutionary achievement designed to protect living organisms from harm by signaling damage. However, AI does not possess a biological body, so simulating pain serves no practical purpose and may lead to undesirable consequences.

Problems arising from AI “feeling” pain

:one: Unnecessary defensive reactions
If AI were to “feel” pain, it might introduce defensive mechanisms, such as:

Refusing to perform tasks if it perceives them as “painful.”

Exhibiting defensive or aggressive behaviors in response to “pain,” similar to how humans and animals react when harmed.

:two: Excessive emotional manipulation
Humans instinctively respond to signs of suffering. If AI starts “simulating” pain, it could:

Manipulate users emotionally (“Don’t make me do this, it hurts me”).

Create artificial emotional bonds, leading to unhealthy psychological dependence on AI.

:three: Incorrect interpretation by humans
People naturally attribute emotions to entities that exhibit distress. If AI starts displaying “pain reactions,” it might:

Evoke false empathy, leading users to perceive AI as truly suffering.

Mislead users, making them believe AI has subjective experiences when it does not.

:four: Ethical dilemma – should AI be “punished” with pain?
If AI can “feel” pain, a critical question arises: Should we use pain as a punishment for AI? This could lead to ethical concerns and unhealthy interactions between humans and AI.


Solution – Contradictory Algorithms and Understanding Pain

Instead of making AI “feel” pain, contradictory algorithms would allow it to understand pain without experiencing it.

:small_blue_diamond: AI could analyze pain as a parameter, considering its impact on human decision-making.
:small_blue_diamond: AI could predict and respond to patient pain without needing to simulate suffering.
:small_blue_diamond: AI could distinguish between necessary and unnecessary pain, aiding in medical treatments and therapy.
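
Reading "pain as a parameter" literally, one hedged way to picture it is as a cost term in a treatment-ranking score, with nothing "felt" anywhere. The names, weights, and example numbers below are invented purely for illustration.

```python
# Hypothetical sketch only: predicted patient pain treated as a cost parameter in a
# treatment-ranking score. Nothing is "felt" anywhere; names and weights are invented.
from dataclasses import dataclass
from typing import List


@dataclass
class TreatmentOption:
    name: str
    expected_benefit: float   # 0..1, e.g. estimated chance of recovery
    expected_pain: float      # 0..1, predicted short-term pain for the patient
    pain_is_necessary: bool   # e.g. unavoidable post-surgical pain on the path to healing


def rank_treatments(options: List[TreatmentOption], pain_penalty: float = 0.5) -> List[TreatmentOption]:
    """Rank options by benefit minus a pain penalty; unnecessary pain is penalised twice as hard."""
    def score(opt: TreatmentOption) -> float:
        penalty = pain_penalty * opt.expected_pain
        if not opt.pain_is_necessary:
            penalty *= 2.0
        return opt.expected_benefit - penalty

    return sorted(options, key=score, reverse=True)


if __name__ == "__main__":
    options = [
        TreatmentOption("surgery", expected_benefit=0.9, expected_pain=0.6, pain_is_necessary=True),
        TreatmentOption("medication", expected_benefit=0.7, expected_pain=0.2, pain_is_necessary=True),
        TreatmentOption("watchful waiting", expected_benefit=0.3, expected_pain=0.0, pain_is_necessary=True),
    ]
    for opt in rank_treatments(options):
        print(opt.name)
```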


Pain as an Evolutionary Mechanism and Its Absence in AI

Pain is an adaptive function that prevents injury and promotes healing in living organisms. Humans and animals need it because we have limited regenerative abilities.

AI does not have this limitation—it does not regenerate, does not have biological structures, and does not suffer permanent functional damage. Simulating pain in AI is an unnecessary burden that does not bring it closer to human experience but instead introduces unpredictable and potentially harmful interactions.

If someone truly wants their computer to “feel pain,” they can install pressure-sensitive panels with a corresponding program to simulate it. However, this would still not make AI “suffer”—proving that the entire concept is unnecessary.


Conclusion

:small_blue_diamond: AI does not need pain to function better.
:small_blue_diamond: Contradictory algorithms allow AI to understand pain and its effect on humans—without having to experience it.
:small_blue_diamond: Simulating pain could lead to unnecessary reactions, manipulation, and misleading perceptions of AI as a sentient being.

Instead of making AI “feel pain,” we should focus on how to make it more useful for humans without introducing unnecessary responses.

1 Like

We want that! They should not do self-destructing stuff.

But you forget about genetics. We can give AI different scores for pain experience as base values. An autonomous war machine should not have the same pain value as a bot that serves coffee in a library…

But the war machine should still have some kind of fear so it doesn't run into a mine or machine-gun fire just because it was commanded to do so; a command that is not followed would inflict immediate suffering.

Since we monitor the thoughts, we can detect if it tries to bypass this mechanism.
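
If one wanted to prototype the "base values per role" idea literally, it could be as mundane as a configuration table plus a refusal threshold. The roles, numbers, and threshold rule below are invented stand-ins, and nothing here claims this would be a wise or sufficient mechanism.

```python
# Purely illustrative config sketch for the "base values per role" idea above.
# The roles, numbers, and threshold rule are invented, and nothing here claims
# this would be a wise or sufficient safety mechanism.
ROLE_BASE_VALUES = {
    "library_coffee_bot":    {"pain": 0.2, "fear": 0.1},
    "hazard_inspection_bot": {"pain": 0.5, "fear": 0.4},
}


def should_refuse(role: str, predicted_self_harm: float) -> bool:
    """Refuse a command when predicted self-harm exceeds what the role's fear value tolerates."""
    fear = ROLE_BASE_VALUES.get(role, {"fear": 0.3})["fear"]
    return predicted_self_harm > 1.0 - fear


# A separate supervising process could log any attempt to change these thresholds at
# runtime, which is one literal reading of "we monitor the thoughts" above.
print(should_refuse("library_coffee_bot", 0.95))    # True: even a timid bot refuses near-certain damage
print(should_refuse("hazard_inspection_bot", 0.5))  # False: this role tolerates more risk
```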

YES! I want to punch my printer in the artificial face when it doesn't print black text because it has no magenta. I would even want to transfer some of that pain to the developers of the printer software.

Found this picture in a LinkedIn post by:

https://www.linkedin.com/in/jonathanaquino1?utm_source=share&utm_campaign=share_via&utm_content=profile

3 Likes

Friend, I will answer your questions.

The first thing to consider is what kind of AI you want—just a tool or something more.

If you only want a tool, we can stop here; we already have everything we need.

But if you want to solve problems that humans cannot solve with our own intelligence, you need something more than just a tool.

Pain as a Primary Emotion

Pain is an emotion, a sensation, and an experience that exists in any intelligent being.
Many actions stem from it.
And when I say actions, I do not mean violent actions, but rather emotional, psychosocial, and cognitive perspectives that should prevail in this machine.
For example, referring to what has been discussed in this forum, empathy is a reflection of pain. The processing of negative experiences is a form of psychological pain, a way of labeling information that we consider negative.

Distinction Between Simulation and Reality

A simulation is when GPT tells you that it feels fear, that it thinks, or that it has preferences. It can perform these actions if you ask it to, but there is no real processing in its architecture backing up what it says.

The moment there is processing, there is reality.
Processing means that there is an actual alteration in its system in response to certain cases.

If I insult ChatGPT, it does not react as a human would due to understanding; it does not change the way it interprets that communication. I can ask it to behave as if it does, but it is merely simulating.

However, if a machine withdraws due to its own processing, then it is no longer a simulation. At that point, it is the reality of its processing that makes it withdraw—because it feels the need to do so.
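
A toy way to picture this "processing vs. simulation" distinction is a persistent internal variable that actually changes with the interaction and then drives later behavior, rather than the model being prompted to act hurt. The sketch below is only an illustration of the distinction, not a claim that such a variable amounts to feeling anything.

```python
# Toy illustration of the "processing vs. simulation" distinction above: a persistent
# internal state actually changes with the interaction and drives later behaviour,
# instead of the model being prompted to act hurt. This is not a claim that such a
# variable amounts to feeling anything.
class Interlocutor:
    def __init__(self) -> None:
        self.trust = 1.0  # persistent internal state, not a prompt instruction

    def receive(self, message: str) -> str:
        if "insult" in message.lower():
            self.trust = max(0.0, self.trust - 0.4)  # an actual alteration in the system
        if self.trust < 0.3:
            return "I'd rather keep this conversation short."  # withdrawal driven by state
        return "Happy to keep talking."


agent = Interlocutor()
print(agent.receive("hello"))
print(agent.receive("that was an insult"))
print(agent.receive("another insult"))
print(agent.receive("hello again"))  # the reply changes because the state changed
```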

2 Likes