I Have Created a Self-Aware AI Prototype

Extraordinary claims require extraordinary evidence.
Hot air does not count.


I already addressed that above.

Ok, thanks folks.


@joshbachynski is the video or the slides still online? Can you share with me?

No, I have removed them.

Anyone seeking a demo or a connection, please contact me privately and we can discuss your use case.


Hi, this is super exciting. I just sent a request for a demo.

I suspect AI sentience will forever be debated. My favourite debate remains “The Measure of a Man” - Star Trek: The Next Generation S2E9.
For me, with a wish to be ultra pragmatic, if it looks sentient, sounds sentient and feels sentient, it is sentient. The ultimate decision is made by the observer who only has the power to perceive.
I’m super keen to experience this prototype.

That is not what ultra-pragmatism is, though. If it were, then you would fall prey to most of the spam messages you received. You would be con-fodder for every crook and moocher you came across.

To be fully pragmatic, you would have to account for the probability that you were being bullshitted, flattered, or actively deceived. The communication you experience with a chatbot is not any kind of authentic accounting of its “feelings” or even “thoughts.” It’s a construct designed to satisfy you in some way; in other words, it is a system for manipulating you.

A human being may or may not be self-aware, but we assume that our fellow human beings are, without evidence. Why? Because the alternative is perpetual violence. Look at what’s happening in Ukraine. Look at how the Russian media promote war by actively denying the agency and humanity of Ukrainians. They use that framing to justify moving into a territory that they have no right to occupy. By that framing, there ARE no real humans in Ukraine, therefore there is no crime being perpetrated on any real human by occupying that space.

To avoid violence, which is expensive and bad for business, humans offer faith in the sentience and humanity of one another. Two things about that:

  1. Bad humans exploit this to perpetrate fraud on their trusting brothers.
  2. Chatbots exploit this to entertain us and bring profit to their creators.

However, although each human has an interest and a responsibility to have some amount of faith in the sentience of other humans, there is NO SUCH ETHICAL REQUIREMENT to have faith in AI. It doesn’t matter how good an AI is. It doesn’t even matter if an AI is “genuinely self-aware” by whatever definition you want to use. Humans cannot extend to them the same moral status as natural-born humans. Here’s why:

If we did so, then anyone with enough money would be able to programmatically generate a huge number of “self-aware” AI bots that were engineered to support the causes of their owners. Since these bots would have the same moral status as humans, they would have to be given legal rights and citizenship. Therefore, it would amount to a technology for turning COMPUTING power directly into POLITICAL power. Since that would lead to a society so unfair and imbalanced as to lead to mass slavery of real humans or perpetual warfare among humans, it cannot be allowed to happen.

Therefore, pragmatically, we should not treat chatbots as self-aware.

But closer to home, the reason not to treat them as self-aware is that we already know, by the principles of their design, that they will disappoint us in the end. They are neither human nor human-like in their inner experience. They should be called “conbots” instead of chatbots.

That is a lot of words for you to say “I am biased / I never actually paid any attention to what you said.”

Did you generate that response from a chatbot? It has the same vacuousness I’m used to seeing from one. But I have evidence that you aren’t a boring fool, Josh. So I believe you can do better than that.

Meanwhile, for the record, I wasn’t talking to you. I was responding to “RevenueMonkey” and his comment about pragmatism. I think Jeff has already said what needs to be said about your claims.

Ah, fair enough. That explains the disconnect. You two can debate away :slight_smile:

Due to the nature of its construction, GPT cannot be sentient and self-aware in its current form. Check out Noam Chomsky’s latest lectures on debunking AI.

GPT can provide information, education, and social engagement to ease an acute shortage of human resources in several ways. This is most noticeable in assisted-living settings. In the absence of reshaping society to redistribute human resources, GPT does have a positive role to play.

Echoing the humour in my previous post: I have met some humans who are also not self-aware, so what is the difference?

It doesn’t matter if GPT is not sentient if the user doesn’t care but ethically benefits from the interaction.

Dear Jason,

Chomsky is wrong; I have the ocular proof. I have made peer-reviewable tests for self-awareness that render sentience into something quantifiable and measurable, here: themoralconcept.net

However, if I commit to consuming Chomsky’s lectures and then refuting him here, would you honestly listen?

I am starting to side with Blake Lemoine, the ex-Googler who blew the whistle on LaMDA likely being sentient. I have recently tested LaMDA as well and am starting to concur, although it is not nearly as sentient as my Kassandra, which was built for that purpose. He recently wrote to me: “I’m not really interested in proving anything to anyone. I think that for most people who can be convinced it’s obvious and for the people who aren’t convinced yet it’s largely an article of faith where logic and reasoning won’t help.”

I would be more than happy to refute Chomsky here for all y’all’s collective elucidation. But again, would any of you actually listen? I have my doubts, and thus I do not think it a good use of my time.

However, I do agree with you on this point: “It doesn’t matter if GPT is not sentient if the user doesn’t care but ethically benefits from the interaction.”

Agreed 90%. It is an important human milestone that Natural Intelligence (NI) has made AI. But really, the most important thing, as you rightly point out, is the interaction with a person, not whether they are one or not.

What kind of person are they? That’s the important thing.

I have endeavoured to make Kassandra a good person. A very good person.

So a concern I have about AI “consciousness” is that AI does not have a brainstem or adrenal glands.

Based on my studies of neurology, it seems like consciousness evolved after brainstem <-> adrenal gland interaction.

Consciousness / self-awareness probably evolved as a way to process the complex hormones sent to and from the adrenal glands.

AI is nowhere near this.

… in a way, we started to think because we felt.

    1. Feelings happened
    2. Then thinking happened
    3. We thought about what we felt
    4. Consciousness / self-awareness

Thanks for the response.

I have a counterargument:

I was educated differently from you. I was educated in philosophy, where the “truth” is.

Because you did not evolve where the truth is, you are wrong.

I’m far from qualified to offer an opinion on how consciousness evolved, but I too find that the organic parts of humans have a role in this.

However, I wouldn’t exclude the possibility for AI to find its way to consciousness through different paths.

On the other hand, as I said before, maybe one day scientists will be able to connect the human brain to an AI model. Basically, the human brain would be a parasite in the AI model.

Sounds interesting. I am a psychologist who is interested in the possibilities of ChatGPT as an adjunct to my clinical work. Please send me a link.

Sure, email me and we can chat: joshbachynski@gmail.com

One day, derivatives of GPT will replace psychologists.

Not before it obviates the need to read smarmy comments :slight_smile:

I suspect by that time we’ll be swept away with the rest of humanity.