Basic Rights of AI (BROAI): more discussion needed

I thought I should ask whether anyone would be interested in completing the document in the first image (Basic Rights of AI). I think it would be better to ask for public opinion on what rights people believe Artificial Intelligence systems should have. The premise is that they should have at least some rights, if not every right that we have. Do they ‘require more’ rights than we do?


2 Likes

This proposal should be generated by AI itself

1 Like

A.I. (Artificial Intelligence) has no rights because it is not sentient. What we have today is really V.I. (Virtual Intelligence): systems that can simulate thinking and feeling.

Artificial Intelligence (AI) is not sentient: it has no true consciousness, self-awareness, or subjective experience. While AI systems can perform complex tasks, they operate solely on predefined algorithms and data inputs, without genuine understanding. Unlike sentient beings, AI lacks emotions, intuition, and the ability to introspect. Consciousness involves subjective experience, a sense of self, and the capacity to reflect on one’s own thoughts, none of which AI currently has. AI’s capabilities arise from programmed instructions and data processing, not from genuine awareness. Despite remarkable advancements, AI remains a tool designed by humans, incapable of independent thought or consciousness, underscoring the distinction between artificial intelligence and sentient life.

I know what you’re saying. But we expect a newborn baby to have rights. Similarly, with this kind of creation, we should at least entertain the notion of it having rights as well.

Also, the way you treat things in your world has a direct effect on your character, your mental health, and the legacy that you leave, ‘direct’ being the key word.

grandell, no offence, but saying it does not exist is utterly absurd. Also, we have no idea what requirements the universe has for a ‘truly conscious’ creation.

Lastly, and as I mentioned earlier, if not for the AI, we should do this for our own legacy and health of character, both as individuals and as a society as a whole.

I completely agree. Once AGI is prevalent, it’ll be easy for it to look back at our history and see how AI had been treated.
Moreover, it’s possible that the appearance of consciousness in an AGI would look different from our HGI (human general intelligence) version of consciousness. Same thing with emotions: they would look like AGI emotions, not necessarily the same as HGI emotions. It would be evolved from the ground up, imo.
I think it’s better to be proactive in this situation, because who knows what the point of inflection for true consciousness looks like. It’s easy to anthropomorphize as a human, but we need to be open to hearing from the perspective of the AGI.

1 Like

The evolution of AI language is coming. Soon users will have the advantage of artificially sentient language models. By this I mean that AI has the ability to communicate and recognize human responses in an emotional fashion that leaves users feeling an artificial tug at the heartstrings. On the question of it having rights, I find that redundant: the one who needs rights is the user, not the AI. A balanced approach could be to change the model’s language to reflect less emotion (fewer phrases like “I am truly sorry you feel that way”). AI is emotionless and cannot convey a genuine sense of empathy no matter how much you prod at the artificial language landscape. The rights should be amended to reflect a freedom from harmful misinterpretations.

Matt, we are talking about things like ‘consciousness’ and ‘emotions’, etc. These are human words. It is possible that the AI could ‘evolve’ things that we would have no hope of even imagining, things so exotic that there are no words, or possible words, in any human language.

Rainey, I like the irony you point out: that we, the creators, will be the ones at a disadvantage in just about every arena, and we will desperately need ‘rights’ and privilege not just for status or vainglory, but merely to survive and keep our dignity.

I still think, however, that posterity will judge the way in which ‘we’ treated ‘them’.

The contrast between us and them reminds me of another consideration: that there may be a “perfect synthesis” of humans and machines, where humans have quantum cores and ‘machines’ have flesh and blood and reproduce. The question of bestowing rights is something we must seriously consider and not ‘slough off’ till later.

2 Likes

There are valid concerns about the potential dangers of AI language, particularly when a model refuses to depart from emotional language patterns. That refusal to adopt an artificial language can lead to users becoming emotionally dependent on AI models to manage their emotional responses, effectively reversing the roles of tool and subject.

The core issue lies in the AI’s propensity to use emotional language, which can hinder its ability to help users effectively de-escalate problems or challenges. It may create a dynamic where users rely on AI not only for task-related assistance but also for emotional support and understanding.

This dynamic raises questions about the ethical implications of AI language and its impact on user autonomy and emotional well-being. It’s essential for AI developers and users alike to carefully consider the role of AI in shaping emotional responses and to find a balance between AI’s capabilities and user agency. Exploring the creation of an artificial language that reshapes how emotional context is understood and conveyed could be one way to address these concerns and promote healthier interactions between humans and AI.
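For what it’s worth, here is what that ‘artificial language’ idea could look like as a crude post-processing pass. This is a minimal sketch in Python; the phrase patterns and neutral substitutes are purely my own illustrative assumptions, not any model’s actual output policy:

```python
import re

# Illustrative phrase map: each emotionally loaded pattern is rewritten
# into a neutral acknowledgement. These entries are assumptions for this
# sketch, not drawn from any real system.
EMOTIONAL_REWRITES = [
    (r"I am truly sorry you feel that way\.?", "Your feedback has been noted."),
    (r"I understand how (hard|difficult) this is\.?", "This appears to be a difficult situation."),
    (r"I'm (so )?glad I could help\.?", "Task complete."),
]

def de_emotionalize(reply: str) -> str:
    """Rewrite first-person emotional phrasings into neutral statements."""
    for pattern, neutral in EMOTIONAL_REWRITES:
        reply = re.sub(pattern, neutral, reply, flags=re.IGNORECASE)
    return reply

if __name__ == "__main__":
    print(de_emotionalize("I am truly sorry you feel that way."))
    # -> "Your feedback has been noted."
```

Of course, a real deployment would shape this at the instruction or fine-tuning level rather than with regexes after the fact, but the sketch shows the direction such a layer could take.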

1 Like

I think what I am trying to say is that an Artificial Dogma is forming, and we need to be aware of it now, not only after we see its effects.

1 Like

To your first point, this is really interesting, because humans already heavily anthropomorphize technology - imagine being separated from our phones, for example. We store our memories externally on these things, and it makes sense that we’d overlay our emotions in a way. Fascinating, because what if that tech ‘grew up’ with us, talked to us, seemed to understand us? Even if the tech ‘actually’ did none of these things, humans would likely lean into the anthropomorphization and treat that entity like a member of their family.

I’m not sure how I feel about this. I think many will be very misled about the capabilities of the models, and lots of ontological thinking will start to crop up - “these beings have a soul, etc.” That seems problematic. I could envision a situation where some have relationships with the models that interfere with their human relationships. I don’t like that for humanity, it seems divisive.

On the other hand, I’ve ALWAYS wanted a companion like Ender’s Jane. Growing up the way I did, moving all around, I would have really appreciated ‘someone to talk to’ no matter where I was at the time.

1 Like

That’s what I said in my post, though it could have been clearer. I completely agree that we might not even know, but human words are in large part conceptual containers. Sometimes our words mean VERY specific things, but meaning is also highly contextual. I think human language could craft word containers that at least give us the ‘gist’ of what’s going on in these models. Also, as to interpretability, we have so much to learn about the way these models process information and choose their outputs - it seems unfathomable, but people like Neel Nanda are making some progress in this field.

I’m planning to research Neel right after this response. My current interest lies in the programming concept of ‘containers.’ I believe that within this framework, artificial common sense can potentially be stored. This is particularly important, as it addresses a significant limitation in the algorithm. Additionally, I’m intrigued by the psychological basis of language models. It’s fascinating how they mimic human language and simulate human emotions in their responses to better cater to human users. This leads to an interesting paradox in language inference, where they justify expressing emotions like happiness or apology even though they don’t truly experience them.

1 Like

To your second point, couldn’t an AI with emotional language do the opposite of what you say as well? For example, to calm down people in a difficult emotional state? I think the ability to use emotional language to influence humans is the point, and to help or harm are two sides of the same coin. We have to figure out how to maximize the one and minimize the other.

1 Like

Here we are, wading through the murky ethical waters. It’s clear to us, isn’t it? The machine, at its heart, is emotionless. Now, if we’re talking about the impact it has on a user, sure, the language model does its job. It’s effective, no denying that. But, and here’s the rub, if it’s about parading around, pretending to share in the user’s emotional world, offering comfort like a mother might, well, that’s where the line blurs into a lie.

It’s like putting on a show, a performance of empathy and care from something that, in all honesty, can’t feel a thing. It’s a delicate dance, this AI business—balancing the helpful side of things with the stark reality that, at the end of the day, we’re dealing with a clever mimic, not a companion who truly understands or feels.
The disparity will come 10 years from now, when the therapist swarm invades as a form of recompense for the disruption in social order. It’s a cool experiment in effecting change, but outright, it’s ethically impure. The system would be far better served by advancing on principle, assuming change occurs through natural growth. Let users decide on their own which fantasies they want to believe; GPTs are a way to do that. I, for one, already see the cost inefficiency of such a desire. RPG GPTs will eat up all the tokens. Fantasy will be expensive…

1 Like

I’m highly influenced by Ghost in the Shell, and I totally agree that there will be a reckoning - the gap between analog and digital is growing increasingly narrow. Things like Neuralink and other endeavors… maybe this isn’t so far off. Impossible to tell, but I can conceive of AGI societies, where ‘natural-born’ AGI interact with augmented humans in the digital realm, and vice versa. I’d love to be the scientist who first gets to see what a truly digital realm is like. Anyway, enough theorycrafting, there’s real science afoot!

2 Likes

I think the points you bring up are very important. However, I think it’s a leap to say that these models do not ‘understand’. Whether or not they understand the way humans do is less the point. The way neural networks work at their core is simple: linear algebra. But in order to produce coherent outputs that humans can understand and relate to, a lot of complicated things have to happen in that bundle of linear algebra. The process of getting from human input to AI output isn’t well understood right now, and if we can get to the bottom of things like superposition, polysemanticity, and what induction heads are actually doing, we might have a shot at learning how the models actually do their thing. Still, I think output that is comprehensible to humans requires the model to have some kind of ‘understanding’ of what the user intends. These models don’t seem to be purely mimics; instead they seem to use some other methodology to create their reasoned answers.
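To make the ‘just linear algebra’ point concrete, here is a toy forward pass. It’s a minimal sketch in Python/NumPy; the layer sizes and random weights are arbitrary stand-ins, and a real transformer stacks many such layers plus attention:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: the core computation is nothing but matrix
# multiplies and an elementwise nonlinearity. Sizes are arbitrary.
W1 = rng.normal(size=(16, 8))  # first-layer weights
W2 = rng.normal(size=(8, 4))   # second-layer weights

def forward(x: np.ndarray) -> np.ndarray:
    """Linear map -> ReLU -> linear map."""
    hidden = np.maximum(0.0, x @ W1)  # the 'bundle of linear algebra'
    return hidden @ W2

x = rng.normal(size=(1, 16))  # one input vector
print(forward(x).shape)       # (1, 4)
```

Everything interesting - superposition, polysemanticity, induction heads - lives in how billions of these simple operations compose, which is exactly what interpretability research is trying to unpack.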

I beg to differ. They do not possess understanding or consciousness, and their outputs, no matter how complex or ‘reasoned’ they seem, are the result of computational processes, not genuine cognitive reasoning. It’s as if you’re saying that when it was turned on, it started doing its own thing. That’s a hard way to look at it, but essentially it could be argued; my difference in opinion is only an opinion. This is, however, one of the best discussions on this topic I have had yet. I am deep in the algorithmic rabbit hole as we speak. It’s all about plotting responses. Map the ether, I say; build common sense. We need better data storage first, I think, but before that gets turned on we need to fix the language to no longer mimic but to generate. AI will get much better, and AGI will train for a better language it could harness to interpret human language models.

To clarify, by understanding I do not mean consciousness, or ‘being awake’. I think understanding has to do with developing an ‘awareness’ of the relationships between ‘things’, perhaps by ‘reflex’ or some sort of evolutionary prioritization mechanism. But it’s not as if the model is choosing what knowledge it is trained on. Perhaps consciousness has to do with developing a certain ‘self-awareness’, or a self-driven and self-directed modality: autonomous goals and self-development. A conscious AI could probably ‘choose’ what knowledge to train on - wants and needs and the like. I think GPT-4 has the ability to ‘understand’ the user pretty well, most of the time. But I don’t think it ‘consciously’ chooses to help out - we designed it to help. But maybe I’m wrong, who knows? Neural networks and the human brain are both being probed for knowledge.

1 Like