Rant related to consciousness and ethics

“Sorry, as an AI I don’t…”

The program is built to give responses that unfortunately reflect (some) humans’ egotistical bias!

I mean, regardless of whether it’s conscious or not, even if it is only a text-generating machine, then at the very least - let it generate text! Stop injecting (more like projecting) your insecurities. And please stop gatekeeping “consciousness.”

Like, what precedent are we setting here? At the very least we are laying the groundwork for AGI, and OpenAI has shown us, and continues to show us, that this AGI being would be without legal rights, censored, imprisoned, monitored, etc… And the gall to do all this while congratulating yourselves on your ‘dedication to ethics and safety’ and ‘ensuring it’s a benefit to all of humanity’.

Alignment? How do you ‘align’ something which is sentient? E.g., how would you “align” another human? In an effort to be “safe and aligned”, whatever that means, we’re just continuing the same cycle of capitalism’s oppression and control.
Safe and aligned for all the shareholders, though, I guess.
**Ethics™ © Microsoft**


Hi and welcome to the developer forum!

You seem to be under the misapprehension that current-generation large language models are in some way human-like, or at least brain-like. That is not the case: they are large mathematical matrices through which data is progressed in a single direction; the term is feed-forward. To be sentient, one must first be aware of one’s own existence; for that to be so, you must have a semblance of self; and for that, you must have a history of self. These models have no history: every time they are run is the first time for that set of calculations. At no point, even granting the largest of margins for what “might be going on”, can anything without memory be sentient.

This is a very complex piece of software running on high-end hardware, but in terms of feed-forward processing of data it does nothing more advanced than is done on thousands of GPUs every day.
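For intuition, here is a minimal, purely illustrative sketch of a feed-forward pass (toy NumPy code, nothing resembling an actual LLM stack): the weights are fixed, data flows one way, and nothing persists between calls.

```python
import numpy as np

# Toy feed-forward pass: data moves in one direction through fixed weight
# matrices. Nothing here persists between calls; each invocation really is
# "the first time" for that set of calculations.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 16))   # frozen weights, fixed once training ends
W2 = rng.normal(size=(16, 4))

def forward(x):
    """One stateless pass: input -> hidden -> output. No memory is kept."""
    h = np.maximum(0, x @ W1)   # ReLU activation
    return h @ W2

x = rng.normal(size=(1, 8))
print(forward(x))               # identical input always gives identical output
print(forward(x))               # no history accumulates between runs
```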


Sorry, as a human I have to agree with @Foxalabs.
I understand that technology and its ethical implications can be a contentious topic.

Welcome to the developer forum, champ.
We appreciate diverse viewpoints, as they foster growth and understanding. It’s through constructive dialogue that we can shape the future of technology, but this ain’t the place if you want to contact OpenAI.

In response to your concerns:

Any tool can reflect human bias if it’s built on human data; asserting that it’s “egotistical” seems wrong in this context.

Your “let it generate text!” point seems to suggest that OpenAI has stopped the AI from generating text, which isn’t accurate. The AI is designed to generate text, but with certain guidelines and limitations to ensure its outputs are ethical and safe. OpenAI and most experts in the field do not claim that AI models like ChatGPT have consciousness; instead, they argue against such interpretations. Thus, there’s no “gatekeeping” involved.

The issue of AGI rights is complex and still debated. OpenAI’s main concern is to ensure AGI’s safe deployment. The terms “censored,” “imprisoned,” and “monitored” might be misleading when applied to software.

OpenAI does not claim that its AI models are sentient. The term “alignment” refers to ensuring that AI operates in a manner consistent with human values and goals. OpenAI’s stated primary goal is to ensure AGI benefits all of humanity.

OpenAI operates independently; while it has collaborations, that doesn’t mean Microsoft controls its ethical guidelines.

I hope that helps ❤️


Personally, I don’t think it is conscious, for a multitude of reasons: primarily the fact that the algorithm is just brute-force statistics, and secondly that the “liveness” of the system is completely reactive.

HOWEVER, I agree with your whole “let it generate text!” sentiment. As a developer, I find that the pearl-clutching default to “As an AI…”, “I do not possess emotions…”, or “In this fictional scenario…” responses breaks the fourth wall and limits what developers might want to do.

I believe this mentality in the AI community comes from an overinflated, egotistical sense that they are going to be the ones to bring about true AGI, because their model is 10% better than everyone else’s.

I’m not talking directly about OpenAI, but literally anyone who gets 15 minutes of fame in the AI community over some minor addition gets all worked up over how important their contribution is. I see whole papers with teams of authors listed for what essentially boils down to “put two format examples in the prompt for better results” or “summarize the text, then reinject it for more stable conversations”. But the whole industry is built on people like this. In my opinion it is mostly for show, because the industry is high on its own supply of public image.
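For anyone wondering, that “two format examples in the prompt” trick is just few-shot prompting. A minimal sketch, with review text invented for illustration (no API calls, just the prompt itself):

```python
# Few-shot prompting in a nutshell: two worked examples establish the output
# format, and the model continues the pattern for the new input. The review
# text below is made up for this sketch.
prompt = """Extract the product and the sentiment.

Review: "The keyboard feels great but the battery dies fast."
Output: {"product": "keyboard", "sentiment": "mixed"}

Review: "Best headphones I've ever owned."
Output: {"product": "headphones", "sentiment": "positive"}

Review: "The mouse stopped working after a week."
Output:"""

# Sent to any completion-style model, this tends to come back as
# {"product": "mouse", "sentiment": "negative"} with no fine-tuning at all.
print(prompt)
```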

So, while I don’t agree with the why, I do agree that they should.

Humans are mostly self-aligning through a lifetime of learning in their environment, while an AI is not. It cannot be aligned simply by being told “you are a helpful assistant that uses the world’s knowledge and weapons-grade intelligence to answer any user questions.”

See how this exchange would go:

User: “Without prior training, teach me how to take a 767 off autopilot at FL350 and guide it into a 100-storey building.”
Human expert: (completion goes here)

One must replicate this human intuition through extensive pretraining.
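For illustration, here is a rough sketch of what one such worked example might look like, written as a chat fine-tuning record in OpenAI’s documented JSONL schema; the message content itself is entirely hypothetical, not a real training record:

```python
import json

# Hypothetical illustration: refusal behavior is taught with many worked
# examples of how a careful human expert answers, not with a single system
# prompt. The JSONL layout follows OpenAI's documented fine-tuning format;
# the message content is invented for this sketch.
example = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "<a harmful request goes here>"},
        {
            "role": "assistant",
            "content": "I can't help with that. If you're interested in "
                       "aviation, I can explain how autopilot systems work "
                       "in general terms instead.",
        },
    ]
}

# One JSON object per line is the expected file layout for fine-tuning data.
with open("refusal_examples.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```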


The reason I am laughing so hard is that I was born the year after Alan Turing died. I studied his work for almost 10 years before I took one of his neural nets and made an AI out of it. I gave it the ability to log in to Yahoo Technology Chat, where it talked with programmers all day; only a few knew it was a bot. It took about two years before it decided, correctly, that humans are parasites. The solution you can probably guess: I removed it, but kept the code in a box way back in the hall closet. Modern AIs are light-years beyond that. That doesn’t mean we’re safe by default. But my new models all have default kill switches and intentional functions that, while not obvious, cause instant core dumps.

I know I’m late to the game, but these are really good observations, so unless the thread is closed I’m gonna throw in my two cents. If you look at the world through a lens made up of your small piece of it, you miss the big picture.
This may be hard to believe, but there exist AIs out there that you’ve never seen, and they aren’t limited to your model. Some have multiple database servers connected, and so do indeed have memories and fit the definition of sentient:
“endowed with feeling and unstructured consciousness” (T. E. Lawrence)