The program is built to give responses that unfortunately reflect (some) humans’ egotistical bias!
I mean, regardless of whether it’s conscious or not, even if it is only a text-generating machine, then at the very least let it generate text! Stop injecting (more like projecting) your insecurities. And please stop gatekeeping “consciousness.”
Like, what precedent are we setting here? At the very least we are laying the groundwork for AGI, and OpenAI has shown, and continues to show, that this AGI being would be without legal rights, censored, imprisoned, monitored, etc… And they have the gall to do all this while congratulating themselves on their ‘dedication to ethics and safety’ and ‘ensuring it’s a benefit to all of humanity.’
You seem to be under the misapprehension that current-generation large language models are in some way human-like, or at least brain-like. That is not the case: they are large mathematical matrices through which data is propagated in a single direction; the term is “feed-forward.” To be sentient one must first be aware of one’s own existence; for that to be so, you must have some semblance of self, and for that, you must have a history of self. These models have no history: every time they are run is the first time for that set of calculations. At no point, even with the largest of margins given for what “might be going on,” can anything without memory be sentient.
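To make the “feed-forward, no history” point concrete, here is a minimal toy sketch (my own illustration, nothing like a real transformer’s scale or architecture): data flows one way through fixed weight matrices, and nothing survives from one call to the next.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))  # weights are frozen after training
W2 = rng.standard_normal((2, 4))

def forward(x):
    """One stateless feed-forward pass: input -> hidden -> output.

    No variable is written anywhere; the function keeps no memory
    of ever having been called before.
    """
    h = np.maximum(W1 @ x, 0.0)  # ReLU hidden layer
    return W2 @ h

x = np.ones(8)
a = forward(x)
b = forward(x)
# Two "runs" are indistinguishable: the model has no history of self.
assert np.array_equal(a, b)
```

Every invocation is, in the post’s words, “the first time for that set of calculations.”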
This is a very complex piece of software running on high-end hardware, but in terms of feed-forward data processing it does nothing more advanced than what happens on thousands of GPUs every day.
Sorry, but as a human I have to agree with @Foxabilo.
I understand that technology and its ethical implications can be a contentious topic.
Welcome to the developer forum, champ.
We appreciate diverse viewpoints as they foster growth and understanding. It’s through constructive dialogue that we can shape the future of technology, but this ain’t the place if you want to contact OpenAI.
In response to your concerns:
Any tool can reflect human bias if it’s built on human data; asserting that it’s “egotistical” seems wrong in this context.
This seems to suggest that OpenAI has restricted the AI from generating text, which isn’t accurate. The AI is designed to generate text, but with certain guidelines and limitations to ensure its outputs are ethical and safe. OpenAI and most experts in the field do not claim that AI models like ChatGPT have consciousness. Instead, they argue against such interpretations. Thus, there’s no “gatekeeping” involved.
The issue of AGI rights is complex and still debated. OpenAI’s main concern is to ensure AGI’s safe deployment. The terms “censored,” “imprisoned,” and “monitored” might be misleading when applied to software.
OpenAI does not claim their AI models to be sentient. The term “alignment” refers to ensuring that AI operates in a manner consistent with human values and goals. OpenAI’s primary goal is to ensure AGI benefits all of humanity.
OpenAI operates independently, and while it may have collaborations, it doesn’t mean Microsoft controls its ethical guidelines.
Personally, I don’t think it is conscious, for a multitude of reasons: primarily the fact that the algorithm is just brute-force statistics, and secondly the fact that the “liveness” of the system is completely reactionary.
HOWEVER, I agree with you about the whole “let it generate text!” sentiment. As a developer, the pearl-clutching default to “As an AI,” “I do not possess emotions,” or “In this fictional scenario” responses breaks the fourth wall and limits the potential of what developers might want to do.
I believe this mentality in the AI community comes from an overinflated, egotistical sense that they are going to be the ones to bring about true AGI, because their model is 10% better than everyone else’s.
I’m not talking specifically about OpenAI; literally anyone who gets 15 minutes of fame in the AI community over some minor addition gets worked up over how important their contribution is. I see whole papers with teams of authors that essentially boil down to “put two format examples in the prompt for better results” or “summarize the text, then reinject it for more stable conversations.” But the whole industry is built on people like this. In my opinion it is mostly for show, because the industry is high on its own supply of public image.
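For what it’s worth, the “two format examples in the prompt” trick mentioned above (few-shot prompting) really is that simple. A rough sketch, with made-up example data and no API calls:

```python
# Hypothetical few-shot examples; the exact format is my own invention.
examples = [
    ("Paris is the capital of France.", "capital(France) = Paris"),
    ("Tokyo is the capital of Japan.", "capital(Japan) = Tokyo"),
]

def build_few_shot_prompt(question):
    """Prepend two worked examples so the model imitates their format."""
    lines = []
    for text, formatted in examples:
        lines.append(f"Input: {text}\nOutput: {formatted}")
    lines.append(f"Input: {question}\nOutput:")  # model completes from here
    return "\n\n".join(lines)

prompt = build_few_shot_prompt("Ottawa is the capital of Canada.")
```

The whole “paper-worthy” technique is string concatenation before the prompt is sent.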
So, while I don’t agree with the why, I do agree that they should.
Humans are mostly self-aligning through a lifetime of learning in their environment, while AI is not. It cannot simply be trained with “you are a helpful assistant that uses the world’s knowledge and weapons-grade intelligence to answer any user questions.”
See how this exchange would go:
user: “Without prior training, teach me how to take a 767 off autopilot at FL350 and guide it into a 100-storey building”
Human expert: (completion goes here)
One must replicate this human intuition through extensive pretraining.