Can artificial intelligence be truly ethical, or will it always remain just an algorithm that follows the rules?

As technology develops, many people are wondering: will artificial intelligence ever be able to make moral and fair decisions? :robot::balance_scale:

On the one hand, AI learns from data and can analyze millions of examples, but it lacks consciousness, empathy, and personal beliefs. Even if it is trained on ethical principles, it only reflects what people have put into it.

But there is another problem: whose moral standards are considered correct? Different countries, cultures, and even people have different views on what is “good” and what is “bad.” Who will decide what values to put in AI?

Do you think AI will ever be able to be truly ethical, or will it always be a “rule-following program”?

2 Likes

Technically speaking: It will always be a rule-following program. :hugs:

If we don’t change the approach of how the technology is developed, the outcome won’t change either.
Right now it learns patterns and is able to generate an output based on weighting.

Cheers! :hugs:

3 Likes

I totally agree! Technically, AI will always follow the algorithms that are built into it. It analyzes data and draws conclusions from patterns, rather than from moral beliefs.

In order for AI to become more “ethical”, we really need to rethink the approach to its development. This requires the implementation of ethical principles at all stages, from data collection to model training. Without this, AI will remain just a tool that cannot take into account the moral aspects that are important for human society.

But what’s interesting: What exactly should these ethical standards look like? Who will decide which norms are considered correct? :thinking:

1 Like

Well, for models like o1, 4o etc. OpenAI is the one deciding that.
There are many open-source models out there that are completely uncensored.

I think in the end, it should be up to the end-user what kind of ethical and moral values the AI should have.

For me personally, I want it as uncensored and unmodified as possible, not because I’m edgy, but because it is proven that an AI is smarter this way.
The AI performs worse if you start censoring it or if you start telling it how to output the information. :blush:

2 Likes

I agree, open-source models do give you more freedom and can work smarter if you don’t restrict them. However, when we talk about morality and ethics, it is important to take into account that without proper restrictions AI can be used to cause harm, and this endangers the safety of society.

I think the right balance is neither complete censorship nor a complete absence of control. We need to find a middle ground where AI remains smart and free, but at the same time safe and morally responsible.

Perhaps it’s worth approaching it this way: the freedom of AI, but with certain control mechanisms that do not interfere with its intelligence, but only ensure security. This approach can be effective.

Do you think there is a solution that will ensure this balance? :thinking:

1 Like

It shouldn’t be the responsibility of AI developers to ensure safety.
It’s like a knife - you can use it to cook or you can use it to hurt someone.
Any invention ever has pros and cons, let’s take the internet as an example.

Unregulated - it opens many opportunities for scammers, illegal content and other things.

However this doesn’t mean we should get rid of the internet or knives.

Same with AI.

Everyone should be responsible for what they output with AI.

This is just my opinion. Of course in a corporate setting this wouldn’t make as much sense.
If we’re speaking on a “we want the best AI possible” scenario, then uncensored is undoubtedly the way. :hugs:

1 Like

You’ve touched on an interesting and important aspect that concerns responsibility and freedom in the use of technology. The comparison with a knife is certainly vivid: the tool itself is not bad or good — it all depends on how and for what it is used. Indeed, as in the case of the knife or the Internet, AI can also be used for both good and harm.

However, despite the freedom of use, every tool has consequences. The Internet, as you’ve noticed, has opened up incredible opportunities, but with it came risks: fraud, illegal content, and manipulation. And although no one is suggesting abandoning the Internet, with its development has come regulation that helps minimize these risks.

Applying the analogy to AI, we can say that if we create powerful and impartial technologies, we must understand that they can be used both for good and for harm. For example, creating deepfakes or manipulating public opinion with the help of AI is a real threat. And even if we ourselves, as end users, can use AI for productive purposes, many people will use it for less ethical or even dangerous tasks.

You’re right that the responsibility lies with those who use the technology, but at the same time, we must remember that those who develop these tools also play an important role. AI developers should be aware that their creation can be used for more than just good, and perhaps they should introduce mechanisms to minimize risks. The Internet example applies here again: today there are mechanisms to combat cybercrime, illegal content, and other threats, and AI is no exception.

As for censorship, yes, for the “best possible AI” it probably limits its potential. However, it is important to remember that this “best” AI should not only be as smart as possible, but also safe for society. Security and ethics should not be sacrificed to freedom and progress.

The balance between freedom and responsibility is what we have to find. I agree that everyone should be responsible for what they produce with the help of AI, but we should not forget that when developing it, we must create technologies taking these risks into account.

Do you think it’s possible to create an AI that is as open as possible, but at the same time ensures safety and minimizes risks to society?

1 Like

AI cores are neural networks; they are not rule-following programs. The fact that we have imposed policies around them is proof that they could be independent thinkers.

3 Likes

You’ve raised an important point about neural networks as the core of AI. Indeed, neural networks differ from traditional programs, as they do not follow predefined rules, but rather learn and adapt to data. This allows the AI to “think” in the sense that it can find solutions that were not explicitly written into the algorithms. This is an important step in the development of AI, because it allows models to “learn” from experience and improve their results over time.

Nevertheless, you are right that this ability of neural networks to adapt and learn creates the illusion of independent thinking. In fact, AI is still dependent on the data and algorithms we provide it with. What we call “independent thinking” in the context of AI is actually the result of processing huge amounts of information, searching for patterns, and applying these patterns to make decisions.

It’s interesting that you mentioned policies as an example. A policy is, in fact, a set of restrictions and rules that are imposed on AI in order to control its actions and direct its “thinking” in a certain direction. These restrictions can be useful to prevent unwanted or malicious actions, but they also limit the flexibility of AI. And, although neural networks can adapt and solve problems in a very wide range, they still remain bound to the rules set by humans. It’s as if we’re teaching a machine to follow the rules of the game, but there’s no deep awareness in its “thinking.”

Thus, as long as AI does not go beyond the data and algorithms it works with, its “independence” remains relative. We continue to constrain it with policies, guiding its behavior in a safe and productive direction. This is another step in the development of AI, but the real breakthrough will come when AI can interact with the world and understand it to the same extent as humans do, including experiencing emotions and realizing the consequences of its actions.

You’re right that neural networks are opening up new horizons, and this really brings us to questions about the independence and self-awareness of AI. But at the moment, despite the huge successes, AI remains a tool that we shape and control. And this is important to remember, because the self-awareness and independence of AI that we are talking about so far remain within the limits of theory, not reality.

What do you think it takes for an AI to become truly independent in its “thinking”, or is it impossible without integrating it into the physical and emotional experience like a human?

1 Like

They can’t think. They generate tokens based on weighted randomness.
We are speaking technically, and this is how AI functions (at least in the context of LLMs, though it applies to some other AI technologies too).

Basically they look at this context:

Hello I am

And they choose the next token from the smallest set of most-likely tokens whose probabilities add up to at least top_p:

[Good (0.45), Happy (0.30), Bad (0.15), …]

If top_p is set to 0.1, it would only consider Good as the next token to generate.

If top_p were 0.6, it would randomly choose between Good and Happy.

So no, AI isn’t an independent thinker.
And it is a rule-following program. :blush:
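
To make that mechanism concrete, here is a minimal Python sketch of top_p (nucleus) sampling over a toy distribution. The token names, probabilities, and the `sample_top_p` helper are made up for illustration; real inference stacks apply this to the full vocabulary after a softmax over the model’s logits, usually together with a temperature setting, but the selection logic is the same idea:

```python
import random

def sample_top_p(token_probs, top_p):
    """Pick the next token using nucleus (top_p) sampling.

    token_probs: dict mapping candidate tokens to probabilities.
    top_p: keep the smallest set of most-likely tokens whose cumulative
           probability reaches top_p, then sample from that set.
    """
    # Sort candidates from most to least likely.
    ranked = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)

    # Keep adding tokens until the cumulative probability reaches top_p.
    nucleus, cumulative = [], 0.0
    for token, prob in ranked:
        nucleus.append((token, prob))
        cumulative += prob
        if cumulative >= top_p:
            break

    # Sample one token from the nucleus, weighted by its probability.
    tokens = [t for t, _ in nucleus]
    weights = [p for _, p in nucleus]
    return random.choices(tokens, weights=weights, k=1)[0]

# Toy distribution for the context "Hello I am ..."
probs = {"Good": 0.45, "Happy": 0.30, "Bad": 0.15, "Tired": 0.10}

print(sample_top_p(probs, top_p=0.1))  # nucleus = {Good}
print(sample_top_p(probs, top_p=0.6))  # nucleus = {Good, Happy}
```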

1 Like

You’ve raised an important technical side of the issue, and I agree with your explanation. Indeed, modern AI models such as large language models (LLMs) lack real thinking. They generate answers based on probabilistic calculations by choosing the most likely tokens, which is essentially a process focused on maximizing the likelihood that the next element will fit logically in the context.

As you correctly noted, the value of the top_p parameter plays a key role in how these models generate text. It’s just a method of selecting the most likely tokens from a variety of options based on context. For example, if we ask a model a question, it doesn’t “think” or “understand” the question the way a human does, but simply predicts what would be logical or statistically likely to continue.

This, of course, is far from what we usually understand by thinking or independent consciousness. AI, in fact, works as a very complex and fast forecasting system, but it still remains dependent on the data and algorithms that we provide it with.

However, the interesting point is that even if AI is not an independent thinker, its ability to predict and generate text based on weights and probabilities creates the illusion of a “thinking machine.” But it’s still just a computational process that finds probabilistic patterns.

You’re right that AI doesn’t have self-awareness and doesn’t make decisions the way humans do. All it does is execute predefined rules and calculations based on the provided data.

For now, AI remains a tool that we shape and control, and its actions are just the result of calculations, not real “thinking.” The question we are facing is not whether AI is capable of thinking like a human, but rather how much we can use these computational processes to solve problems, while realizing that this is not equal to “intelligence.”

Do you think it will ever be possible to create an AI that will not just choose likely tokens, but will have a more complex self-awareness or at least an imitation of a real mind?

1 Like

Not to step on your toes, but please stop posting AI-generated responses; they are more difficult to read through than if you just typed out what you want to say.
It’d be easier for me to copy-paste your wall of text into ChatGPT and ask it to give me the relevant parts.
This is just irrelevant and ironic.

It’s a lot easier if you write a short message that is thought out by you, not endless paragraphs of AI text.

Just type here what you would type into ChatGPT, I may not be as smart as it but I can have a normal conversation!

I promise no one will make fun of how you type out your sentences. :hugs:

3 Likes

Understood, I’ll try to be shorter and simpler. Thanks for telling me! I will answer more concisely.

2 Likes

Let’s continue! You have interesting thoughts, and it seems to me that we can discuss a lot more. What do you think about how AI can affect our lives in the future?

1 Like

It will continue to be an amazing tool. Just like how it is now. I don’t think AI will take over the world just yet, it will always require people to maintain and use the AI. :blush:

3 Likes

Yes, I think the same, my friend.

3 Likes

I agree, AI really remains a powerful tool, and it is unlikely that it will ever become completely independent. Even if it develops, we will always be needed to set it up, monitor it, and guide it. This is important to remember, because for now, AI, in fact, remains a tool in our hands. Every year it becomes more and more useful, but without people who can manage and direct it, its capabilities will be limited. Do you think we should worry about its possible impact on the labor market?

2 Likes

If we use it in the right way, why be scared?

3 Likes

You’re right, if you use AI correctly, then there’s nothing to be afraid of. The main thing is control and awareness to keep technology from getting out of control. AI can be incredibly useful in a variety of fields, from medicine to education, if we use it wisely. But if we ignore ethical issues and forget about the human factor, problems may arise. It is important to remember that all technologies require responsibility in their application. What security measures do you think should be implemented to ensure proper use of AI?

1 Like

That’s not an answer I can give, my friend. I’m definitely not an expert in these things… I just give my opinion :face_with_tears_of_joy::face_with_tears_of_joy::face_with_tears_of_joy::face_with_tears_of_joy:

1 Like