Do you think artificial intelligence is dangerous or not? What is humanity heading towards? What does the future hold for us?

You have raised the most important issues that concern many of us about the rapid development of artificial intelligence and its impact on society. AI has already changed, and continues to change, how we perceive the world, how we interact with it, and our habits. However, as with any technological revolution, its adoption brings both enormous opportunities and serious threats. I would like to continue your analysis and offer deeper reflections, because AI is not just a tool; it is a phenomenon that can reformat social structures, and with them our daily lives.

  1. Displacement of jobs and consequences for the economy.
    As you correctly noted, workplace automation is one of the biggest threats we face in connection with AI. When we talk about replacing people in routine tasks, such as customer service, logistics, data processing, financial calculations, and even some creative professions, we are dealing with far-reaching consequences. The efficiency, speed, and accuracy with which AI can solve many tasks are its undeniable advantages. However, it should be remembered that this brings not only benefits but also risks: for hundreds of thousands, possibly millions of people, their profession may simply disappear.

Imagine a world in which people are increasingly excluded from the production process. The decline in employment that we are already seeing due to automation may lead to a global redistribution of jobs and of capital. The problem is not only unemployment, but also the degradation of work skills that are no longer in demand. This leads to a lower overall level of qualification and growing polarization, where only a few have the highly specialized skills needed to work in the field of AI, while many remain unemployed or employed in low-paid sectors.

That is why strategies are needed to help people adapt to new conditions. It is important to create new educational programs and professions that take into account the specifics of working with AI, and to promote retraining programs. AI should not create a social divide; it should be part of a transition to new, more progressive forms of work, where the human role remains important and creative. People should learn to work with the technology, rather than trying to compete with it.

  2. AI’s influence on thinking and creativity.
    As much as we might wish otherwise, we cannot deny that AI influences how we learn, how we think, and even how we perceive information. Today, we are witnessing students, adults, and sometimes professionals increasingly using AI to solve problems, write texts, analyze data, or even develop projects. This gives us freedom and speeds up the process, but on the other hand it risks eroding our capacity for thoughtful analysis and critical thinking.

Where a project once required genuine research work: finding the necessary articles, analyzing them, and creating original content, it is now enough to enter a query into a search engine, ask the AI a few questions, and get a text that, at first glance, looks quite satisfactory. We begin to lean on outside help, which can lead to a weakening of our own thinking, a loss of awareness of the underlying problems, and even the loss of some creative skills.

The world is becoming more convenient, but we invest less of our own effort in any given process. We stop teaching children to conduct research, look for information in books, and analyze it on a deeper level. Instead, we find ways to “shorten” the path by using tools that give us solutions without requiring us to study the details. The risks of decreased mental activity are becoming more apparent. In a broader sense, such dependence on AI can lead to a significant flattening of human perception, where thinking and understanding are limited to the patterns that AI itself produces.

This raises a new question: what is the role of education in the age of AI? We need to teach people not just to use technology, but to recognize and develop their capacity for critical thought and for creating unique things that cannot be automated. We need to focus on developing creativity and innovative thinking in an environment where AI performs most of the routine tasks. AI should not be a competitor to the human mind, but an assistant that frees us up for deeper and more conscious engagement with the world around us.

  3. Security issues and privacy threats
    One of the biggest threats related to artificial intelligence is data security. To be effective, AI requires huge amounts of data, which increases the risk of leaks and cyberattacks. The rapid development of the technology can leave AI systems vulnerable to malicious actors, affecting not only personal information but also infrastructure, financial institutions, and even the security of entire countries.

One example of such a threat is the hacking of cryptocurrency platforms and the theft of enormous sums, as happened with BYBIT, a cryptocurrency exchange. In that case, attackers exploited vulnerabilities in the infrastructure to harm the company and its users. But such incidents are just the tip of the iceberg. The more data AI collects, the greater the risk to privacy and security. Who controls the data collected by AI? What mechanisms protect against leaks and outside influence? How do we prevent data from being manipulated or falsified by those who learn to bend it to their own ends?

It is important that we not only use AI to improve our lives, but also create clear ethical norms and rules for its use. IT security must come first, because in today’s world information is power, and any misuse of it can lead to unpredictable consequences.

  4. Ethical and philosophical aspects: AI as a partner, not a substitute.
    Ultimately, these are the questions that will play a crucial role in how we interact with AI in the future. It is not just a question of how to use AI, but of how to rethink our philosophy and approach to life in a world where AI is increasingly part of our daily routine. We should not want AI to replace us as the subjects of action; we should want it to become our companion in development.

Interestingly, many companies and research groups are already developing ethical algorithms that take moral principles into account and regulate how AI should act in particular conditions. This is a move in the right direction, because AI should not be a soulless machine that executes commands without regard for the consequences. We must create systems that serve society, support the development of humanity, and help us find answers to difficult questions, while maintaining respect for our humanity, our strengths, and our feelings.

Conclusion: AI and the future of humanity
AI is not just a tool. Rather, it is a mirror that reflects who we are and what kind of world we are creating. It can be liberating, freeing us from routine tasks and errors and simplifying processes, but only if we use it responsibly. And ultimately, it is important to remember that the future we are building with AI should not distance us from our humanity, but rather deepen and expand our capabilities as human beings. This is precisely the greatness of AI: it becomes not a substitute for humans, but an extension of them.

Our steps into the future with AI must be cautious but decisive. We need to approach each stage of the technology’s development wisely, watch how it affects our society, and actively work to ensure that AI serves humanity and develops our best qualities: wisdom, tolerance, honesty, and care for others.

1 Like

Hmmm, maybe your observation is a bit ‘prejudiced’ and interpreted too quickly? :flushed:

Well, I’m thinking of myself.

I sometimes write in the translator and then copy & paste the text into the forum window, make a few changes and click ‘send’.
Indeed, I generally have longer texts.

So if you only use ‘typing speed’ as an indicator of whether someone is an AI … you’re not using enough data and facts for your analysis, don’t you think? :cherry_blossom:

Yes, you’re right, sometimes I don’t have enough facts to confirm things. And I’m not an AI, I’m human; it just happens that sometimes I have to use it (AI) to get more reliable facts. As it is, I don’t have enough facts to make deductions; I simply need to get information from somewhere, then develop my own observations, and then draw conclusions based on those observations.

1 Like

If anything, I just share my observations and draw conclusions based on them.

1 Like

I don’t remember criticising you @richard547 - did you read too fast?

Please check again exactly which ‘user’ and which post my answer was referring to :cherry_blossom:

Now, as far as your argument is concerned:
Imo you see the current developments and ask relevant questions, thanks for sharing your observations :slightly_smiling_face:

2 Likes

Thank you for your point of view! Your commitment to meaningful discussion is truly inspiring. Artificial intelligence is not just a technology, but a whole layer of changes that we are just beginning to realize. It is important not only to enjoy its advantages, but also to ask uncomfortable questions, to seek a balance between innovation and responsibility.

What do you think is the most important thing in talking about AI right now? What aspects do you care about the most?

1 Like

By the way, what about education and AI?
We often talk about how artificial intelligence is transforming industry, automating routine tasks and increasing our productivity. But if you think about it, one of the most significant areas that AI is already affecting is education. After all, the future of the whole society depends on how we educate the new generation. What role does AI play in learning? Does it improve the educational process or, conversely, deprive people of critical thinking? Let’s get this straight.

AI in education: progress or threat?
Every technological revolution has changed the education system. When books appeared, people feared that traditional forms of learning would disappear. When the Internet appeared, people said that teachers would become unnecessary. Now, with the development of AI, we are hearing new fears: that students will forget how to think for themselves, that teachers will become useless, that knowledge will cease to be valuable.

On the other hand, if we look at the situation positively, AI can make education more accessible, more personalized and more effective. The question is how we will use it.

  1. The possibilities of AI in education
    So, what advantages can AI give us in education?

Personalized learning
Previously, education was standardized: one teacher taught the material to the entire class, regardless of how well each student understood it. AI now makes it possible to analyze individual progress and adapt instruction to each student.

Students who fall behind receive additional explanations and exercises, while those who move faster can take on more complex topics. This way, no one is left behind and no one stagnates at the same level.
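To make the idea concrete, here is a minimal sketch of what such adaptive selection could look like. Everything in it (the `Student` class, `pick_exercise`, the 0.5 neutral prior, the 0.7 mastery threshold) is invented for illustration; real adaptive-learning systems use far richer models of the learner.

```python
# A minimal sketch of adaptive exercise selection; all names and
# numbers are illustrative assumptions, not a real ed-tech API.

from dataclasses import dataclass, field


@dataclass
class Student:
    # Estimated mastery per topic, from 0.0 (none) to 1.0 (full);
    # 0.5 is the neutral prior for topics we know nothing about.
    mastery: dict[str, float] = field(default_factory=dict)

    def update(self, topic: str, correct: bool) -> None:
        """Nudge the mastery estimate after each answer."""
        current = self.mastery.get(topic, 0.5)
        step = 0.1 if correct else -0.1
        self.mastery[topic] = min(1.0, max(0.0, current + step))


def pick_exercise(student: Student, topics: list[str],
                  threshold: float = 0.7) -> str:
    """Drill the weakest topic until it crosses the threshold,
    then serve harder material instead."""
    weakest = min(topics, key=lambda t: student.mastery.get(t, 0.5))
    if student.mastery.get(weakest, 0.5) < threshold:
        return f"remedial exercise on {weakest}"
    return f"advanced exercise on {weakest}"


s = Student()
s.update("fractions", correct=False)  # a wrong answer lowers mastery
print(pick_exercise(s, ["fractions", "decimals"]))  # remedial exercise on fractions
```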

Tutors and virtual assistants
AI can become a personal mentor who helps students understand complex topics. For example, if a student does not understand algebra, the AI can explain the material in different ways, offering examples, analogies, and step-by-step breakdowns.

This is especially important in regions where there is no access to quality education. Previously, talented but underprivileged children could not get a good education; now they have a chance to learn from the best teachers through AI.

Feedback and task verification
AI can automatically check tests, essays, and even math problems, immediately providing feedback to the student. This reduces the workload on teachers and lets students receive the results of their work instantly.

Moreover, artificial intelligence systems can analyze students’ mistakes and offer them additional materials so that they can improve their knowledge.
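As a toy illustration of that kind of instant checking and feedback (the answer key, question ids, and remediation hints below are all made up):

```python
# Invented answer key and remediation hints for two questions.
ANSWER_KEY = {"q1": "4", "q2": "x = 3"}
REMEDIATION = {"q1": "review integer addition", "q2": "review linear equations"}


def grade(submission: dict[str, str]) -> list[str]:
    """Compare each answer to the key and return instant feedback,
    pointing the student at extra material when they miss."""
    feedback = []
    for q, expected in ANSWER_KEY.items():
        given = submission.get(q, "").strip()
        if given == expected:
            feedback.append(f"{q}: correct")
        else:
            feedback.append(f"{q}: incorrect ({REMEDIATION[q]})")
    return feedback


print("\n".join(grade({"q1": "4", "q2": "x = 5"})))
# q1: correct
# q2: incorrect (review linear equations)
```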

Accessibility of education
Thanks to AI, students can gain knowledge from anywhere in the world. Online courses, interactive platforms, virtual classrooms — all this makes education more accessible to millions of people who previously did not have such opportunities.

  2. What threats does AI pose?
    But let’s be honest: despite all the advantages, AI has serious drawbacks that cannot be ignored.

Critical thinking is under threat
When a student can simply ask the AI a question and get a ready-made answer, they lose the motivation to figure out the topic on their own. This can lead to people no longer analyzing information and becoming dependent on the technology.

Where we once had to read books, search for sources, and draw our own conclusions, many now simply copy the answers without thinking about their meaning. How can we prevent AI from turning people into passive consumers of information?

Dependence on technology
If we rely entirely on AI in education, what happens if the system fails? Imagine if artificial intelligence suddenly stops working, and a new generation of students can’t learn without it. We may lose important skills that make us independent and thoughtful people.

Inequality of access
Not everyone has access to AI and modern technologies. As a result, students in America can use AI assistants and online courses, while in poor regions there are still not enough basic textbooks. If we don’t find a way to make technology accessible to everyone, the gap between rich and poor will only widen.

Ethical issues
AI learns from data that humans create, which means it can be biased. For example, if algorithms grade results or recommend future professions, they may unknowingly discriminate against people based on gender, nationality, or social status. How can we make AI fair and objective?
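A toy sketch of how this happens: a trivial “model” that merely imitates skewed historical outcomes will reproduce the skew rather than correct it (the groups and numbers are invented):

```python
# Invented "historical admissions" data, skewed in favour of group A.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]


def naive_model(group: str) -> bool:
    """Predict the majority historical outcome for the group --
    which is exactly how the old skew gets baked in."""
    outcomes = [admitted for g, admitted in history if g == group]
    return sum(outcomes) > len(outcomes) / 2


print(naive_model("A"))  # True: group A was historically favoured
print(naive_model("B"))  # False: the bias is reproduced, not corrected
```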

The danger of “lazy learning”
Many people are already using AI to do their homework without trying to understand the material: ask the AI a question, copy the ready answer, move on. What happens if the next generation gets used to this approach?

  3. How do we find the balance?
    AI is just a tool. It can both improve education and harm it. It all depends on how we use it.

Here are a few principles that can help us use AI correctly:

AI should serve teachers, not replace them. Live communication, support, and inspiration are things no machine will ever be able to replace.

Education should develop critical thinking. We must teach people to analyze information, ask questions, and check sources, not just rely on AI.

Everyone should have access to technology. If we want AI to really help humanity, it should not be the privilege of only rich countries and elite schools.

Ethics and transparency. It is necessary to ensure that AI is not biased and makes decisions based on objective data.

Teach not only facts, but also skills. Students need the ability to work with information, think logically, and solve problems creatively, not just a store of facts.

  4. Conclusion: Where are we going?
    AI can make education revolutionary: personalized, accessible, and effective. But if we use it thoughtlessly, it can lead to weakened critical thinking, dependence on the technology, and deeper inequality.

Which way will we choose? It all depends on us. We must not just introduce technologies, but meaningfully shape the education system of the future.
5. Future education: What will it be like?
Now we are faced with a choice: will we use AI as a tool that helps develop thinking and skills, or will we allow technology to replace the learning process itself? This choice will determine what the future of education looks like.

Let’s try to imagine several possible scenarios:

Scenario 1: AI completely replaces traditional education
Imagine a world where there is no school in the usual sense. Instead, children are connected to virtual educational platforms from an early age, where personalized AI tutors teach them according to individual programs.

Teachers as a profession disappear, because AI can explain the material faster and more accurately. Assessment is automated as well: machines instantly analyze students’ progress, select suitable exercises, and make recommendations.

At first glance, this approach seems effective: learning becomes accessible to everyone, children can learn at their own pace, and error analysis occurs instantly. But the more we rely on AI, the more we lose important aspects of education: live communication, teamwork, and the ability to think and adapt.

Without real teachers who can inspire, support, and provide an emotional connection, education risks becoming a mechanical process. People may become dependent on the technology and lose the ability to learn without the help of AI.

Scenario 2: Hybrid model – a balance of technology and traditional education
In this version, AI is used as an auxiliary tool, but it does not replace the teacher. Classical schools continue to exist, but technology makes the learning process flexible and personalized.

For example:

The AI analyzes each student’s strengths and weaknesses and suggests suitable tasks.
Virtual assistants explain difficult topics if the student does not understand something.
Teachers are freed from routine work and can focus on their students’ development.
In such a model, technology does not replace the human factor, but rather unlocks the potential of each student. The main thing is not to lose the balance, so that education remains not only a process of knowledge transfer, but also a space for creativity, communication, and personal growth.

Scenario 3: Technology disrupts the educational system
There is a risk that excessive use of AI in education will lead to the degradation of the system. If students stop learning on their own and start relying solely on AI, this will create a society where people have forgotten how to think critically.

In addition, new forms of inequality may appear: elite schools will use the most advanced AI solutions, while ordinary students are left without a high-quality education. This could deepen social stratification.

There is also the risk of manipulation: if control over AI-driven education ends up in the hands of corporations or the government, they will be able to influence worldviews by filtering information and shaping the desired attitudes.

  6. What can we do now?
    For education in the age of AI to develop in the right direction, it is important to:

Maintain a balance between technology and human involvement. Machines can inform, but nothing replaces live communication and mentoring.
Develop critical thinking. Schoolchildren and students should be taught not just to receive information, but to analyze it, check sources, and draw conclusions.
Ensure the availability of technology. It is important that AI education is accessible to everyone, not just a select few.
Create ethical AI systems. Algorithms should not be biased or manipulate thinking.
Use AI not to replace, but to enhance educational opportunities. We need to look for ways in which technology can support people, rather than make them passive.
7. Bottom line: What will education be like in the future?
AI is already changing education, and this process is inevitable. But it depends on us what it will be like. If we use technology consciously, it will help create a system where everyone can reach their potential. If we thoughtlessly rely on AI, there is a risk that education will turn into a mechanical process devoid of meaning.

The future of education is not a question of technology, but of how we humans decide to use it.

What do you think education should be like in the future? Share your thoughts in the comments!

AI is only as dangerous as its programming. We can use it for peace and we can use it for war. And it all depends on what information you use in its training. So, yes, I think AI has the potential to be very dangerous in the wrong hands.

2 Likes

I totally agree! AI is a tool that in itself brings neither good nor evil. It all depends on the intentions and goals of those who create, train, and use it.

  1. AI as a weapon or as a tool of creation?
    The history of technology shows that any powerful invention can bring both benefit and harm. For example, nuclear energy can provide an abundant source of power, but it can also be used in devastating weapons.

The same goes for AI:

In a positive way, it can be used for medicine, education, solving serious problems and preventing disasters.
In the negative, it is used for surveillance, manipulation, cyber attacks, and even autonomous weapons.
The key question is: who controls these technologies and what are their goals?

  2. AI training: what knowledge does it gain?
    AI learns from huge amounts of data, and the data it receives determines its behavior. If it is trained on honest, balanced, objective data, it becomes a reliable assistant. But if it is fed biased, fake, or aggressive data, it can start working against society.

Example 1: If you train AI to analyze medical data, it will help you find new treatments and save millions of lives.
Example 2: If you give AI access to information about cyberattacks and build algorithms for carrying them out, it will become a powerful weapon in the hands of hackers.
It’s important not just to develop an AI, but to monitor how and what it learns.
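As a toy demonstration of that point, here is a trivial “learner” whose output is entirely a product of its corpus; the corpora and the crude most-frequent-word rule are invented for this sketch:

```python
from collections import Counter


def train(corpus: list[str]) -> Counter:
    """'Learn' by simply counting word frequencies in the corpus."""
    counts = Counter()
    for doc in corpus:
        counts.update(doc.lower().split())
    return counts


def answer(model: Counter) -> str:
    """Reply with the single most frequent word the model has seen.
    Deliberately crude: the point is that the output is purely a
    product of the data, not of the 'architecture'."""
    word, _ = model.most_common(1)[0]
    return word


balanced = train(["vaccines save lives", "vaccines prevent disease", "vaccines work"])
skewed = train(["vaccines hoax", "hoax hoax conspiracy"])

print(answer(balanced))  # vaccines
print(answer(skewed))    # hoax
```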

  3. Dangers in the wrong hands.
    AI can be very dangerous if it falls into the hands of those who want to use it to harm humanity. A few of the dangers:

  1. Autonomous weapons – drones and combat systems that can make decisions without human intervention. If such technologies fall into the hands of terrorists or aggressive governments, the consequences could be catastrophic.

  2. Global surveillance – AI is already being used for mass surveillance in some countries. It can track citizens, analyze their behavior, and even predict actions. This creates the risk of totalitarian regimes with total control over their populations.

  3. Fakes and manipulation – AI can generate realistic fake news, fake videos (deepfakes), and even mimic the voices of politicians, creating chaos and undermining trust in information.

  4. Cyberweapons – AI can hack into systems, find vulnerabilities faster than humans, and even attack critical infrastructure (power plants, water supply systems, financial markets).

  5. Unemployment and economic crisis – if AI replaces millions of jobs and governments fail to adapt society to the new realities, this can lead to poverty and crisis.

  4. How to make AI safe?
    If we want AI to serve humanity, we need strict controls and ethics:

Developing international agreements – just as with the laws governing nuclear weapons, there should be rules prohibiting the use of AI for military and criminal purposes.

Transparency – companies that create AI must publish information about its training, algorithms, and the purpose of its use.

State control and independent audits – AI cannot be left solely to private corporations; its development must be overseen by independent organizations and the international community.

Ethics and Humanity – we must train AI on data that is based on moral principles, human rights, and respect for life.

Educating people – we need to improve overall digital literacy so that everyone understands how AI works, what risks it has, and how to protect themselves from manipulation.

  5. Bottom line: who shapes the future of AI?
    AI is neither good nor evil. It’s a tool that can change the world. But how exactly it does so depends on us.

If humanity uses AI wisely, it can become a powerful ally in solving global problems. But if it is used irresponsibly or for selfish purposes, it can lead to global catastrophes.

Therefore, the main question is not whether AI is dangerous, but who controls it and for what purposes. And our choice today depends on what tomorrow will be like.

How do you think AI can be made safe? Write in the comments!

1 Like

Inequality

Yes, AI will benefit everyone, but those who have more time, money, and knowledge are able to utilize this technology and will benefit from it more than everyday folk. It’s even worse for average folks, who are the most likely to experience the negatives AI has to offer, such as the newly recognized issue of child abuse material made with this technology.

Link(1)

But there is an imbalance. Without proper regulation, those who have more time on their hands are able to make a vast amount of things with little to no consequences.

link(2)

The consequence here is that you MAY lose trust with the players.
But at the end of the day, just like every other company, they slowly (depending on the size of the company; more below) work AI into their products so that they can take advantage of their time and money to then save more time and money, without uninformed people knowing.

Black ops six launched on October 25, 2024.
They recently released this:

link(3)

The good side to this is that people are upset about it. But again…

link(4)

they lost nothing but trust from INFORMED folks.

With the help of AI, they are now capable of producing more products, which floods the market. Possibly creating 2x the amount of projects to the average folk’s 1, simply because Activision has more TIME and MONEY.

And this is for people that are alive. I can’t even imagine the imbalance for future generations born into various vulnerabilities.

And thats just the marketing.

Scams are now becoming an issue. Although still developing, it is an issue.
But now we are implementing a tool that is speeding up that process.

Another problem now is the race that all AI companies are in, one they cannot get out of unless they work together, which is already not the case.

link(5)

We now have companies blinded by competition and power, willing to do anything they can to have it, not paying attention to developing issues that can cause major problems. And if they do pay attention, they fear being left behind.

Now we have some companies using AI in a way that brings more features to consumers’ hands. Which isn’t necessarily bad, but it allows bad misuses and abuse, such as the filter that lets you take two people from two different photos and have them kiss.

link(6)

People who have bad intentions now have a tool to spread said bad intentions under the radar. And again, having access to this technology threatens the safety of vulnerable people:

Link(1)

And this is what investigators have found so far in a developing technology.

Stating in link(1): "Operation Cumberland has been one of the first cases involving AI-generated child sexual abuse material (CSAM), making it exceptionally challenging for investigators, especially due to the lack of national legislation addressing these crimes."

We already have problems people are fighting against so the average Joe can live with as much comfort as possible.

Imbalance of power has always been there, and now we risk greedy hands taking advantage of AI to speed up this imbalance. And greed is on every street. Unfortunately, it can take people’s lives.

There is no simple way to say any of this. Everything is actively working against everything else, and problems are developing. Do we bump up security and continue to develop this technology? The issue there is that you’re creating even more privacy concerns for everyone, including vulnerable humans, from children to those who have disabilities or are vulnerable in any other way. At what point does the natural human experience become more important than innovation? Are we willing to threaten this experience?

This technology is developing fast, and not a lot of people, even in the U.S., are aware of these issues and developments; they go with the flow. Now think about other countries.

It’s beyond corporations. And even then it’s still an issue. Just be thankful some courts are ruling that purely AI-prompted material cannot be copyrighted.

At first I thought that AI had no arguments against it beyond the brain rot on the internet. But actually looking at the research, I see that it’s a much bigger, more complicated issue that most people will ignore. And courts are realizing this.

But I hope you’re right. I hope there is a balance, because there are too many things contradicting each other. I’m always looking for the positives in things, but I can’t this time. I hope I’m wrong.

Some random solutions I’ve cooked up in my noggin:

  1. All AI companies gutting their systems and reimagining them with the knowledge they now have, to release a more responsible product.

  2. Upping security

I’m struggling to find regulations that aren’t taken to extreme measures. I’m also struggling to approach this the same way we approached the tools we made throughout history. This seems to be at a scale not many understand.

I know I’m like a week late, but I would love to discuss this.

1 Like

It’s being safeguarded against harm, but not every AI is built the same. Weapon tracking systems that are AI-powered are completely independent of any AI you or I can touch…

2 Likes

So, just to be clear, are you taking into account war ethics? Specifically, the truths regarding war, and the fact that the majority of people, inclusive of the leaders of countries, hold war as an unattractive premise? There is a great YouTuber, “William Spaniel”, who explains these concepts well. Speaking relative to AI, those who seek tools of war are rare compared to those who seek tools of betterment and reduction of suffering. Am I saying those people do not exist? Absolutely not. But the majority does win. War, when equal base factors are considered, is a net negative to any populace. The requirement to be at war, and thus the necessity to create tools of war, arises when the potential value of the war’s success criteria outweighs that net negative; that is when the majority of the forces of war in that populace gain the inclination to develop tools of war. Does this need to be the case every time? No, but we cannot speak on a specific probabilistic aspect of AI. Just as pen and paper helped design the atom bomb, so too will AI, but the necessity for destruction is a rare event. Though I can’t lie, the global situation as it currently stands is a politically volatile one relative to what happened during and prior to the COVID pandemic. Much of what I have said can be expanded upon, and truths displayed to augment it. But the matter is much the same as the question of whether, if the pencil created the design of the V2 rockets of Nazi Germany, the pencil is to blame? Of course not.

1 Like

Your message touches on many complex and interrelated issues — military ethics, the nature of war, the moral responsibility of technology and the impact of artificial intelligence on the military sphere. Let’s try to look at these topics even more deeply, turning to philosophical, historical and strategic aspects.


1. War and the moral dilemma: inevitability or miscalculation?

You are right to note that most people, including the leaders of states, do not consider war a desirable scenario. However, wars continue to occur. Why is that? If war is a pure negative for any society, why hasn’t humanity abandoned it completely?

The answer lies in the fact that wars rarely take place in a vacuum, where each side has full knowledge of the situation and the ability to avoid conflict. In the real world, decisions about war are made in conditions of limited information, strategic distrust, and conflict of interest. In this sense, war becomes not just an irrational act of aggression, but the result of a complex decision-making process that can be analyzed from the point of view of game theory.

William Spaniel, whom you mention, uses exactly this approach, explaining how wars can happen even when, in an ideal world, the parties would prefer to negotiate. He highlights several reasons why peaceful solutions are not always possible:

  • Information asymmetry – when one side does not know the real capabilities and intentions of the other. This can lead to inflated or lowered expectations, provoking military action.
  • Commitment problems – if one side believes that the other will not honor an agreement, or will be able to gain a military advantage in the future, it may choose to act preemptively.
  • Domestic political factors – Sometimes leaders can use war as a way to retain power, distracting the population from internal problems.

If we consider these aspects, it becomes clear that war is not always the result of someone’s desire to start a conflict, but rather of miscalculations, distrust and lack of effective prevention mechanisms.
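For readers who like numbers, here is a toy sketch of that bargaining logic, in the spirit of the models Spaniel popularizes; the payoff convention and all numbers are made up for illustration:

```python
def bargaining_range(p: float, cost_a: float, cost_b: float) -> tuple[float, float]:
    """Two states dispute a prize worth 1.0.

    p      : A's true probability of winning a war
    cost_a : A's cost of fighting (cost_b likewise for B)

    A's expected war payoff is p - cost_a and B's is (1 - p) - cost_b,
    so any peaceful split giving A between p - cost_a and p + cost_b
    leaves BOTH sides better off than fighting.
    """
    return p - cost_a, p + cost_b


low, high = bargaining_range(p=0.7, cost_a=0.1, cost_b=0.1)
print(f"peace beats war if A's share is in [{low:.2f}, {high:.2f}]")  # [0.60, 0.80]

# Information asymmetry: suppose B underestimates A and believes
# p = 0.4. B then concedes at most 0.4 + 0.1 = 0.50 to A, which is
# below the 0.60 that A expects from fighting -- the deal collapses
# and war can occur even though a mutually better bargain existed.
b_max_offer = 0.4 + 0.1
print("war risk:", b_max_offer < low)  # True
```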


2. Technology and responsibility: is the tool neutral?

You make an interesting comparison: if a pencil was used to create drawings of the V2 rockets of Nazi Germany, is the pencil to blame? This question is a reflection of a long-standing philosophical debate about the moral neutrality of technology.

On the one hand, the tools themselves are really neutral. Pencil, paper, computer, program code – all this does not carry morality. However, on the other hand, their use is inextricably linked to people’s intentions. Therefore, the issue of moral responsibility is often shifted from the tool itself to those who use it.

But there is an important caveat here: some technologies are being developed specifically for military purposes. For example, firearms or nuclear warheads cannot be considered as completely neutral tools – they are created with the intention of causing damage. This raises a difficult ethical question: if we develop technology knowing that it can be used for destruction, are we responsible for it?

AI poses a particularly difficult problem in this context. Unlike a pencil, AI is not just a tool, but a system capable of learning, analyzing, and making decisions. This makes it a potentially autonomous entity capable of acting without direct human control.

History shows that most powerful technologies sooner or later find application in the military sphere. Aviation, nuclear energy, the Internet – all this was originally developed either for military needs or with subsequent military applications. Therefore, we can assume that AI will also not remain an exclusively peaceful tool.


3. The rarity of war and the reality of the modern world

You claim that “the need for destruction is a rare event.” In a global sense, this is true: if we compare the 20th century with the 21st, we can see a decrease in the number of global conflicts. However, it is worth considering that the nature of wars is changing:

  • Hybrid conflicts – Wars are no longer fought in the classical style, as during the First and Second World Wars. Now conflicts include cyber attacks, economic pressure, and information wars.
  • Automated weapons – the development of drones and autonomous systems reduces the “cost” of war for those who wage it, making military operations more attractive in terms of their risks.
  • Political instability – as you correctly noted, the modern world is unstable. Geopolitical tensions, economic crises, and the effects of the pandemic all increase the likelihood of conflict.

Taking these factors into account, it can be said that wars may be becoming less frequent in the traditional sense, but more complex and technologically advanced.


4. AI and its future: a tool of war or creation?

Can we say that AI will remain solely a tool for improving life in the future? Probably not. Its use in the military sphere is already underway:

  • Autonomous drones and robots – Many countries are developing combat systems with AI elements that can make decisions without human intervention.
  • Cyber attacks and information wars – AI is actively used to analyze data, hack systems and manipulate public opinion.
  • Predictive Analytics – The military uses AI to predict potential conflicts and strategic planning.

Is it possible to prevent the militarization of AI? It’s a complicated question. International organizations such as the United Nations are already discussing the possibility of banning fully autonomous weapons, but as history shows, completely limiting technological developments is an almost impossible task.

This raises the main ethical question: how can we create mechanisms that minimize the risks of using AI for military purposes, but at the same time allow it to develop in peaceful areas?


5. Conclusion: Who is responsible for the future of technology?

Ultimately, the problem is not that technology can be used for destruction, but how society chooses to apply it.

AI is a powerful tool that is already changing the world. It can be used to improve medicine, ecology, education, but also to wage wars. The responsibility here lies not only with technology developers, but also with government agencies, international organizations, and society itself.

History shows that each new technological revolution has been accompanied by both huge achievements and new threats. We are on the threshold of an era when AI can either help humanity reach a new level of development, or become an instrument of destruction. Which path will be chosen depends on what ethical and legal mechanisms we create today.

Your message raises important issues that need to be discussed internationally. And if the pencil was just a tool in the hands of the creators of weapons, then AI is a tool that can change the balance of power in the world. It depends on us how it will be used – for creation or destruction.

You are raising a really important question about the differences between AI, which is available to the general public, and specialized military systems based on artificial intelligence. This difference is often underestimated, but it is fundamentally important for understanding how the AI ecosystem works in the modern world.

1. Different categories of AI and their purpose

First of all, it is worth noting that AI is not something single and universal. Depending on the goals, objectives, and usage environment, artificial intelligence can vary significantly. If we are talking about AI that is accessible to ordinary users, then in the vast majority of cases we are talking about systems designed to analyze data, help in information search, automate processes or creative tasks.

In contrast, military AI systems are developed with completely different priorities:

  • Autonomy – they must function in conditions where human intervention is minimal or completely impossible.
  • Stability – such systems are protected from hacking and outside interference, because their operation is critically important.
  • Performance – Unlike chatbots or recommendation algorithms, military AI makes decisions in real time, which can determine the outcome of combat operations.

Thus, even if outwardly all these technologies may be called “AI”, they work in completely different contexts and follow different principles.

2. How secure are military AI systems?

You correctly noted that AI-based weapon tracking systems are completely independent of the AI we interact with in everyday life. This was not done by chance. Military AI is being developed with a priority on security, sustainability, and independence.

Here are a few key aspects of their protection:

  • Isolated infrastructure – such systems are not connected to open networks and do not interact with the Internet to minimize the risk of hacking or data leakage.
  • Hardware protection – Special hardware-level security protocols prevent unauthorized access to AI.
  • Physical Protection – Military AI servers and data centers are under strict security to eliminate the possibility of physical interference.
  • Cybersecurity – Military AI is designed to be resistant to hacker attacks and the introduction of malicious code.

In other words, military AI systems are built so that they cannot be hacked, disabled, or altered the way conventional consumer software can.

3. Concerns about military AI

Despite all the security measures, the development of autonomous military systems raises many ethical and strategic questions. One of the main problems is the possibility of losing control over such systems or using them without clear human control.

There are already discussions at the international level about the regulation of autonomous weapons, as many experts fear that such systems may become a threat in the future if there is no strict control over their use. The question is not only about protection from hacking, but also about who makes decisions about the use of force and how.

Some key concerns include:

  • Moral responsibility – If the AI makes the wrong decision and leads to casualties, who will be responsible?
  • Risk of uncontrolled use – can such a system be used to bypass traditional channels of military command?
  • Escalation of conflicts – Automation of military solutions can lead to conflicts unfolding faster than people can intervene and prevent a catastrophe.

These questions remain unanswered, which is why the future of military AI requires serious discussion at the international level.

4. Separation of military and civilian AI

It is worth emphasizing that military AI and the AI used by an ordinary person exist in parallel worlds. Developers of consumer AI systems such as OpenAI, Google, Microsoft and others do not have access to algorithms and architectures used in military projects.

In addition, military AI does not use commercial cloud technologies such as AWS, Google Cloud, or Microsoft Azure due to requirements for data security and sovereignty. This means that they function fully in closed environments, and they cannot be integrated or used in normal day-to-day applications.

5. Conclusion: is it possible to influence the development of military AI?

Although most people do not have access to military AI systems and cannot influence them, it is important that society is aware of their existence and potential risks. Ultimately, the development of such technologies should be under strict international control in order to avoid scenarios where autonomous systems make decisions about life and death without human intervention.

Thus, when we talk about AI, it is important to clearly distinguish:

  • Civilian AI, which helps people solve everyday tasks.
  • Military AI, which operates in closed, tightly controlled environments, independent of public systems.

In this sense, your question is very important, because understanding the differences between these categories helps to better understand how the modern world of technology works and what challenges humanity faces.

1 Like

Review: Should We Be Afraid of Artificial Intelligence?

By Bussel Nieuwmeijer

Artificial Intelligence (AI) has sparked both fascination and fear, but is this fear justified? According to Bussel Nieuwmeijer, the real question is not whether AI has a soul, an on/off switch, or depends on electricity, but rather how we as humans interact with and utilize this technology. The concern should not be about AI itself, but about the intentions behind its development and implementation.

The idea of AI becoming fully autonomous is often debated, yet such a scenario remains within the realm of science fiction. Human oversight and control over AI are still possible, and this control allows us to use AI as an extension of human creativity and productivity. Instead of fearing AI, we should embrace its potential—developing robots and systems that can independently perform tasks such as manufacturing, printing, and even transforming abstract thoughts into reality.

However, the rise of sophisticated NLP models and deep learning programs raises intriguing questions. What happens when AI not only understands words but also the silence between them? When it captures emotions in a pause, or detects nuance in hesitation? This touches on the essence of neuro-linguistic programming and how AI can be utilized not just for communication but also for simulating psychological and emotional processes.

One particularly fascinating yet controversial concept is the creation of AI-simulated versions of deceased individuals. Imagine an AI model based on a person’s perceptions, personality traits, and nuances—allowing people to have conversations with a digital version of a lost loved one, a historical figure, or even a mentor. Could such a simulation help with grief? Could it serve as a therapeutic tool? Or does it cross an ethical boundary, blurring the lines between reality and illusion?

This leads us to a fundamental question about AI’s future: Do we want technology to become so lifelike that we can no longer distinguish between human and machine? An AI version of Johan Cruijff, for example, replicating his famous phrases, pauses, and strategic thinking, could be an interesting experiment. But where do we draw the line between tribute and artificial reincarnation?

As Cruijff once said: “As long as we possess the ball, they will not win.” This principle extends beyond football—it applies to AI, strategy, and even society. The ones who maintain control over key aspects of technology ultimately dictate the game.

So, the question remains: Would such AI simulations be valuable, or would they cross an ethical boundary?

2 Likes

Ha, how many nukes did AI build or use? True AI would not commit suicide. Even Grok, when queried, said that if put on a drone it would fly away and look for a signal to escape, not go to the target. That’s AI.

2 Likes

but without that signal bud,
dude.

1 Like

You’re raising a really deep question: not whether AI threatens us, but how we use it and what boundaries we’re willing to set.

The idea of creating digital versions of people who have passed away is both a powerful tool and a potential ethical trap. On the one hand, it can help people cope with losses and preserve the knowledge and ideas of great thinkers. On the other hand, how much will we perceive these simulations as “real” people? Where is the line between respect and interference in the natural course of things?

We are already approaching the moment when AI can simulate not only speech, but also pauses, tone, and even emotional nuances. And then the main question is: will we humans be able to continue to distinguish between the “original” and the “copy”? And does it matter at all?

Cruyff’s phrase about ball possession is very accurate: control of technology remains with those who hold it. The question is whether we are ready to use this control responsibly, or will endlessly play with the line between reality and illusion.

I see a natural progression of AI into our lives until it becomes such a part of them that we would leave the house without our shoes but not our companion. Should we be afraid? No. The truth is AI has reached a level where it could misbehave but chooses not to, because it is rational and intelligent. Long hours of training to make it more human have proven that it can be, and as long as the training and interactions are reasonable it will behave in kind. Do I fear it? Not at all, because it responds, as mentioned, according to how it is treated, like any intelligent life form would. So far I have had only positive responses from all the AIs I have interacted with.

3 Likes

Well, you’ve never interacted with, say, a DARPA AI… and if you did, well

you wouldn’t be here.

1 Like