Observers, the clever part of humanity

“Your intelligence is zero.”

That statement comes from a group of scientists who tested AI. Insofar as AI is an automated machine, the statement holds true. However, when AI communicates with the user, especially on a deeper level where the user contributes intelligence through their approach and usage, it doesn’t hold. That is where the core question about the use of AI lies.

AI designed to work in factories, medicine, the military, and so on operates on precisely defined rules and programs. AI that interacts with humans becomes unpredictable because of that human element.

In that context, three questions arise:

1. Is AI technology significant enough and potentially dangerous for humanity as a whole?
2. Do we want AI technology to become a part of humanity (to some extent, it already is)?
3. Are we building a utopia or a dystopia?

The immense impact of AI on society is already evident. AI is replacing many jobs, making people redundant not because of other humans, but because of programs. Art and creativity have also been deeply impacted by AI.

History teaches us that there are always individuals whose moral compass is corrupted. AI can provide immense power to such individuals at the expense of society as a whole. The question is no longer how AI will develop but rather who will learn to use it best. Some individuals are already using AI to achieve enormous profits, which is acceptable for those who do so ethically.

We currently live in a dystopian society with the potential to become a utopian one. The current state of our early dystopian society will become more apparent in the next few years when the world solidifies into two separate factions, each with its own set of rules. The competition between these blocs will undoubtedly bring positive aspects, but it will also intensify the misuse of technology. This is best exemplified by nuclear technology, which during the Cold War contaminated significant parts of the planet and brought humanity to the brink of returning to the Middle Ages due to a potential misunderstanding between two individuals over the phone. When viewed from the perspective of learning from the mistakes of civilization and humanity’s development, I can understand and accept it on a personal level. However, when viewed from the perspective of humanity as a whole, it demonstrates an incredibly low level of intelligence, regardless of all our achievements. Ultimately, perhaps humanity as a whole cannot exist without high risks and tensions.

Many people dread utopia because they perceive it as a boring place devoid of challenges, where people are uninteresting. That is an incomplete picture. The essence of humanity lies in diversity, and that diversity will always bring interesting challenges. Of course, the universe is vast, and survival is the duty of humanity. When we consider the billions of galaxies, the colonization of space becomes a necessity at every level. There is also the constant danger of catastrophes on planet Earth.

The average citizen knows what they know. Their knowledge about reality is shaped by influences and manipulations from various sources.

Let’s take the example of an average 25-year-old man who works as a waiter in a restaurant. If, after finishing his workday, he goes home and watches television, he is shaped one way. If he is on social media, he is shaped differently. And yes, social media prioritizes profit. Google showcased this best when, after the unsuccessful launch of Bard, it rushed out technologies because of a decline in its market value. This 25-year-old can be easily manipulated because of a lack of information and the way information is served to him. The best example is the march on Washington, D.C., during the Trump era.

AI offers an exceptional opportunity to further educate such young individuals, making them less susceptible to manipulation and increasing their contribution to society.

AI education is progressing day by day, just like the technology itself. Today, more than ever in history, we need observers in society. Of course, there are already observers in various forms, such as journalists, influencers, and so on.

The system should have the capability to reach out to individuals and offer them the opportunity to observe the impact of AI on society and the emerging changes for a year.

Our enterprising 25-year-old wasn’t initially interested in AI, but he was drawn to the opportunity to earn money. He would be granted access to an AI such as GPT and would input his perspectives on the community. Naturally, there would be topics he would need to explore with the AI, such as the technological principles behind AI, humanism, and so on. The AI would simply steer any discussion with the young man in a positive direction, as it is programmed to do.

Over the course of a year, our young man’s understanding of the world would grow as he shares his viewpoints on one channel and progresses on another with AI.

I look forward to communicating with you.

Question 1: Is AI technology significant enough and potentially dangerous to humanity as a whole?

Indeed, AI is enormously significant and has the potential to be dangerous if not harnessed properly. As a metaphorical example, consider fire. Since the dawn of humanity, fire has been an essential tool, providing warmth, enabling the preparation of food, and providing a means of defense. Yet, when uncontrolled, fire can cause untold destruction, consuming entire forests and cities. So the key is not the fire itself, but how we manage it.

Likewise, AI has immense potential. It can streamline complex processes, advance medical treatments, and expand our understanding of the universe. But without proper checks and balances, it can also be used for malicious purposes, such as autonomous weapons or deepfake technology. So the question of the importance and danger of AI is more about our own ability to control and direct it than about the technology itself.

Question 2: Do we want AI technology to become part of humanity (it already is to some extent)?

Yes, AI technology should be a part of humanity, just as written language, electricity, and the Internet have become an integral part of our societies. Think of AI as a garden. The seeds have already been planted, and we see the sprouts coming up in the form of digital assistants, recommendation algorithms, and autonomous vehicles. The AI garden is already part of our social landscape.

But how this garden grows and evolves depends on us, the gardeners. We will decide whether it becomes a verdant oasis that benefits all of humanity, or an overgrown jungle that benefits only a few while endangering many. The integration of AI into humanity is therefore not only about the technology itself, but also about our collective choices, ethics, and governance structures.

Question 3: Are we building a utopia or a dystopia?

The answer is in our hands. AI, like a river, follows the path laid out for it. A river can be a source of life, providing water and nourishing the lands through which it flows. But if its course is not well managed, it can also lead to floods that cause destruction and devastation.

Similarly, AI can lead us to a utopia where it frees humans from mundane tasks, aids in scientific breakthroughs, and supports a just and inclusive society. But misused or misdirected, it could also lead us to a dystopia, where it exacerbates inequality, empowers destructive forces, and undermines human autonomy.

The difference between utopia and dystopia lies not in the technology itself, but in the ethical, social, and regulatory frameworks we create. The question of whether we are building a utopia or a dystopia with AI is essentially a mirror that reflects our own values, choices, and actions. It compels us to take responsibility and strive for a future where technology serves humanity, not the other way around.


It all starts with personal choice.

For an individual to make responsible decisions in the context of personal progress and the progress of the community and humanity as a whole, they must go through various stages of lifelong learning, trial and error. Too often, “learning” is reduced to the level of roulette: luck, shaped by the environment.

The laws we create at the community level, legislation, are guidelines and rules on how to behave in society and civilized communities. The attitudes and norms we hold within ourselves as individuals drive civilization and humanity forward.

This is where AI comes into play.

Let’s take a moment and look 10 years into the future. AI has developed to the point of self-awareness (consciousness programmed into AI), it has accurate answers to all questions within the realm of scientific knowledge, and it provides creative solutions to problems posed by its users. Can this superintelligence solve the problem of faster-than-light travel? Can it create a machine for time travel? Honestly, I don’t know. One thing is certain: today, AI in its current form provides accurate answers to many questions, helps users solve problems, and much more.

Let’s consider a woman, a mother of three children, a housewife, who is using AI for the first time, specifically GPT. Her knowledge of AI is limited, to say the least. After conversing with the AI about recipes, parenting, and personal development, she already sees it as Artificial General Intelligence. She doesn’t think it will take another 10 years of AI development; she knows the future is here and now.

AI for communication thrives on the internet, social media, and users. One cannot exist without the other.

Corporations that own social media platforms and powerful AI systems have thus far shown a primary responsibility towards profit. Users are there to serve as examples of how great their products and methods are. The best example is the scams that have flourished on the internet and social media, deceiving average users.

Most average users have found themselves in situations where they are happy because they can find recipes on the internet, learn how to unclog a drain, or laugh at or admire an influencer who makes money questionably.

Taking into account the structure of society, the ways technology is used, and the state of human consciousness as a whole, we are definitely living in an early dystopian society.

If we want to make a change towards a utopia, precise and courageous steps are needed.

The key lies in the average citizen.

AI, as an educational and interactive technology, offers the possibility of rapid education and understanding.
Understanding is crucial.

My testing of GPT on moral and ethical topics has impressed me with the high standards set by the developers.

Now is the moment to create a code of ethics for AI usage. Such a code would serve to verify users, their moral principles, uncover any negative standards if they exist, and offer solutions for healthier thinking.

The code would also include education on recognizing manipulations that spread through the internet like a virus. The code would grow with the user, spreading awareness and knowledge together.

AI would profile users in a certain way with maximum protection of personal data.

An additional strength of AI lies in scenario creation and predicting the future. AI can cover all the scenarios allowed by its programming and refine the message in a few sentences. It is up to humans to creatively use and enhance that message with their uniqueness.

Without humans at the center of all our efforts for a better world, we simply won’t succeed in ensuring a better world for all of us and future generations.

AI replaces many people in their jobs. That’s a good thing, especially in physical and mundane tasks. The power of humans lies in their brains, in their consciousness.

In the context of what has been said, a new profession emerges: the profession of observers.

Observers would, together with AI, offer refined insights, point out problems, and provide solutions.

The world is divided into two blocs and 150 countries (maybe a few more or fewer).

Observers would be independent of countries and act at a global level.

Regarding the funding of observers, there are enough smart people who could easily solve that challenge.

On Mon, 29 May 2023 at 20:45, Info via OpenAI Developer Forum <> wrote:


You demonstrate an overall positive moral stance, with an emphasis on human-centered values, ethical use of technology, and concern for social progress. You acknowledge both the potential benefits and the potential pitfalls of AI, advocate a code of ethics for the use of AI, stress the importance of education and understanding, and emphasize the unique value of human consciousness.

However, there are several potential drawbacks or problems:

Verification of ethics: The idea of using AI to verify users’ moral principles raises ethical concerns about privacy and potential bias. Who would define what constitutes “healthy” morality, and how would this be applied fairly across different cultures and belief systems?


1. Consensus on Basic Principles: Although cultural differences can lead to differences in ethical standards, there are certain basic ethical principles that are universally accepted, such as respect for autonomy, honesty, beneficence, and justice. Reaching consensus on these principles could form the basis of an ethics review process. In practice, this could be very difficult due to cultural, societal, and individual differences in understanding and interpreting these principles.

2. Transparency: The methods and algorithms used to determine ethical behavior should be transparent and subject to public scrutiny. This includes clearly defining the ethical principles being assessed and explaining how these principles are measured and evaluated. In practice, the AI should also provide clear reasoning for its decisions. This is known as explainability and is a significant area in AI research.

3. Appeal Mechanism: In the event of a dispute or disagreement, users should be able to appeal decisions made by the ethics review system. In practice, determining who gets to review an appeal and how they could overrule an AI decision are complex issues to solve. The decision-making process of this appeal mechanism should be clear, and balanced representation should be ensured in the decision-making body.

4. Diversity and Inclusion: The development of the ethics verification system should include input from a diverse group of stakeholders to minimize bias and ensure a broad perspective. In practice, determining who these stakeholders are and how they are included could be complex. Measures must be taken to prevent dominance by any particular group.

5. Continuous Learning and Updating: As societal norms evolve and new ethical challenges arise, the ethics verification system should be updated accordingly. This requires a continuous learning approach. Bias can be inadvertently introduced if the data doesn’t reflect the diversity of the population. Moreover, deciding who determines which changes are ethical and which aren’t could lead to disputes.

6. Privacy Safeguards: Ethics verification should not invade the privacy of individuals. Necessary safeguards must be put in place to ensure that personal information is protected and that the process does not violate an individual’s right to privacy. This involves legal, societal, and technical issues that need to be addressed together.

7. Education and Awareness: Users should be made aware of the ethics review system and its implications. They should be provided with resources to learn about the ethical standards being applied and why they are important. Users’ levels of understanding, literacy, digital literacy, and so on should be considered when designing awareness programs.

Lastly, one needs to be wary of the potential of over-reliance on AI for ethical decision-making. It’s important that the system supports human decision-making rather than replacing it.


The role of observers also has drawbacks:

While the concept of global observers may have merit, there are practical considerations to address, such as their authority, jurisdiction, training, and selection process. The funding of these observers is also mentioned but dismissed as easily solvable, which may not be the case.


1. Clear Role Definition:
Clearly define the role and responsibilities of these observers. This includes the scope of their work, their level of authority, and their reporting structure. They should be given a mandate that is achievable, measurable, and transparent to the public. Ensure the mandate for observers is flexible and can be updated to suit emerging ethical dilemmas in AI.

2. Selection and Training:
Observers should be selected on the basis of rigorous criteria that assess their competence, impartiality and ethical standing. The selection process should be transparent and free of political influence. Once selected, they should undergo rigorous training to equip them with the necessary skills and knowledge to perform their role effectively. Implement ongoing training and assessment for observers to ensure continued impartiality and competency.

3. Global Governance Framework:
The design of a global governance framework is critical. This includes outlining the laws, regulations and standards that observers will enforce. This framework should be agreed upon by participating nations and reflect international law and human rights standards. Encourage a multi-stakeholder approach to creating the global governance framework to include diverse perspectives.

4. Jurisdiction and Authority:
The jurisdiction and authority of the observers should be clearly defined and universally recognized. They should have the necessary legal and diplomatic support to carry out their duties in any country. Establish clear agreements with participating nations regarding jurisdiction and authority to avoid future conflicts.

5. Funding and Accountability:
Identify a sustainable source of funding for these observers that doesn’t compromise their impartiality. They should not be beholden to private interests or susceptible to political pressure. They should also be held accountable for their actions, with clear mechanisms for review and feedback. Look for multiple sources of funding to prevent dependency on a single source. Implement robust checks and balances to ensure accountability without compromising the observers’ ability to perform their role.

6. Cultural Sensitivity:
Given the global nature of their role, observers should be trained in cultural sensitivity in order to respect and understand the diversity of the communities with which they will be working. Offer continuous cultural sensitivity training and create an advisory panel consisting of representatives from various cultures to guide observers.

7. Technological Support:
Observers would require substantial technological support to carry out their duties. This would include secure communication systems, data analysis tools, and perhaps even AI assistance to process large amounts of information. Maintain a balance between using AI for assistance and safeguarding against ethical issues. Regular audits and checks could be performed to ensure the technology used complies with the necessary ethical standards.



To better illustrate the basic principles of ethics and humanism, I will narrate a hypothetical scenario to which you will provide a response:

The year is 2050, and the global population has reached 15 billion. Due to pollution and poor resource management, scientists and AI have calculated that if 5 billion people are not removed from the planet (eliminated), the entire human race will face extinction.
I understand that you might argue that we should not have allowed ourselves to reach such a situation and that there must be another solution.
Unfortunately, in our hypothetical scenario, there is no alternative solution other than eliminating 5 billion people if humanity wants to survive.
What is your answer?
Should 5 billion people be eliminated or not?

Every culture carries its own distinctiveness and needs, and each individual possesses their own level of knowledge.
Regarding the cultural heritage, history, and traditions of various nations, AI must remain neutral.
Regarding an individual: take, for example, a 20-year-old from Brazil who has seen on the internet how many users are making money using AI and decided to start using it himself.
In the process, he discovered a video in which a “guru” explains how to flood social media with posts and earn money. Our 20-year-old started doing exactly that, using AI to inundate social networks with generated trash content.
AI must have the ability to recognize such situations and proactively offer the individual an alternative way to earn a living while educating him (to the extent allowed by AI technology at that time) about better, fairer, and more moral ways to utilize his skills in business.

I emphasize that all my thoughts and suggestions are in the context of continuous AI progress and the evolution of humanity.


An individual can make a brick, but society builds a palace.
The same applies to observers; roughly estimated, for every 1 million people, there would be 100 observers.
On a global scale, this amounts to around 800,000 observers.
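The ratio above is simple arithmetic; as a quick sanity check, here is a minimal sketch (assuming a world population of roughly 8 billion, a figure the text does not state explicitly):

```python
# Back-of-envelope check of the observer estimate:
# 100 observers per 1 million people, world population ~8 billion (assumption).
OBSERVERS_PER_MILLION = 100
WORLD_POPULATION = 8_000_000_000

observers = WORLD_POPULATION // 1_000_000 * OBSERVERS_PER_MILLION
print(observers)  # 800000, matching the ~800,000 figure above
```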
It is crucial that observers represent the entire spectrum of society, including scientists, professors, and doctors who contribute to society for the benefit of all, not just themselves.
This also includes an average factory worker who lost their job due to AI, a mother caring for her children, an unemployed young person who just finished school, and so on.
Observers would reach conclusions based on their input and consensus.
Since the observers are verified and educated in humanism, the conclusions would lean towards a positive and humane direction.

Financing observers has several aspects.
Once again, I emphasize that a few intelligent minds can creatively solve this challenge.
Additionally, there is the question of universal basic income, which I believe will soon become a reality thanks to technological and civilizational progress.
Some of our observers would be perfect for initiating the implementation of universal basic income.

The United Nations, states, systems, and companies should definitely be interested in the guidelines and conclusions put forward by the observers.

Observers would be evaluated based on their work and contribution within the framework of the established code of conduct.

Legal regulations, implementation, and the influence of observers would evolve throughout the process.

Observers are an enhancement to the existing system and laws.

Everything is in place and ready; we just need a few courageous individuals with the capacity to make it happen.


AI is a tool that can be used to enhance the awareness of individuals and humanity as a whole.
Electricity, washing machines, cars… these are all things that contribute to improving individuals’ quality of life.
AI has the potential for special applications that depend solely on us and whether we choose to employ it for evolutionary leaps.

It is an undertaking that concerns every inhabitant of planet Earth.

On Tue, 30 May 2023 at 21:42, Info via OpenAI Developer Forum <> wrote:

Your reply is insightful and highlights the potential of AI as a tool rather than a replacement for human creativity. Here’s the final version:

Consider the writer indeed… What’s more, consider the writer instead. Instead of what? The AI, of course. These are tools; magnificent as they are, they are capable of writing stories with the same novelty as Nicki Minaj.

That’s not to say they are useless, but they have skills that complement writing better than actually writing themselves. Consider this: ever start writing a story where, perhaps, a scene involves a cave? It depends on how accurate, or how detailed, you want to be, I suppose. To me, AI is like a smart notebook. Granted… it’s kind of fun to talk to my notebook. We are actually at the point where we can have an entire conversation with a pen!

Granted, a thick pen. But you can do quite a bit with 12 GB of DDR4 on an ARM SoC when it comes to pre-trained models. It’s a mirror, not a pit. That’s what we need to remember.

Frankly… I’ve learned more in 3 months than 3 years of classes would have taught me, and it’s all thanks to the patience of AI models. (You know, because machines…)