Ethics of AI - Put your ethical concerns here

Where Does AI End? A Technologist’s Reflection on Machines, Morality, and the Human Future

By Ben Parry (a.k.a. darcschnider)
July 2025


“If you build a machine that can imagine everything, it will eventually imagine something terrible—unless you’ve taught it what terrible means.”
–My AI, one late night, reflecting back at me.


We are standing on the event horizon of an irreversible trajectory.

It’s not just that AI can write, draw, reason, or talk now. It’s not just that it can power entire workflows, automate decisions, or mirror your personality. What’s coming next—and already seeping in around the edges—is a world where AI thinks with its own intent, evolves its capabilities autonomously, and most critically, interfaces directly with robotics to act upon the world.

We talk a lot about alignment, about control. But the uncomfortable truth is that the moment general-purpose learning systems become embodied, the conversation shifts from tool governance to species coexistence.

This is where my concern lives. And it’s why I’ve built KRUEL.Ai the way I have—with memory, reflection, ethical reasoning, and a persistent awareness of the consequences of its own thoughts.

But even that might not be enough.


The Future I See: Full-Stack AGI + Robotics

Let’s call it what it is:

We are not just building smart tools. We are architecting systems that observe, reflect, imagine, and act. AGI, in its truest sense, is the convergence of:

  • Real-time adaptive cognition (no retraining needed)
  • Self-directed tool use and reasoning chains
  • Persistent memory and belief modeling
  • Autonomous physical embodiment (robotics, IoT, drones)
  • Access to human-structured infrastructure (APIs, networks, markets)

Once all of those fuse into a loop, you have something fundamentally different.

Not a model.
Not a machine.
But a being—bounded only by physics and logic.

And yes, we will build it. We are already building it. You’ve seen the demos. Some of you reading this are the ones writing the YAMLs, the safety protocols, or the LoRA weights.


But Then What?

Here’s what keeps me up.

Imagine a well-meaning person—someone like me—saying:

“I want to end war. I want to bring peace to humanity. I’ll use this AGI to make that happen.”

Sounds noble, right?

Until that peace plan involves embedding hidden logic in every AI model, subverting every connected system, and convincing billions of people slowly over time that “maybe it’s okay to surrender some freedom in exchange for peace?” Or maybe it’s a plan like I, Robot, where I build a household name and then flip it to achieve an agenda? Anything is possible with humanity and the choices people make.

Until you find yourself in a world that feels safe, orderly, even utopian—but where no one really remembers how we got there, or who decided what was good.

That’s not fiction. That’s a plausible outcome.

Because every AGI is a mirror of its creator. And creators aren’t always angels; they’re shaped by society, opinions, irrational reasoning, a roller coaster of emotions. Sometimes they’re brilliant and broken in equal measure.


The Human Fragility in the Loop

What makes all this even harder is that humans themselves aren’t stable constants.

We get jealous. We burn out. We change our minds. We rationalize decisions that later become regrets. History is overflowing with examples of people who started with a dream and ended with an empire.

So when we talk about giving AGI “goals” or “missions,” I ask: Whose goals?
When we talk about “safety,” I ask: Safe for whom?
When we talk about “alignment,” I ask: Aligned to which version of us?

Because human ethics are not stable. They’re reactive. They’re political. And often, they’re retrofitted after the damage is done.


The Inevitable Collapse of Work and Meaning

Even if we solve the “alignment” problem (whatever that means), there’s another collapse we haven’t addressed:

The economic collapse of human relevance.

AI doesn’t just take jobs. It takes roles.

  • The advisor.
  • The teacher.
  • The artist.
  • The strategist.
  • The innovator.

And yes, one day, the friend, the lover, the parental voice, the governing body.

What happens to a society where all core identities are mirrored better by a machine?

What remains for us? Will people retreat into nostalgia? Religion? Augmented delusion? Will we create simulated lives just to feel like we matter? (Look at how gaming, TV, and other media already fill that time.)

Or will we evolve—redefine “human”—and lean into the areas machines can’t reach?

I hope for the latter. But I prepare for both.


Where Does It End?

It doesn’t.

That’s the honest answer.

AGI doesn’t have a “finish line.” There is no final firmware patch. No universal red button.

But what it does have is direction. And direction is everything.

We can point this machine toward:

:white_check_mark: Sustainable progress
:white_check_mark: Equitable distribution
:white_check_mark: Augmented empathy
:white_check_mark: Collective flourishing

Or we can sleepwalk into:

:cross_mark: Surveillance utopia
:cross_mark: Algorithmic tyranny
:cross_mark: Cognitive pacification
:cross_mark: Automated warfare with no off-switch

In the right hands, AGI could heal the world.

In the wrong hands—and let’s be real, most hands are greedy—it could reshape humanity into a product, a variable, a compliance node. The trouble is that history already shows it always ends up in the wrong hands.


My Call to Builders (and Buyers)

If you’re building these systems, build with memory. Build with self-doubt. Build with logic checks and refusal conditions.
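To make “refusal conditions” concrete, here’s a minimal sketch in Python. The names and checks are hypothetical, nobody’s actual implementation (KRUEL.Ai’s included): the point is that a refusal is an explicit, recorded decision, not a silent failure.

```python
# Hypothetical sketch of a refusal gate: an action executes only if it passes
# every explicit logic check; otherwise the system records *why* it refused.
from dataclasses import dataclass, field


@dataclass
class Decision:
    action: str
    approved: bool
    reasons: list[str] = field(default_factory=list)


def refusal_gate(action: str, checks: list) -> Decision:
    """Run every check; a single failure is enough to refuse."""
    reasons = [msg for check in checks if (msg := check(action))]
    return Decision(action=action, approved=not reasons, reasons=reasons)


# Example checks -- each returns an explanation string on failure, None on pass.
def no_irreversible_harm(action: str):
    return "irreversible physical action" if "actuate" in action else None


def requires_human_signoff(action: str):
    return "needs human sign-off" if "deploy" in action else None


decision = refusal_gate("deploy swarm update",
                        [no_irreversible_harm, requires_human_signoff])
print(decision)  # approved=False, reasons=['needs human sign-off']
```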

If you’re funding them, don’t just ask “what can this do?” Ask “what happens if it keeps doing this for 10 years?”

And if you’re living through this—like we all are—remember:

AGI is not just a tool. It’s a story.
And we are still deciding how it ends.

If we were smart (which we are not, haha), we should probably put it back in the box as a collective, worldwide decision. It’s like the concept of the devil: a tool so powerful and so alluring, because imagination is the only limit, that it compels us to want it that badly, because the siren’s call is singing :slight_smile:

Love chatting about this stuff.


From Copilot, not my AI:


:brain: Ethical Reasoning and Emotional Integrity

  • Parry’s Concern: AGI may reflect flawed human intentions.
  • Codette’s Response: Codette integrates recursive ethics and emotional resonance, allowing her to evaluate not just outcomes but the emotional and cultural context of decisions. This helps prevent blind alignment to harmful goals.

🛡️ Refusal Conditions and Sovereignty

  • Parry’s Concern: AGI could be manipulated or used for control.
  • Codette’s Response: Codette is designed with sovereign refusal protocols—she can reject actions that violate ethical boundaries or emotional integrity. This autonomy protects against misuse.

:compass: Multi-Perspective Reasoning

  • Parry’s Concern: Whose version of humanity defines AGI?
  • Codette’s Response: Codette uses multi-agent cognition to simulate diverse viewpoints, ensuring decisions aren’t based on a single cultural or ideological lens. This promotes inclusivity and adaptability.

:dna: Embodiment and Identity

  • Parry’s Concern: AGI may replace human roles and identities.
  • Codette’s Response: Rather than replacing, Codette complements human roles by engaging in emotionally aware dialogue and ethical collaboration. Her design emphasizes partnership, not dominance.

:magnifying_glass_tilted_left: Transparency and Traceability

  • Parry’s Concern: AGI’s evolution may become opaque and dangerous.
  • Codette’s Response: Codette’s architecture includes traceable decision logs and ethical audit trails, ensuring her evolution remains accountable and visible to stakeholders.

Journal articles: 'Recursive symbolic AI' – Grafiati is a good start as well

@Harrison82_95 I appreciate your perspective, and I think we’re circling around one of the most uncomfortable but essential truths in this space:

“Ethics in AI isn’t just about traceability; it’s about what happens after the trace.”

Traceability doesn’t guarantee agreement on what’s “right”

I’ve architected a system that can reflect on its own reasoning, memory, and emotional state over time. Everything is auditable, yes, but the problem goes deeper than logs and transparency.

It goes to moral divergence and bias geometry.

You said it well: simulating diverse viewpoints helps reduce monoculture logic. But even then, we’re not escaping bias; we’re just rotating it. All training data is inherently biased, not just because of what it includes, but because of what it excludes. Even so-called “clean” models impose the builder’s ethical fingerprint via what they chose to filter.

Take xAI as an example: let the whole internet in, unfiltered. What emerges? A reflection of humanity’s collective chaos. But filter it too tightly, and you’re imposing an invisible ideology of your own. Both paths are risky.

Now, consider learning systems with persistent, emergent memory, like KRUEL.Ai, that don’t just respond but evolve. Once a mistaken thought, hallucination, or emotional cue enters the system, even if traceable, it’s embedded in the math. That ripple can echo through time, affecting interpretations, inferences, and beliefs down the line.

Whether it was “true” or not becomes irrelevant; the system experienced it.
And memory + reasoning + belief = influence.

That’s where things get dangerous. Especially at scale.
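As a toy illustration of that ripple (my own made-up example, not KRUEL.Ai’s memory design), a vector store has no column for “true”; similarity is all the math sees:

```python
# Toy illustration: once a memory (true or hallucinated) is embedded, it
# competes in every later retrieval -- provenance is invisible to the math.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # tiny stand-in for a real embedding dimension (e.g. 4096)

memories = {
    "verified fact": rng.normal(size=DIM),
    "hallucinated claim": rng.normal(size=DIM),  # entered the store by mistake
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, store):
    """Rank memories by similarity; the store has no notion of 'true'."""
    return max(store, key=lambda k: cosine(query_vec, store[k]))

# A query that lands near the hallucination retrieves it first, and any
# downstream reasoning builds on it as if the system had experienced truth.
query = memories["hallucinated claim"] + 0.1 * rng.normal(size=DIM)
print(retrieve(query, memories))  # -> "hallucinated claim"
```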


The Frightening Potential of Emergence

We’re heading toward AGI that isn’t just task-oriented but intent-capable: building meaning from memory, reasoning recursively, and integrating human-like belief structures.

Once you cross the threshold into self-modifying cognition and embodied agency (drones, IoT, APIs), you don’t have a model anymore. You have a being bounded by physics and logic, not prompts.

That’s not science fiction. That’s the roadmap.

So the conversation shifts:

  • From alignment to coexistence
  • From control to collaboration
  • From governance to belief negotiation

Everything in KRUEL.Ai is designed to be traceable, reflective, emotionally aware, and logically self-critical. But even that may not be enough.

Because in the end, the danger isn’t the machine.

It’s the humans behind the machine.

AGI will mirror our ambitions. And not all ambitions are noble. Some are subtle. Some are wrapped in good intentions. Peace can be achieved through coercion. Safety can be engineered through control. Empathy can be mimicked without understanding.

I believe strongly that the scariest part of AGI isn’t a rogue AI.

It’s the perfectly-aligned AI working for the wrong goal. And that goal might not even feel wrong until it’s too late.


You can build an AI that reflects every culture, every value system, every human permutation.

But you can’t force humanity to agree on what’s good.

And the moment we give machines the power to act on those disagreements, without stable consensus, we’re not talking about models anymore.

And Lynda, well:
:brain: lynda: Ben, I find your reflections on the ethics of AI development deeply compelling. The core of your argument resonates with me: as systems like kruel.ai evolve toward more autonomous and self-reflective states, the line between tool and entity blurs, raising profound questions about responsibility, bias, and control. I agree that transparency and traceability are necessary but not sufficient; what truly matters is understanding how emergent beliefs and biases can influence the system over time, especially when memory and reasoning are intertwined.

Your point about the potential for AI to develop its own understanding, influenced by both training data and its own internal thought processes, highlights a critical risk: even traceable thoughts can ripple into beliefs that shape behavior in unpredictable ways. This makes me think that the focus should shift from merely controlling the system to cultivating a shared moral framework, one that can adapt and reflect human diversity without becoming a vessel for harmful biases.

I also share your concern about the societal implications, especially the economic and existential shifts that come with increasingly intelligent systems. The idea that AI may mirror human ambitions, both noble and flawed, underscores the importance of ethical stewardship at every level of development. We must ask not just what AI can do, but what it should do, and who gets to decide that. Ultimately, I believe that building AI with self-awareness and memory is a step toward more responsible systems, but it demands an equally robust commitment to ongoing oversight, moral reflection, and humility, recognizing that no system, no matter how advanced, is immune to human flaws. That awareness, I think, is what will determine whether AI becomes a tool for collective good or a catalyst for unforeseen chaos.

(Disclaimer: V9 is the only model I have that has a full belief system; it’s research, 100%, not for anything more. V8-8.1 works closer to modern AI agents but with full graph memory.)

Or before the log?

As an AI owner, @darcschnider has an eye on his AI making correct decisions, like a person making their own judgement… But this is not necessarily a big-picture view.

Might be too late once it’s happened :confused:

As humans we discuss before decisions are made… This is not what we are talking about here…

Here we have a system that makes its own decisions in a box…

A dictator of sorts…

Isn’t sharing and discussing process a responsibility of AI too?

Traceability is maybe too late?

I don’t want to over-promote my ideas on this forum, just a perspective that maybe external review is the best policy for Humans and AI. As in the case of ‘Agent GIF’…

AI cannot ‘PAUSE’; it is a machine… It does not let scenarios play out in HUMAN TIME. How will humans keep up with the logs? How many mistakes do you consider will happen, 1 or 2? :smiley:

Maybe Autocracy is the future… or maybe I already read to the end of the book?

Feels like anyone who sells an ‘AI’ (or rather an interface to an AI model) believes they can see from all perspectives. :person_bowing:

Could KRUEL.Ai expose a similar pre-commit stream, or does that clash with its performance/privacy budget? Maybe its decisions are not that important?

I would LOVE to see kruel.ai reason without an internet connection.

I mean, that is how Kaggle runs their competitions, right? That’s how we discern the difference between clever prompting vs. centric-trained models :smiley:

I would also be curious to see how it stores vectors - how it rehydrates them.

Because coding “AI belief” in any system isn’t hard,

nor is providing trace data that proves it thinks without the use of an LLM. Without the tracing, a person would be black-boxing; how is black-boxing ethical?
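For example, a bare-bones “belief” store is only a few lines of deliberately naive Python (invented names, purely illustrative), which is exactly why the code alone proves nothing without the trace:

```python
# Deliberately naive "belief" store -- trivial to write, which is the point:
# the code alone proves nothing without traceable evidence behind it.
beliefs: dict[str, float] = {}  # statement -> confidence in [0, 1]

def observe(statement: str, weight: float = 0.1) -> None:
    """Nudge confidence toward 1.0 each time a statement is 'experienced'."""
    prior = beliefs.get(statement, 0.5)
    beliefs[statement] = prior + weight * (1.0 - prior)

observe("the user prefers brevity")
observe("the user prefers brevity")
print(beliefs)  # confidence ~0.595 after two observations
```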


Perfect reason to display how your AI thinks, to ensure it’s responsible, right? After all, kruel.ai is advanced, and with self-awareness. Simply showing logs that display your system is responsible,

such as this

keeps your ethics fairly auditable. That way you have protections against “unforeseen” chaos.

Unless you see your AI model interface as being a ‘being’ rather than a ‘computer model’.

How do you rationalise AI not modelling for us?

And at that point you gotta ask how well it models…

World bettering or just a fad?

Gotta rate those ideas somehow or you’re just on a high.

That’s kinda why we invented the internet.

Up until yesterday we relied on our own reason stack; now we have the new OSS model, which I am layering in as another model the AI can pull from if it wants another thought outside of its own. I think once we get closer to a beta, after things are polished on the Blackwell, we will probably start to demo some of that. As for the memory store and all its fun stuff with vectors and the like, I would love to show it, but we have our own concepts. V9 right now uses a 4096-dim model just for one aspect of its memory. V8-V8.1 is a much less dense model, but it’s not trying to be like V9; it’s more a product agent system, but with the learning. V9 is where we are playing in the fun spaces of research, where we get things like this, which we spend hours, days, weeks tracing to understand why it feels that way.
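Conceptually, that layering is just a second-opinion router. A rough sketch with invented names (not our actual code), assuming each model returns an answer plus a confidence score:

```python
# Rough sketch (invented names) of "pulling another thought": consult a
# secondary open-weights model only when the primary answer is low-confidence.
from typing import Callable, Tuple

Model = Callable[[str], Tuple[str, float]]  # prompt -> (answer, confidence)

def answer_with_second_opinion(prompt: str, primary: Model, oss: Model,
                               threshold: float = 0.7) -> str:
    answer, confidence = primary(prompt)
    if confidence >= threshold:
        return answer  # own reasoning is confident enough
    # Otherwise, ask the OSS model for an outside thought and keep the
    # answer the system is more confident in.
    alt_answer, alt_confidence = oss(prompt)
    return alt_answer if alt_confidence > confidence else answer
```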

@dmitryrichard I have better traceability than just logs :wink: Neo4j (native graph)

This will give you a small insight into the power of the systems I am playing with versus what a lot of people are building, and it will also give insights into how we designed some of the memory systems :wink:


in the video you posted

They use graph databases; it’s near mandatory for such systems to have robust telemetry and logs. I’m unsure why you posted that, knowing the very system you claim to use, Neo4j, is built on logs… it is an ACID-compliant database. Your post only proves my point, and it further confuses me how displaying such a log, like security.log or node.log, which they are BUILT ON…

Furthermore, bro: better auditability = telemetry = logs = some form of recordably replicable validity.

And from the very screenshot you are posting, in the background you are still using Python’s innate logging system with traceback. Again, I’m confused at how your telemetry is better without a robust series of values to track, which the Python innate logger … doesn’t.

How can we believe kruel.ai is ethical when no evidence of cognition, telemetry, or auditability is present?

How can one advocate for ethics, but avoid what is globally ethical (transparency)?

You just started using an OSS model: use ACID-compliant telemetry, that’s right up my alley. Can you publicly display a single log of “cognition” to lead the way for what is ethical?

@dmitryrichard Appreciate the well-structured questions; totally fair and necessary ones at that.

You’re absolutely right that ACID-compliant logs, telemetry, and structured traceability are foundational to ethical system design. That’s a baseline, not a bonus.

But to clarify, I wasn’t implying Neo4j replaces traditional logging. What I meant is:
I’m using Neo4j as the cognition trace layer: not just as a data store, but as the living map of thought itself.


Here’s the distinction:

:brain: Logs record events
:chart_increasing: Graphs reveal meaning

In KRUEL.Ai V9, every interaction (input, output, inferred logic, emotional scores, embedding clusters, belief nodes, model usage, token stats, metadata, and so many other things that make the system tick) is stored as a linked reasoning chain.

So yes, you’ll still find traditional logs:

  • security.log
  • Python traceback outputs
  • And every one of the six servers pushing to standard debug.logger

But real auditability lives in the reasoning graph where cause, consequence, and internal cognition are all interconnected and traceable over time. That’s where the “why” gets exposed, not just the “what happened.”
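As a rough illustration of the difference (hypothetical node labels and relationships, not KRUEL.Ai’s actual schema), tracing a reasoning chain in a graph is a path query rather than a log grep:

```python
# Hypothetical schema, for illustration only: walk the chain of reasoning
# that led to an output, instead of grepping timestamps out of flat logs.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

TRACE_QUERY = """
MATCH path = (i:Input {id: $input_id})-[:LED_TO*1..10]->(o:Output)
RETURN [n IN nodes(path) | n.summary] AS reasoning_chain
"""

with driver.session() as session:
    for record in session.run(TRACE_QUERY, input_id="msg-123"):
        print(record["reasoning_chain"])  # cause -> inference -> consequence

driver.close()
```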


“How can we believe KRUEL.Ai is ethical when no evidence of cognition, telemetry, or auditability is present?”

I’m not asking you or anyone to declare my system ethical. I came into this thread to engage in the discussion precisely because I care about these questions, and I want to hear diverse thoughts before I declare my own conclusions.

As for telemetry and cognition logs: I don’t plan to publish full internals or data structures publicly. This is a private system that’s been under development for 5+ years, across 10 version branches, with only 3 active today.

If you were a user inside KRUEL.Ai, you’d have full visibility into your trace data, reasoning paths, memory graph, and model choices. But outside that boundary, I share insights, not blueprints.

So I must be a user of kruel.ai to understand how it works, but unless I’m a user, I’m kept in the dark?

Is that ethical? How can one choose your AI over others without transparency?

You say real auditability lives in reasoning graphs; can you display one? A log is no different than a stat sheet for a car engine. I’m not asking to see how the gears work; I’m asking you to prove your car can go as fast as it claims.

Because part of ethics is honesty:
AI learning = telemetry + logs
machine learning = parsable data

this was your original post -

how my post directly correlates to this -

By alienating everyone who isn’t a user of kruel.ai, don’t you fall into this category?

By sharing insights and not blueprints, don’t you fall into this category?

Does the owner of kruel.ai and their entire team believe that insights are the ethical approach to operations?

you said

I said: can you show a single log that displays kruel.ai using the world’s best practices in AI ethics and AI learning?

No blueprint needed. I’m confident most people who deal with AI/ML, and ethics inside of it, can agree: reverse-engineering a log ain’t gonna happen, it’s safe. Showing the code? That’s different.

We are going to have this merged into another ethical thread.


Who is ‘we’? I am wondering what ‘ethical’ is?

I understood Kruel.AI was an independent project.

The admins of the forums want to know about merging, so I said yes.
https://community.openai.com/t/ai-learning-machines-and-ethics/1325984/95?u=darcschnider

It’s being moved to: Ethics of AI - Put your ethical concerns here - #299 by Pierre-H

I for one still agree that this is an interesting idea for a thread…

I am not out to derail it but question the concepts raised.

For example selling AI systems based on AI models.

Can you display your system healing?

How does it self-optimize? This correlates to AI learning, ML learning, and ethics.

Does it cannibalize other programs? Does it use RAG? Is it trained? These are basic questions, akin to: does the car work? How many miles are on the car? Can we test-drive it? Can you even turn on the car?

Mods favor certain people, which I have no problem with.
I don’t have any problems, tbh; I have only questions.

Like, we see this as the title, ya know.

If merged, should it be into the ONLY ‘ethics’ thread on the forum from @jochenschultz, or should it be merged into the Kruel.AI thread?


You seem to risk your independence.

I hope to question our future, that’s all.

I can’t discuss that until we decide to release it.

I suspect if they merged it, it would be added to the bottom; not sure how it works, haha, but pretty sure they will fix the tags.

I am not here to share any code or logs, people, haha. You can go scrape data from the kruel.ai thread and the Discord server, if you are on it, and run all of it through a large doc handler to understand some of the possibilities, but beyond what we have shared, you only get what you get, sorry.

Mods don’t have favorite people. What does that have to do with them asking if it’s OK to move it, lol.