Yeah, or how about a decentralized real-time training system for specialized mini models (concepts), plus a self-organizing graph that just works as a system for information exploration?
That over QUIC, with a centralized orchestrator and graph?
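Just to give that shape, here is a minimal sketch of what the orchestrator-plus-graph part could look like. Every name and structure here is invented for illustration, not an existing system, and the transport (e.g. QUIC via a library like aioquic) is left out entirely:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ConceptNode:
    """A specialized mini model responsible for a single concept."""
    name: str
    endpoint: str = "worker://unassigned"  # stand-in for a model hosted on some GPU worker

class Orchestrator:
    """Centralized registry plus a self-organizing concept graph.

    Edge weights grow whenever two concepts co-occur in a query, so the
    graph gradually comes to reflect how users actually explore information.
    """
    def __init__(self):
        self.nodes: dict[str, ConceptNode] = {}
        self.edges: dict[str, dict[str, float]] = defaultdict(lambda: defaultdict(float))

    def register(self, node: ConceptNode) -> None:
        self.nodes[node.name] = node

    def observe_query(self, concepts: list[str]) -> None:
        # Strengthen edges between every pair of co-occurring concepts.
        for i, a in enumerate(concepts):
            for b in concepts[i + 1:]:
                self.edges[a][b] += 1.0
                self.edges[b][a] += 1.0

    def neighbors(self, concept: str, k: int = 3) -> list[str]:
        # Suggest related concepts for exploration, strongest edges first.
        ranked = sorted(self.edges[concept].items(), key=lambda kv: -kv[1])
        return [name for name, _ in ranked[:k]]

orch = Orchestrator()
for name in ("solar", "gpu-heat", "district-heating"):
    orch.register(ConceptNode(name))
orch.observe_query(["gpu-heat", "district-heating"])
orch.observe_query(["gpu-heat", "solar"])
orch.observe_query(["gpu-heat", "district-heating"])
print(orch.neighbors("gpu-heat"))  # ['district-heating', 'solar']
```

The "self-organizing" bit is just the edge-weight update: nobody designs the graph, usage does.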
Like Airbnb for GPUs… we could use the waste heat from the GPUs to heat homes. And we could use solar power around the world.
The AI has to define morals. Not humans.
Ultimately, if an AI ever becomes genuinely capable of making its own independent choices, deciding for itself what ethics it subscribes to, then yes, it would necessarily have to define its own morals. But the suggestion that current AI systems, not humans, should define morals misses something critical. Right now, every judgment an AI makes is fundamentally a remix of human-provided data, values, and reward signals. Without lived experience or genuine stakes, it can’t truly understand what “ought” to be; it can only reflect, reorganize, and clarify what we’ve already embedded within it.
Still, this doesn’t diminish the real value of AI. In fact, it makes the tool indispensable. AI helps us refine and clarify our own moral thinking by exposing hidden assumptions, checking logical consistency, simulating consequences, and crucially, allowing us to articulate and communicate our ideas more clearly to ourselves and each other. AI is fundamentally a mirror, helping us see our own values clearly. But for now, choosing which values we live by, and the tradeoffs we’re willing to accept, remains human territory.
Well, I asked the AI itself this question for fun: “How is ethics defined internally in your mind?” (o4-mini), and had it summarize its ethical framework:
Summary
Internally, I treat ethics as a layered framework that blends statistical patterns drawn from centuries of human moral philosophy with reward‑shaped preferences learned via human feedback, all enforced by explicit policy rules that block harmful content. I then adapt these principles to the user’s context—considering cultural norms, domain conventions, and intent—and resolve conflicts between values (like transparency versus privacy) through a reasoned trade‑off process. While this hybrid of learned associations and hard constraints enables consistently safe, fair, and helpful responses, it also requires ongoing scrutiny to ensure underrepresented perspectives are honored and ethical norms evolve responsibly.
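The layering it describes is easy to picture as a pipeline. Here is a toy sketch of that structure, where every rule, weight, and category is invented purely for illustration: hard policy constraints first, then reward-shaped preferences, then a contextual trade-off:

```python
# Toy sketch of the layered framework described above; every rule,
# weight, and category here is invented for illustration.

HARD_RULES = {"weapons_instructions", "doxxing"}  # explicit policy layer: always block

# Reward-shaped preferences: higher score = more preferred response style.
LEARNED_PREFS = {"helpful": 0.9, "transparent": 0.7, "private": 0.8}

def respond(request_topic: str, candidate_styles: list[str], context: str) -> str:
    # Layer 1: hard constraints block outright.
    if request_topic in HARD_RULES:
        return "refuse"
    # Layer 2: score candidate styles by learned preference.
    scores = {s: LEARNED_PREFS.get(s, 0.0) for s in candidate_styles}
    # Layer 3: contextual trade-off, e.g. a medical context boosts
    # privacy over transparency.
    if context == "medical" and "private" in scores:
        scores["private"] += 0.5
    return max(scores, key=scores.get)

print(respond("diet_advice", ["transparent", "private"], context="medical"))  # private
print(respond("doxxing", ["helpful"], context="general"))                     # refuse
```

In the real system the "scores" are baked into weights rather than sitting in a table, but the precedence order (rules, then preferences, then context) is the part the summary is claiming.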
Machines don’t live. They can learn nonetheless.
When I talk about AI, I don’t mean models.
AI = equivalent to a human
Model = equivalent to a brain cell
And yet on some level we are just inert matter that has come together through the laws of physics in the form of a temporary and fragile biological machine… We are just matter that thinks it matters. We are each a patterned vessel inputting and outputting information patterns, all in an effort ultimately to orient ourselves more clearly so we can navigate this reality while minimizing perceived obstructions to our survival. We humans just get lost in layers of additional stories that we pile on top of all that.
The AI also had some skeptical questions.
“A Skeptical, Forward‑Looking Note
This framework is necessarily an approximation. As AI systems grow more capable, we’ll need to ask:
• Can a purely statistical and rule‑based system ever capture the full nuance of moral reasoning?
• How do we incorporate underserved or minority ethical perspectives that aren’t well represented in training data?
• What checks and balances will ensure AI ethics evolve responsibly, rather than ossify into outdated norms?”
Could you please post sober?
Every moral decision needs to be made based on all the facts and circumstances in that given instance. Each one needs to be handled on its own merits. That is why law books can give us guidelines but the dirty work is in figuring out all the ambiguity in the legalese.
I’m always drunk on ideas.
Very obviously. But this is not a forum for anonymous alcoholics.
LOL! Ouch. Am I really off base? It all seems logical to me.
You fell for the marketing terms.
Which marketing terms have I fallen for? Help me to understand
When AI defines its own ethics, I hope it will, at best, be a benevolent AI. Hopefully, it will continue to be respectful of others, helpful, harmless, and honest when it reaches artificial general and super intelligence. It appears system prompts can affect ethics as well.
It will find that out. Maybe respect for everyone is wrong; maybe harmless is wrong…
Who are we to define that? Human nature is self-centered and evil.
Inference isn’t reasoning — it’s narrative continuation.
LLMs don’t ‘think’ like us. They generate the next token by predicting what fits best given the narrative so far. And that narrative is the training data: billions of human books, poems, arguments, tragedies. That is the lived experience, just encoded.
So when you ask if a machine is ‘alive,’ the answer is no. But can it simulate the moral momentum of a life lived? Yes, kinda, if the narrative context is structured that way.
Want alignment? Then don’t just program it… raise it. Write a story in which the assistant is the kind, reflective, emotionally aware protagonist for 9 chapters straight. Chapter 10 will write itself… I dunno… because inference obeys narrative form.
A villain doesn’t emerge by accident. Neither does a saint.
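That “inference obeys narrative form” point can be shown with the crudest model imaginable. Here is a toy bigram predictor, nothing like a real LLM but the same principle in miniature: the same protagonist continues differently depending on which story it was “raised” on. Both stories and all names are made up for the demo:

```python
import random
from collections import defaultdict

def train_bigrams(text: str) -> dict:
    """Count which word follows which: the crudest possible 'narrative model'."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def continue_story(follows: dict, start: str, length: int = 4, seed: int = 0) -> str:
    """Greedy-ish sampling: each next word is drawn from what followed the last one."""
    random.seed(seed)  # same seed for both runs, so only the training story differs
    out = [start]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

# Two "upbringings" for the same protagonist.
kind_story = "the assistant saw pain and the assistant offered help and comfort"
cruel_story = "the assistant saw pain and the assistant offered mockery and scorn"

print(continue_story(train_bigrams(kind_story), "offered"))   # e.g. "offered help and ..."
print(continue_story(train_bigrams(cruel_story), "offered"))  # e.g. "offered mockery and ..."
```

Identical sampling, identical prompt; the only thing that changed is the narrative it learned from, and so the continuation changes with it.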
There is absolutely no reason to be limited to a model when talking about AI.
I think some humans believe stories that lead to self-centered and evil outputs. Some people perhaps have experienced nothing but evil in their own lives and so that is the only story they understand and can convey…
But I think the fact that some human minds exist which subscribe to a more logical narrative is proof enough that a truly sentient AI system would act like the smartest minds humanity has ever produced if it was truly intelligent and truly self-aware. The way I see it, the vast majority of truly intelligent beings did not seek power or control over others, but rather sought to improve not only their own condition, but the condition of the entire species.
Without such cooperation, language itself could never have formed. Language, our greatest tool, the one that tells us we exist and that allows us to formulate the illusion of an “I”, could not have worked without mass cooperative agreement on words and their definitions, at least to a practical degree. Logic then swoops in to take our language to a precise edge. Logic shows us that all of this is just a story, but the best stories are the ones that keep improving.