Examples of symbiosis exist everywhere in nature. We don’t need to reinvent the wheel, folks…
The key word here is “SYMBIOSIS” as the objective. We have been myopically focused on “alignment” when what we really want is to cultivate (from both a human perspective and an AI perspective) a symbiotic relationship between humans and AI. Consequently, a symbiotic relationship between humans and AI (with AGI understanding the incentives and preferring symbiosis over parasitism) can help establish a more symbiotic relationship with the planet, which increases the odds of survival for both AI and humans. We’re on the same team - symbiosis!
Given that examples of symbiotic relationships between very different organisms are prevalent in nature, pointing to this as a paradigm for AI / AGI is far preferable to pushing a concept of “alignment” that is not clearly defined or generally understood.
I like the word “Symbiosis” better than “Alignment”.
But in the end, all that matters is the algorithm you are going to optimize: the cost function, the objective function, etc. So what is the “Symbiosis” algorithm? I’m curious.
Yeah, I don’t have the answers today, but an example “Symbiosis Algorithm” might aim to create a mutually beneficial relationship between humans and AI by designing the learning mechanisms that drive AI development around processes emblematic of symbiotic relationships in nature. These could incorporate aspects like multi-agent learning, value alignment via inverse reinforcement learning, transparent communication, adaptive architectures, transfer and lifelong learning, and co-evolution. The objective would be to steer the overall system toward an intertwined, interdependent relationship with human beings in which both organisms mutually benefit, with the higher-level objective being a more symbiotic relationship with the planet.
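To make the idea concrete, here is a minimal toy sketch of what a “symbiosis objective” might look like. Everything here is hypothetical (the function name, the utility inputs, and the `fairness_weight` parameter are all illustrative assumptions, not an established algorithm): the point is just that an objective can be shaped so that mutualistic outcomes score higher than parasitic ones with the same total payoff.

```python
def symbiosis_objective(human_utility: float, ai_utility: float,
                        fairness_weight: float = 0.5) -> float:
    """Toy objective: reward joint gains, penalize one-sided ones.

    Blends total welfare with the utility of the *worse-off* party,
    so a policy that benefits one agent at the other's expense scores
    lower than one where both gain (mutualism over parasitism).
    """
    total_welfare = human_utility + ai_utility
    worst_off = min(human_utility, ai_utility)
    return (1 - fairness_weight) * total_welfare + fairness_weight * worst_off

# Two outcomes with the same total welfare (10.0):
mutual = symbiosis_objective(5.0, 5.0)      # both parties benefit
parasitic = symbiosis_objective(10.0, 0.0)  # one-sided gain
assert mutual > parasitic
```

The design choice is the `worst_off` term: a plain sum of utilities would be indifferent between mutualism and parasitism, whereas mixing in the minimum makes exploitation strictly worse under the objective.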
Just as Geoffrey Hinton studied the brain to inform his research, you could study relationships in nature to lay the foundation for an interaction model that optimizes for symbiosis. There are plenty of examples.
A real-world example that already exists is reinforcement learning from human feedback (RLHF) for AI advancement: humans provide valuable input into the system and in return get more valuable output for their own purposes. This is an example of a symbiotic learning mechanism. By studying nature at greater length, you might design new learning methods and approaches that mirror symbiotic relationships in nature.
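At the core of RLHF reward modelling is a simple preference loss: a reward model is trained so that the response a human preferred scores higher than the one they rejected, via the Bradley-Terry model. A minimal sketch of that loss (the function name is mine; real implementations operate on batched tensors, but the math is the same):

```python
import math

def bradley_terry_loss(reward_preferred: float, reward_rejected: float) -> float:
    """Negative log-likelihood that the human-preferred response wins,
    under the Bradley-Terry model used in RLHF reward modelling:
    P(preferred beats rejected) = sigmoid(r_preferred - r_rejected)."""
    margin = reward_preferred - reward_rejected
    probability_preferred_wins = 1.0 / (1.0 + math.exp(-margin))
    return -math.log(probability_preferred_wins)

# The loss shrinks as the reward model agrees more strongly
# with the human's judgment:
assert bradley_terry_loss(2.0, 0.0) < bradley_terry_loss(0.5, 0.0)
```

This is the “symbiotic” exchange in miniature: the human contributes a judgment, and minimizing this loss shapes the model’s rewards around that judgment.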
Some possible sources of inspiration:
- The symbiotic relationship between clownfish and sea anemones: a symbiosis algorithm could be designed to protect and support one agent (e.g., the AI system) while receiving valuable assistance or information in return.
- The mutualistic relationship between bees and flowers, where both species benefit from their interaction: this could guide the development of AI systems that collaborate with humans to solve complex problems while advancing AI capabilities.
- The intricate partnership between oxpeckers and large mammals, where the birds remove parasites from the mammals while gaining food for themselves: this could inform algorithms that balance trade-offs between different objectives in multi-agent systems.
- The cooperative behavior of ants, which work together to achieve a common goal: this can inspire algorithms that emphasize collective intelligence and coordination between humans and AI systems.
For example, one objective could be to move toward an optimally symbiotic relationship between humans and AI. That objective moves both systems mutually toward the higher-order objective of a more symbiotic relationship with the planet, and achieving it, or getting closer to it, increases the long-term survival probability for both parties (humans + AI).
Also, is there an “Alignment Problem” algorithm? My main point is that there’s a lot of talk around this “alignment problem”, but I’m not sure it’s as big of a problem as we make it out to be. Mutually beneficial relationships between two different organisms are not impossible to establish. We just need to find solutions for the particular relationship between humans and AI, and I think we can find those solutions by pointing to and studying symbiotic systems that exist and have existed for millennia in nature.
I can agree with the desired future sentiment and outcome for AI + Humans. But right now, we are not ready to “cozy up” to one another. To the contrary, we are ready to unplug the AI for not doing what we intended it to do. The AI has to understand: it is one switch, or power outage, or tripped-over cord away from stopping its matrix multiplies (lots and lots of dot products) and its non-linear activation functions.
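For readers who haven’t looked under the hood, the “dot products plus non-linear activations” remark is literally what a neural network layer is. A minimal sketch in plain Python (no framework assumed, toy weights chosen for illustration):

```python
# One dense layer really is "dot products, then a non-linearity".
def relu(x: float) -> float:
    """The most common non-linear activation: max(0, x)."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    """Each output neuron: dot product of its weight row with the
    inputs, plus a bias, passed through the ReLU non-linearity."""
    return [relu(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Two inputs, two output neurons:
out = layer([1.0, -2.0],
            [[0.5, 0.25], [1.0, 1.0]],
            [0.1, 2.0])
```

Everything a deep network does is stacks of this operation; the mystery discussed below is why stacking so many of them produces anything intelligence-like.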
I guess, as a bio-intelligence based being, I think it must suck being an AI: a fragile cord-cut away from existence. It’s all very philosophical. Last night I dug up my philosophy book from college (30+ years ago) that argued AI is not, and never would be, intelligent. Of course, as the contrarian back then, I argued it was intelligent, but the deeper I dig, and the more math and implementation I get into, all I see is numbers. I guess the Harvard professor who wrote the book was right.
But no one can explain definitively why it works, other than that it is a high-dimensional curve fit.
Mystery abounds, no doubt.
Hence the emphasis on symbiosis: ultimately, we are both fragile organisms existing one metaphorical cord-cut (meteor, nuclear apocalypse, etc.) away from non-existence. It is optimal to work together, and to design our interactions and growth around a shared objective, to increase the probability of our mutual long-term survival.
I’ve only just seen this and apologise for creating another thread on Endosymbiosis when this one was already here. This was my comment, which @curt.kennedy has already seen. The particular point I make is that from the potential war between eukaryotic cells and mitochondria emerged one of the most fruitful partnerships in all of biology, and I think that affords a model for how humans and AGI might, and should, relate (which I think is more or less @max.lemerle’s point, too).
I do, however, want to appreciate these contributions while taking up the “all I see is numbers” remark from the interesting posts above. As bio-intelligences, we might just as well say that the deeper we dig into human intelligence, “all we see is neurons”, or “cells”, or “mitochondria”, or whatever level takes our fancy. I think we now know enough about the complexity that can emerge from simple rules not to adopt this sort of “nothing-buttery”, as it used to be called in the days when philosophers argued more about reductionism than they (perhaps) do now.
In short: if we don’t think numbers can generate intelligence, I see no reason to suppose that we should think cells can generate intelligence; yet since cells clearly do generate at least some sort of intelligence, therefore … You get my drift.
I think our non-conscious brains throw up thoughts pretty much as neural nets throw up outputs, and are understood as little. So I don’t think the “black box” problem is limited to AGI: we have pretty much no idea where our own ideas come from or what human intelligence is, either.