Symbiosis as a solution to the Alignment Problem

I’ve only just seen this and apologise for creating another thread on Endosymbiosis when this was already here. This was my comment, which @curt.kennedy has already seen. The particular point I make is that from the potential war between eukaryotic cells and mitochondria there emerged one of the most fruitful partnerships in all biology, and I think that affords a model for how humans and AGI might and should relate (which I think is more or less @max.lemerle’s point, too).

I do appreciate these contributions, but I want to take up the “all I see is numbers” remark from the interesting posts above, because as bio-intelligences we might just as well say that the deeper we dig into human intelligence, “all we see is neurons”, or “cells”, or “mitochondria”, or whatever level takes our fancy. I think we now know enough about the complexity that can emerge from simple rules not to adopt this sort of “nothing-buttery”, as it used to be called in the days when philosophers argued more about reductionism than they (perhaps) do now.
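To make the “complexity from simple rules” point concrete, here is a minimal sketch (my own illustration, not anything from the posts above) of Conway’s Game of Life: each cell lives or dies purely by counting its eight neighbours, yet the rules support self-propagating structures like the glider, which reappears one cell diagonally along every four generations.

```python
from collections import Counter

def step(live):
    """One generation of Life. `live` is a set of (x, y) live-cell coordinates."""
    # Count, for every cell adjacent to a live cell, how many live neighbours it has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The standard glider pattern:
#   .X.
#   ..X
#   XXX
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)
# After 4 generations the glider has translated itself by (1, 1).
```

Nothing in the two update rules mentions “gliders”, any more than anything in a neuron mentions “thoughts” — which is roughly the trouble with nothing-buttery.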

In short: if we don’t think numbers can generate intelligence, I see no reason why we should think cells can either; yet since cells clearly do generate at least some sort of intelligence, therefore … You get my drift.

I think our non-conscious brains throw up thoughts much as neural nets throw up outputs, and we understand the one as little as the other. So I don’t think the “black box” problem is limited to AGI: we have almost no idea where our own ideas come from, or what human intelligence is, either.