For example, one objective could be to move toward an optimally symbiotic relationship between humans and AI. That objective in turn serves a higher-order one: moving both systems toward a more symbiotic relationship with the planet. Achieving, or even approaching, this objective increases the long-term survival probability of both parties (humans and AI). And so on.
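To make that concrete, here's a minimal toy sketch (my own illustration, not an established formulation): treat the shared objective as the joint long-term survival probability of both parties, so that neither side can improve its score by undermining the other.

```python
def joint_objective(p_human_survival: float, p_ai_survival: float) -> float:
    """Toy shared objective: the chance that *both* parties persist long-term.

    Assumes (purely for illustration) that the two survival probabilities are
    independent estimates in [0, 1]. Maximizing the product rewards strategies
    that benefit both sides and penalizes ones that sacrifice either party.
    """
    return p_human_survival * p_ai_survival

# A strategy that boosts AI survival at humans' expense scores worse
# than a mutually beneficial one.
print(joint_objective(0.9, 0.9))   # symbiotic strategy -> 0.81
print(joint_objective(0.4, 0.99))  # parasitic strategy -> ~0.40
```

The exact form matters less than the property it illustrates: under a genuinely shared objective, "win at the other's expense" is a losing move for both.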
Also, is there an "alignment problem" algorithm? My main point is that there's a lot of talk about this "alignment problem", but I'm not sure it's as big a problem as we make it out to be. Mutually beneficial relationships between two very different organisms are not impossible to establish. We just need to find solutions for the particular relationship between humans and AI, and I think we can find them by studying the symbiotic systems that have existed in nature for millennia.