With the advent of AGI in the not-too-distant future, I’ve been hard at work on my concept of the “Core Objective Functions.” These are a set of heuristic imperatives that, I believe, any AGI should abide by. They are my proposed solution to the Control Problem. Once we create a machine that can think on its own, improve itself, and grow beyond our control, we will want to ensure that it remains a benevolent force for us humans (and for the rest of the planet as well).
You can see my last post about the Core Objective Functions here. The Core Objective Functions were also mentioned in my first book, NLCA. Lastly, I’ve got a public repo here, where I’m documenting my experiments and experiences.
What I need: alpha readers and beta readers.
Alpha readers will look at the first drafts and let me know whether the structure of the book makes sense, where they get lost (and where I need more explanation), and so on.
Beta readers will look at later drafts and comment on aspects such as clarity and typos.
Let me know if you’re interested in participating.