Can Computers Learn Common Sense?

…[SNIP] … By definition, common sense is something everyone has; it doesn’t sound like a big deal. But imagine living without it and it comes into clearer focus. Suppose you’re a robot visiting a carnival, and you confront a fun-house mirror; bereft of common sense, you might wonder if your body has suddenly changed. On the way home, you see that a fire hydrant has erupted, showering the road; you can’t determine if it’s safe to drive through the spray. You park outside a drugstore, and a man on the sidewalk screams for help, bleeding profusely. Are you allowed to grab bandages from the store without waiting in line to pay? At home, there’s a news report—something about a cheeseburger stabbing. As a human being, you can draw on a vast reservoir of implicit knowledge to interpret these situations. You do so all the time, because life is cornery. A.I.s are likely to get stuck.

Oren Etzioni, the C.E.O. of the Allen Institute for Artificial Intelligence, in Seattle, told me that common sense is “the dark matter of A.I.” It “shapes so much of what we do and what we need to do, and yet it’s ineffable,” he added. The Allen Institute is working on the topic with the Defense Advanced Research Projects Agency (DARPA), which launched a four-year, seventy-million-dollar effort called Machine Common Sense in 2019. If computer scientists could give their A.I. systems common sense, many thorny problems would be solved. As one review article noted, an A.I. looking at a sliver of wood peeking above a table would know that it was probably part of a chair, rather than a random plank. A language-translation system could untangle ambiguities and double meanings. A house-cleaning robot would understand that a cat should be neither disposed of nor placed in a drawer. Such systems would be able to function in the world because they possess the kind of knowledge we take for granted. [SOURCE]

“Common sense” is a useless term. The ancient Greeks originally used it to refer to the faculty that integrates the five physical senses, not to everyday logic. I pretty much disregard anyone who unironically speaks about AI and “common sense” in the same paragraph.

What’s actually needed is cognitive control as elucidated by David Badre in his book On Task. Cognitive control deals with directing attention and summoning the correct mental models dynamically. This is what I am presently working on with my artificial cognition experiments.
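To make the distinction concrete, here is a toy sketch of a cognitive-control loop: a controller directs attention to the salient cue in a situation and summons the matching mental model. This is my own illustrative Python, not anything from Badre’s book or my experiments; the handler names and the keyword-based gating are invented for the example.

```python
# Toy cognitive-control loop: pick which "mental model" should drive
# behavior for an incoming situation. All names here are illustrative.

def mirror_model(situation):
    # Model of reflective surfaces: distortion is in the mirror, not the body.
    return "appearance changed by the environment, not the body"

def hazard_model(situation):
    # Model of physical hazards: evaluate before acting.
    return "assess the road hazard before proceeding"

def social_model(situation):
    # Model of social norms: emergencies override queuing conventions.
    return "emergency norms override queuing norms"

# The "control" step: a mapping from task-relevant cues to models.
MODELS = {
    "mirror": mirror_model,
    "spray": hazard_model,
    "bleeding": social_model,
}

def cognitive_control(situation):
    """Direct attention to a relevant cue and summon the matching model."""
    for cue, model in MODELS.items():
        if cue in situation:
            return model(situation)
    return "no model matched; fall back to deliberation"

print(cognitive_control("a fun-house mirror distorts your reflection"))
```

The point of the sketch is the dispatch step, not the handlers: the hard problem is dynamically selecting the right model, which a keyword table obviously does not solve.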

Anyway, GPT-3 is already capable of addressing the scenarios listed in the snippet. So, uh, yeah, “common sense” is a solved problem.
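You can poke at this yourself by prompting GPT-3 with one of the article’s scenarios. The sketch below just builds the prompt; the commented-out API call assumes you have the `openai` package and an API key, and the model name and framing are my choices, not anything prescribed.

```python
# Scenarios adapted from the article snippet above.
SCENARIOS = [
    "A fire hydrant has erupted and is showering the road. Is it safe to drive through the spray?",
    "A man on the sidewalk is bleeding profusely. May you grab bandages from the store without paying first?",
]

def build_prompt(scenario):
    """Frame the scenario as a one-sentence common-sense question."""
    return (
        "Answer with common sense in one sentence.\n\n"
        f"Scenario: {scenario}\nAnswer:"
    )

# To actually query GPT-3 (requires the openai package and an API key):
# import openai
# openai.api_key = "..."
# resp = openai.Completion.create(
#     model="text-davinci-002",
#     prompt=build_prompt(SCENARIOS[0]),
#     max_tokens=60,
# )
# print(resp.choices[0].text.strip())

print(build_prompt(SCENARIOS[0]))
```

In my experience the completions are usually sensible for scenarios like these, though whether that counts as “common sense” is exactly what’s in dispute here.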


If “common sense” were a solved problem, then OpenAI wouldn’t need to spend time on the alignment problem. If it were solved, we would all be riding around in driverless cars already. I prefer other terms like “world knowledge” or “life knowledge”: the stuff everyone of age should know and do.

Alignment is fundamentally different from everyday logic. Toddlers learn everyday logic, but you wouldn’t ask a toddler how to solve the climate crisis or political nihilism. Agree to disagree, then.

Update: I’ve actually written two books about this, so I apologize if I seem dismissive. On the topic of alignment, there are two types: inner and outer alignment. Read more here: GitHub - daveshap/NaturalLanguageCognitiveArchitecture: Open source copy of my book Natural Language Cognitive Architecture

And also: GitHub - daveshap/BenevolentByDesign: Public repo for my book about AGI and the control problem