Structure-mapping engine enables computers to reason and learn like humans, including solving moral dilemmas

June 21, 2016 by Amanda Morris, Northwestern University

Northwestern University's Ken Forbus is closing the gap between humans and machines.

Using cognitive science theories, Forbus and his collaborators have developed a model that could give computers the ability to reason more like humans and even make moral decisions. Called the structure-mapping engine (SME), the new model is capable of analogical problem solving, including capturing the way humans spontaneously use analogies between situations to solve problems.

"In terms of thinking like humans, analogies are where it's at," said Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science in Northwestern's McCormick School of Engineering. "Humans use relational statements fluidly to describe things, solve problems, indicate causality, and weigh moral dilemmas."

The theory underlying the model is psychologist Dedre Gentner's structure-mapping theory of analogy and similarity, which has been used to explain and predict many psychological phenomena. Structure-mapping argues that analogy and similarity involve comparisons between relational representations, which connect entities and ideas, for example, that a clock is above a door or that pressure differences cause water to flow.
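To make "relational representation" concrete, here is a minimal Python sketch. It is illustrative only and is not taken from the released SME code; the Expr class, predicate names, and entities are assumptions chosen to mirror the article's two examples.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Expr:
    """A relational statement: a predicate applied to arguments."""
    predicate: str
    args: tuple  # each element is an entity name (str) or a nested Expr

# "A clock is above a door" -- a first-order relation between two entities.
clock_above_door = Expr("above", ("clock", "door"))

# "Pressure differences cause water to flow" -- a higher-order statement:
# the cause relation takes other relational statements as its arguments.
pressure_causes_flow = Expr(
    "cause",
    (
        Expr("greater", (Expr("pressure", ("beaker",)), Expr("pressure", ("vial",)))),
        Expr("flow", ("water", "beaker", "vial")),
    ),
)

print(pressure_causes_flow)
```

The key point is that statements can nest: relations can take other relations as arguments, which is what lets analogies align whole systems of relationships rather than isolated facts.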

Analogies can be complex (electricity flows like water) or simple (his new cell phone is very similar to his old phone). Previous models of analogy, including prior versions of SME, have not been able to scale to the size of representations that people tend to use. Forbus's new version of SME can handle the size and complexity of relational representations that are needed for visual reasoning, cracking textbook problems, and solving moral dilemmas.
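As a toy illustration of the kind of alignment involved in an analogy like "electricity flows like water," the sketch below pairs up expressions from two descriptions and builds a one-to-one correspondence between their entities. It is a drastic simplification, not the SME algorithm described in the paper: it has no scoring of competing global mappings and only a crude stand-in for SME's relation/function distinction.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Expr:
    predicate: str
    args: tuple  # entity names (str) or nested Expr values

def is_function(expr: Expr) -> bool:
    # Crude stand-in for SME's relation/function distinction: treat
    # single-argument expressions such as pressure(x) or voltage(x) as
    # functions, which are allowed to differ between the two descriptions.
    return len(expr.args) == 1

def align(base, target, mapping):
    """Try to extend the one-to-one entity mapping by aligning base with
    target; return the extended mapping, or None if they cannot align."""
    if isinstance(base, Expr) and isinstance(target, Expr):
        if len(base.args) != len(target.args):
            return None
        if base.predicate != target.predicate and not (
            is_function(base) and is_function(target)
        ):
            return None
        for b_arg, t_arg in zip(base.args, target.args):
            mapping = align(b_arg, t_arg, mapping)
            if mapping is None:
                return None
        return mapping
    if isinstance(base, str) and isinstance(target, str):
        if base in mapping:                    # already mapped: must agree
            return mapping if mapping[base] == target else None
        if target in mapping.values():         # keep the mapping one-to-one
            return None
        return {**mapping, base: target}
    return None                                # entity vs. expression: no match

# Base domain: a pressure difference causes water to flow between containers.
water_flow = Expr("cause", (
    Expr("greater", (Expr("pressure", ("beaker",)), Expr("pressure", ("vial",)))),
    Expr("flow", ("water", "beaker", "vial")),
))

# Target domain: a voltage difference causes current to flow between terminals.
circuit = Expr("cause", (
    Expr("greater", (Expr("voltage", ("plus_terminal",)), Expr("voltage", ("minus_terminal",)))),
    Expr("flow", ("current", "plus_terminal", "minus_terminal")),
))

print(align(water_flow, circuit, {}))
# -> {'beaker': 'plus_terminal', 'vial': 'minus_terminal', 'water': 'current'}
```

Even this toy version shows why scale matters: real descriptions of the kind people use contain hundreds of such statements, and the new SME is designed to handle representations of that size.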

"Relational ability is the key to higher-order cognition," said Gentner, Alice Gabrielle Twight Professor in Northwestern's Weinberg College of Arts and Sciences. "Although we share this ability with a few other species, humans greatly exceed other species in ability to represent and reason with relations."

Supported by the Office of Naval Research, Defense Advanced Research Projects Agency, and Air Force Office of Scientific Research, Forbus and Gentner's research is described in the June 20 issue of the journal Cognitive Science. Andrew Lovett, a postdoctoral fellow in Gentner's laboratory, and Ronald Ferguson, a PhD graduate from Forbus's laboratory, also authored the paper.

Many artificial intelligence systems, such as Google's AlphaGo, rely on deep learning, a process in which a computer learns by examining massive amounts of data. By contrast, people (and SME-based systems) often learn successfully from far fewer examples. In moral decision-making, for example, a handful of stories suffices to enable an SME-based system to learn to make decisions as people do in psychological experiments.

"Given a new situation, the machine will try to retrieve one of its prior stories, looking for analogous sacred values, and decide accordingly," Forbus said.

SME has also been used to learn to solve physics problems from the Advanced Placement test, with a program being trained and tested by the Educational Testing Service. As further demonstration of the flexibility of SME, it also has been used to model multiple visual problem-solving tasks.

To encourage research on analogy, Forbus's team is releasing the SME source code and a 5,000-example corpus, which includes comparisons drawn from visual problem solving, textbook problem solving, and moral decision making.

The range of tasks successfully tackled by SME-based systems suggests that analogy might lead to a new technology for artificial intelligence systems as well as a deeper understanding of human cognition. For example, using analogy to build models by refining stories from multiple cultures that encode their moral beliefs could provide new tools for social science. Analogy-based techniques could be valuable across a range of applications, including security, health care, and education.

"SME is already being used in educational software, providing feedback to students by comparing their work with a teacher's solution," Forbus said. But there is a vast untapped potential for building software tutors that use to help students learn."


