Shaping tomorrow's smart machines—Q&A with bioethicist Wendell Wallach

February 16, 2016 by Jim Shelton, Yale University

As intelligent machines continue to make their way into all sectors of society, a growing number of scientists, ethicists, policymakers, and business executives are converging on the idea that more thought must be given to underlying issues of machines and morality.

Already there are semi-autonomous technologies in use in military, manufacturing, and service industry settings. We have cars that avoid collisions and drones that may someday deliver packages. The question now is: What guiding principles should be employed as smarter devices begin to take a more prominent role in security and other complex matters?

Wendell Wallach, a lecturer at Yale's Interdisciplinary Center for Bioethics and chair of the center's technology and ethics study group, has explored the issue for more than a decade. He is the author of "Moral Machines: Teaching Robots Right From Wrong" and "A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control."

On Feb. 13, Wallach gave a presentation and press briefing on the topic of artificial intelligence (AI) at the American Association for the Advancement of Science's annual meeting in Washington, D.C. During the press briefing, Wallach made three policy recommendations: directing 10% of AI/robotics research funding to studying and adapting to the societal impact of intelligent machines; creating an oversight and governance coordinating committee for AI/robotics; and issuing a presidential order declaring that lethal autonomous weapons systems are in violation of international humanitarian law.

Wallach recently received a grant from inventor Elon Musk to develop a series of workshops with the Hastings Center, bringing together a variety of stakeholders around the topic of artificial intelligence and ethics.

YaleNews spoke with Wallach prior to his AAAS presentation.

Can we build machines that make moral decisions?

It is certainly possible to build machines that factor values and moral considerations into the choices and actions they take, particularly when they function within a very limited context. However, making explicit moral judgments in many different contexts depends upon a clear and full understanding of the situation at hand. This will require that intelligent machines have consciousness and other capabilities that AI researchers do not yet know how to implement within computers and robots.

How far has this technology advanced, just in the past few years?

Recent breakthroughs using a technique called "deep learning" have demonstrated solutions to long-standing roadblocks in machine perception and learning. These problems had stymied AI researchers for decades. But it is still unclear how far the new techniques will take researchers toward the holy grail of artificial intelligence and what other breakthroughs or roadblocks exist on the near horizon. In other words, there has been a great leap forward. Present-day systems can perform remarkable tasks. But smart machines are still quite primitive as far as demonstrating the intelligence and adaptive capabilities that wise, caring, and creative humans possess.

Which emerging technologies interest you the most?

Biotechnologies, AI/robotics, and neuroscience are of particular interest to me, and will all have a dramatic societal impact over the coming decades. I am also fascinated by technologies for mitigating the effects of global climate change (geoengineering), nanotechnologies, and approaches to develop new sources of energy.

CRISPR/Cas9, a new tool for quickly editing DNA, will on its own make it easier to alter the human genome and to create new organisms and biological products. The benefits of CRISPR and other forms of synthetic biology, along with advances in AI, are truly transformative, but they are also accompanied by serious risks and dangers. Addressing those risks, and managing and adapting to the societal impact of emerging technologies, has been my primary focus.

What has our development of intelligent machines taught us about human decision-making processes and ethical systems?

Building intelligent machines has forced scholars to think comprehensively about the many skills and capabilities that come into play in making appropriate decisions, including, but not limited to: emotional intelligence, social skills, the ability to deduce the beliefs and intentions of others, having a body and being embodied in the world, the capacity to recognize the meaning of words and symbols, the capacity to discern essential from inessential information, and an aptitude to be sensitive to moral considerations. Reason alone is not sufficient to produce intelligent machines capable of acting appropriately in a world inhabited by other people, animals, and an environment worthy of care and consideration.

You've spent years advocating the need for public discussion about what decisions we want machines to make for us, and the principles guiding such decisions. Has there been enough discussion?

By no means! Indeed, the few scholars advocating for responsible innovation in AI/robotics were voices in the wilderness until recent breakthroughs using deep learning approaches reawakened concern about superintelligence. However, I am hopeful that we'll make significant progress over the next few years and the coming decade toward shaping the development of AI. Nevertheless, serious questions about our ability to ensure that AI systems will be truly beneficial have yet to be answered. The public dialogue as to what we want and will accept in the development of smart machines has just begun.

What are the opportunities for making the world better?

I reject notions of inevitability, naïve techno-optimism, techno-pessimism, and simplistic techno-solutionism. Humanity needs to be vigilant if it wants to reap the benefits of technological possibilities while minimizing the harms. There are inflection points, windows of opportunity, where we can shape the trajectory of a new technology. A little adjustment early on can take us toward a very different destination. But these windows open and close very quickly. For example, there is an opportunity today to restrict the use of autonomous military weapons that make life-and-death decisions, but if we don't enact an international ban soon, that opportunity will be lost. If there is no ban, the dangers in the development of AI will increase exponentially. Technological unemployment, the downward pressure new technologies, particularly robots, place on wage and job growth, is another area that requires attention now.

If human morality has an impact on intelligent machines, does it also work in reverse? Will machines have an impact on human values?

Machines are already having an impact on human values. Indeed, the very fact that we can create such machines feeds into a scientific tendency to mechanize and pathologize human nature. On the other hand, the difficulty of developing machines capable of complex decision-making, and particularly moral decision-making, underscores what remarkable creatures we humans are.
