Re: Moral Machines

Wendell Wallach and Colin Allen raise some interesting questions in Moral Machines. Who is responsible when a driverless train runs amok? Or when an automated medical system prescribes the wrong drug, or fails to detect drug interactions? Why do we react to a robot displaying emotions as if it could feel them? Does a bomb-detecting robot deserve our love?

The sections going over basic ethics and morality raise some intriguing possibilities. Maybe long-lived entities would have an incentive to be kinder to others, since they would be more likely to suffer the ill consequences of their actions. Maybe to be moral, a machine needs to fear punishment. (But is it moral to build a machine that fears being turned off?) Maybe a computer (or anyone) needs to be omniscient in order to make good moral decisions.

But when the book tries to address the question of whether machines will ever become moral agents, it gets annoying.

My first annoyance came from the constantly used acronym AMA, for artificial moral agent. The definition is buried in the introduction without benefit of capitalization, and is impossible to find in the index. The second annoyance is how often the book cites Asimov’s Three Laws, which are better at creating dramatic dilemmas than at providing guidance. The third and biggest annoyance is how much of the book is preoccupied with artificial intelligence.

After asserting that free will and consciousness are necessary in a moral agent, the book dives into the various blind alleys of AI research. Most of the research described is tangled up in trying to achieve consciousness and social awareness rather than addressing the issue of whether it’s moral to build such machines, or who is responsible if an ill-acting machine is built without considering the consequences. As the authors note, far more people in AI are working on consciousness than on morality.

This book is better at making the case for why we need an ethical system that includes mechanical actors than it is at providing such a system. This failure reminds me of how The View From the Center of the Universe attempted to provide a mythos based on science. But then that’s not really surprising, considering that human ethical systems have failed to provide anything better than guidelines. When it comes to assessing moral dilemmas, even experts take them on a case-by-case basis; furthermore, if you ask enough people, someone will make the case for every possible side.

Don’t look here for any answers, only reminders of what questions need to be asked.