Friday, 29 April 2016

MercatorNet: Will robots ever have moral authority?




Will robots ever have moral authority?

Abdicating responsibility to machines doesn't make a person any less responsible.
Karl D. Stephan | Apr 29 2016

Robots build cars, clean carpets, and answer phones, but would you trust one to decide how you should be treated in a rest home or a hospital?  That's one of the questions raised recently by a thoughtful article in the online business news journal Quartz.  Journalist Olivia Goldhill interviewed ethicists and computer scientists who are thinking about and working on plans to enable computers and robots to make moral decisions. To some people, this smacks of robots taking over the world. Before you get out the torches and pitchforks, however, let me summarize what the researchers are trying to do.

Some of the projects are nothing more than a type of expert system, a decision-making aid that has already found wide usefulness in professions such as medicine, engineering, and law.  For example, the subject of international law can be mind-numbingly complicated.  Researchers at the Georgia Institute of Technology are trying to develop machines that will ensure compliance with international law by programming in all the relevant codes (in the law sense) so that the coding (in the computer-science sense) will lead to decisions or outcomes that automatically comply with the pertinent statutes.  This amounts to a sort of robotic legal assistant with flawless recall, but one that doesn't make final decisions on its own.  That would be left to a human lawyer, presumably.
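To give a flavour of how such a rule-based adviser might work, here is a minimal, purely illustrative Python sketch.  The rule names, the action fields, and the check_compliance helper are invented for this example and are not drawn from the Georgia Tech project or from any real legal code.

    # Toy rule-based compliance checker, loosely in the spirit of an expert system.
    # Everything here is invented for illustration.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Rule:
        name: str
        violated_by: Callable[[dict], bool]  # True if the proposed action breaks the rule

    RULES: List[Rule] = [
        Rule("no targeting of civilians",
             lambda action: action.get("target_type") == "civilian"),
        Rule("proportionality",
             lambda action: action.get("expected_harm", 0) > action.get("military_value", 0)),
    ]

    def check_compliance(action: dict) -> List[str]:
        """Return the names of any rules the proposed action would violate."""
        return [r.name for r in RULES if r.violated_by(action)]

    violations = check_compliance({"target_type": "depot",
                                   "expected_harm": 2, "military_value": 5})
    print(violations or "no rule violations found")

The point of such a system is exactly what the paragraph above describes: the machine flags conflicts with the encoded rules, and a human lawyer decides what to do about them.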

Things are a little different with a project that philosopher Susan Anderson and her computer-scientist husband Michael Anderson are working on:  a program that advises healthcare workers caring for elderly patients.  Instead of programming in explicit moral rules, they teach the machine by example.  The researchers take a few problem cases and let the machine know what they would do, and after that the machine can deal with similar problems.  So far it's all a hypothetical academic exercise, but in Japan, where more than one out of every five residents is over 65, robotic eldercare is a booming business.  It's just a matter of time until someone installs a moral-decision program like the one the Andersons are developing in a robot that may be left on its own with an old geezer such as the writer of this blog.
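The "teach by example" idea can also be sketched very simply: store a handful of labelled cases and, for a new situation, recommend whatever was done in the most similar stored case.  The toy Python below does this with a one-nearest-neighbour lookup; the features, cases, and recommended actions are invented for illustration and are not the Andersons' actual program.

    # Toy example-based moral adviser (1-nearest-neighbour). All cases invented.
    import math

    # Each case: (features, recommended action). Features, each 0..1:
    # patient refuses medication, risk of harm if skipped, decision-making capacity.
    EXAMPLE_CASES = [
        ((1.0, 0.9, 0.2), "notify physician"),   # refusal, high risk, low capacity
        ((1.0, 0.1, 0.9), "respect refusal"),    # refusal, low risk, full capacity
        ((0.0, 0.5, 0.8), "continue care"),
    ]

    def advise(features):
        """Recommend the action taken in the most similar stored case."""
        best = min(EXAMPLE_CASES, key=lambda case: math.dist(case[0], features))
        return best[1]

    print(advise((1.0, 0.8, 0.3)))   # -> notify physician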

What the Quartz article didn't address directly is the question of moral authority.  And here is where we can find some matters for genuine concern.

Many of the researchers working on aspects of robot morality evinced frustration that human morality is not, and may never be, reducible to the kind of algorithms that computers can execute.  Everybody who has thought about the question realizes that morality isn't as simple and straightforward as playing tick-tack-toe.  Even the most respected human moral reasoners will often disagree about the best decision in a given ethical situation.  But this isn't the fundamental problem in implementing moral reasoning in robots.

Even if we could come up with robots who could write brilliant Supreme Court decisions, there would be a basic problem with putting black robes on a robot and seating it on the bench.  As most people will still agree, there is a fundamental difference in kind between humans and robots.  To avoid getting into deep philosophical waters at this point, I will simply say that it's a question of authority.  Authority, in the sense I'm using it, can only vest in human beings.  So while robots and computers might be excellent moral advisers to humans, by the nature of the case it must be humans who will always have moral authority and who make moral decisions.

If someone installs a moral-reasoning robot in a rest home and lets it loose with the patients, you might claim that the robot has authority in the situation.  But if you start thinking like a civil trial lawyer and ask who is ultimately responsible for the actions of the robot, you will realize that if anything goes seriously wrong, the cops aren't going to haul the robot off to jail.  No, they will come after the robot's operators and owners and programmers—the human beings, in other words, who installed the robot as their tool, but who are still morally responsible for its actions.

People can try to abdicate moral responsibility to machines, but that doesn't make them any less responsible.  For example, take the practice of using computerized credit-rating systems in making consumer loans.  My father was a loan officer at a bank in the 1960s before such credit-rating systems came into widespread use.  He used references, such bank records as he had access to, and his own gut feelings about a potential customer to decide whether to make a loan.  Today, most loan officers have to take a customer's computer-generated numerical credit rating into account, and the job of making a loan sometimes amounts to little more than executing a complicated algorithm, one that could almost be carried out by a computer.
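To make the point concrete, the heart of such a rule could be as simple as the toy sketch below; the score threshold and debt-to-income cutoff are invented for illustration, not any bank's actual policy.

    # Toy loan-decision rule of the kind a computer could execute. Numbers invented.
    def loan_decision(credit_score: int, debt_to_income: float) -> str:
        if credit_score >= 680 and debt_to_income <= 0.36:
            return "approve"
        if credit_score >= 620:
            return "refer to loan officer"   # a human still decides the hard cases
        return "decline"

    print(loan_decision(700, 0.30))   # approve
    print(loan_decision(640, 0.45))   # refer to loan officer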

But automation did not stop the banking industry from running over a cliff during the housing crash of 2007.  Nobody blamed computers alone for that debacle—it was the people who believed in their computer forecasts and complex computerized financial instruments who led the charge, and who bear the responsibility.  The point is that computers and their outputs are only tools.  Turning one's entire decision-making process over to a machine does not mean that the machine has moral authority.  It means that you and the machine's makers now share whatever moral authority remains in the situation, which may not be much.

I say not much may remain of moral authority, because moral authority can be destroyed.  When Adolf Hitler came to power, he supplanted the established German judicial system of courts with special "political courts" that were empowered to countermand verdicts of the regular judges.  While the political courts had power up to and including issuing death sentences, history has shown that they had little or no moral authority, because they were corrupt accessories to Hitler's debauched regime.

As Anglican priest Victor Austin shows in his book Up With Authority, authority inheres only in persons.  While we may speak colloquially about the authority of the law or the authority of a book, it is a live lawyer or expert who actually makes moral decisions where moral authority is called for. Patrick Lin, one of the ethics authorities cited in the Quartz article, realizes this and says that robot ethics is really just an exercise in looking at our own ethical attitudes in the mirror of robotics, so to speak.  And in saying this, he shows that the dream of relieving ourselves of ethical responsibility by handing over difficult ethical decisions to robots is just that—a dream.

Karl D. Stephan is a professor of electrical engineering at Texas State University in San Marcos, Texas. This article has been republished, with permission, from his blog, Engineering Ethics, which is a MercatorNet partner site. His ebook Ethical and Otherwise: Engineering In the Headlines is available in Kindle format and also in the iTunes store. 





MercatorNet



Of all the moral challenges posed by our articles today, the most immediate must be that thrown down by Marcus Roberts: Do we all need to become vegan? I must say that it shook me, even at the end of a day without meat.
Most of the issues we canvass on our site have to be resolved out there in the public square, or over there in the Middle East, or up there on Mars. But the idea that what we eat is a moral decision with ramifications around the world brings humanity's problems right to one's door, or rather, to one's dinner table.
I have for some time been aware that the Mediterranean diet, rich in fruit and vegetables and favouring fish and olive oil over meat and dairy products, is considered the healthiest, and have made changes in that direction -- even though it seems slightly unpatriotic in a country whose economy is built on roast lamb and butter. New Zealand without dairy farms and sheep grazing on the slopes would simply be a different country, but perhaps, given our predominantly sedentary lifestyles these days, that is the country we need.
More to the point, it may be the country the world needs, to reduce carbon emissions and global warming if nothing else. I can't quite connect the dots between the Kiwi Sunday roast and world hunger, but perhaps I need to read Marcus' post again. In particular, his encouraging last sentence...
Enjoy your weekend -- and go easy on the meat.


Carolyn Moynihan

Deputy Editor,

MERCATORNET



The hidden dualism of transgenderism

Andrew Mullins | FEATURES | 29 April 2016
Persons are a body and soul package, not reducible to body or psyche.

New same-sex parenting study: more anger, irritation

Mark Regnerus | FEATURES | 29 April 2016
Is the science settled, or just unsettling?

Will robots ever have moral authority?

Karl D. Stephan | FEATURES | 29 April 2016
Abdicating responsibility to machines doesn't make a person any less responsible.

The challenge of feeding ten billion people

Marcus Roberts | DEMOGRAPHY IS DESTINY | 29 April 2016
Do we all need to become vegan?

How Amoris Laetitia moves us beyond the nuclear family

John Coverdale | ABOVE | 29 April 2016
Family is the foundational social network.

