Monday, June 4, 2012

RoboEthics

We, as a species, are always telling ourselves 'stories' about how we relate to the rest of life on Earth. Most of these stories get rehashed and repackaged through the ages, so one is familiar with them even if one disagrees with them. One of the main stories humans are telling themselves in the current epoch is this: we are the bringers-forth of technology, the agents of the Earth's technological advance. Playing that role demands a certain amount of creative destruction - destroying in order to save. The Economist has this to say:


In the classic science-fiction film “2001”, the ship’s computer, HAL, faces a dilemma. His instructions require him both to fulfill the ship’s mission (investigating an artifact near Jupiter) and to keep the mission’s true purpose secret from the ship’s crew. To resolve the contradiction, he tries to kill the crew. As robots become more autonomous, the notion of computer-controlled machines facing ethical decisions is moving out of the realm of science fiction and into the real world. Society needs to find ways to ensure that they are better equipped to make moral judgments than HAL was.

Military technology, unsurprisingly, is at the forefront of the march towards self-determining machines. Its evolution is producing an extraordinary variety of species. There is a flying surveillance drone the weight of a wedding ring, and one that carries 2.7 tons of bombs. Robots are spreading in the civilian world, too, from the flight deck to the operating theatre. Passenger aircraft have long been able to land themselves. Driverless trains are commonplace. Fully self-driving vehicles are being tested around the world; Google’s driverless cars digitally map neighborhoods.

As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming - or at least appearing to assume - moral agency. Weapons systems currently have human operators “in the loop”, but as they grow more sophisticated, it will be possible to shift to “on the loop” operation, with machines carrying out orders autonomously while humans merely supervise.
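In software terms, the difference is who holds the default. A minimal sketch, using hypothetical callback names rather than any real control system:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str

def perform(action):
    print(f"executing: {action.description}")

def execute_in_the_loop(action, operator_approves):
    # "In the loop": the machine only proposes; nothing happens
    # until a human explicitly authorizes the action.
    if operator_approves(action):
        perform(action)

def execute_on_the_loop(action, operator_vetoes, window_s=5.0):
    # "On the loop": the machine acts by default; the human
    # supervises and can only veto within a time window.
    if not operator_vetoes(action, window_s):
        perform(action)

# Hypothetical operators: one who approves everything, one who never vetoes.
execute_in_the_loop(Action("track contact"), lambda a: True)
execute_on_the_loop(Action("track contact"), lambda a, window: False)
```

The ethical weight shifts with the default: in the first function, operator inaction is safe; in the second, operator inaction lets the machine act.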

As that happens, they will be presented with ethical dilemmas. Should a drone fire on a house where a target is known to be hiding, which may also be sheltering civilians? Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Should a robot involved in disaster recovery tell people the truth about what is happening if that risks causing a panic? Such questions have led to the emergence of the field of “machine ethics”, which aims to give machines the ability to make such choices appropriately - in other words, to tell right from wrong.
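One crude way to make such a choice mechanical is to score each option's expected harm and pick the minimum. The sketch below is a toy consequentialist rule with made-up numbers, not a description of how any deployed system decides:

```python
def least_harm(options):
    # Toy rule: choose the option with the lowest estimated harm.
    # The hard part of machine ethics hides inside these numbers:
    # who estimates harm, on what scale, and weighed against whom?
    return min(options, key=lambda opt: opt["expected_harm"])

# Hypothetical driverless-car dilemma from the paragraph above.
dilemma = [
    {"action": "brake in lane",         "expected_harm": 0.8},
    {"action": "swerve into next lane", "expected_harm": 0.5},
    {"action": "swerve onto pavement",  "expected_harm": 0.9},
]
print(least_harm(dilemma)["action"])  # -> swerve into next lane
```

Even this trivial rule begs every interesting question, which is why machine ethics is a research field rather than a library call.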

One way of dealing with these difficult questions is to avoid them altogether, by banning autonomous robots. But autonomous robots could do much more good than harm. Instead, society needs to develop ways of dealing with the ethics of robotics - and to get going fast. The best-known set of guidelines for robo-ethics is the “three laws of robotics” coined by Isaac Asimov, a science-fiction writer. The laws require robots to protect humans, obey orders and preserve themselves - in that order. Unfortunately, the laws are of little use in the real world.
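The precedence in “in that order” is easy to render as a naive priority check, and doing so makes plain why the laws are of little use: every predicate below is a stub standing in for an unsolved problem. A toy sketch only:

```python
def harms_human(action):
    # Stub. Deciding whether an action harms a human (or allows
    # harm through inaction) is itself the hard, unsolved problem.
    return action.get("harms_human", False)

def is_ordered(action):
    return action.get("ordered_by_human", False)

def endangers_robot(action):
    return action.get("endangers_robot", False)

def permitted(action):
    # First Law: never harm a human.
    if harms_human(action):
        return False
    # Second Law: obey human orders, unless that would violate
    # the First Law (already excluded above).
    if is_ordered(action):
        return True
    # Third Law: self-preservation, subordinate to the first two.
    return not endangers_robot(action)

# An ordered action that endangers the robot is still permitted:
print(permitted({"ordered_by_human": True, "endangers_robot": True}))  # True
```

The ordering is trivial to encode; the predicates are not, which is the point.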

Regulating the development and use of autonomous robots will require a rather more elaborate framework. Progress is needed in three areas in particular: (i) to allocate responsibility, autonomous systems must keep detailed logs so that they can explain the reasoning behind their decisions when necessary; (ii) where ethical systems are embedded in robots, the judgments they make need to be ones that seem right to most people; and (iii) engineers and ethicists must collaborate more closely to draw up new rules for robots that are left to their own devices.
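Point (i) is the most concrete of the three. A minimal sketch of an append-only decision log, flight-recorder style; the record fields and the JSON Lines format are assumptions for illustration:

```python
import json
import time

def log_decision(path, inputs, options, chosen, rationale):
    # One self-contained record per decision, appended to a file,
    # so the reasoning can be reconstructed after the fact.
    record = {
        "timestamp": time.time(),
        "inputs": inputs,                # summarized sensor state
        "options_considered": options,
        "chosen": chosen,
        "rationale": rationale,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision(
    "decisions.jsonl",
    inputs={"pedestrian_detected": True, "speed_kph": 48},
    options=["brake in lane", "swerve into next lane"],
    chosen="brake in lane",
    rationale="lowest estimated harm under current model",
)
```

An append-only text log is a deliberately dumb design: the harder it is to edit in place, the more it is worth as evidence when responsibility is being allocated.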

Both ethicists and engineers stand to benefit from working together: ethicists may gain a greater understanding of their field by trying to teach ethics to machines, and engineers need to reassure society that they are not taking any ethical short-cuts. Technology has driven mankind’s progress, but each new advance has posed troubling new questions. Autonomous machines are no different. The sooner the questions of moral agency they raise are answered, the easier it will be for mankind to enjoy the benefits that they will undoubtedly bring.
