Military searching for ways to replace man with machines

John Markoff | N.Y. Times News Service
[Photo caption: An armed robot, called MAARS, performs during a demonstration at the Robotics Rodeo at Fort Benning, Ga., in October.]

FORT BENNING, Ga. | War would be a lot safer, the Army says, if only more of it were fought by robots.

And while smart machines are already very much a part of modern warfare, the Army and its contractors are eager to add more. New robots — none of them particularly human-looking — are being designed to handle a broader range of tasks, from picking off snipers to serving as indefatigable night sentries.

In a mock city used by Army Rangers for urban combat training, a 15-inch robot with a video camera scuttles around a bomb factory on a spying mission. Overhead, an almost silent drone aircraft with a 4-foot wingspan transmits images of the buildings below. Onto the scene rolls a sinister-looking vehicle on tank treads, about the size of a riding lawn mower, equipped with a machine gun and a grenade launcher.

Three backpack-clad technicians, standing out of the line of fire, operate the three robots with wireless video-game-style controllers. One swivels the video camera on the armed robot until it spots a sniper on a rooftop. The machine gun pirouettes, points and fires in two rapid bursts. Had the bullets been real, the target would have been destroyed.

The machines, viewed at a “Robotics Rodeo” last month at the Army’s training school, not only protect soldiers, but also are never distracted, using an unblinking digital eye, or “persistent stare,” that automatically detects even the smallest motion. Nor do they ever panic under fire.

“One of the great arguments for armed robots is they can fire second,” said Joseph W. Dyer, a former vice admiral and the chief operating officer of iRobot, which makes robots that clear explosives, as well as the Roomba robot vacuum cleaner. When a robot looks around a battlefield, he said, the technician who is seeing through its eyes can take time to assess a scene without firing in haste at an innocent person.

Yet the idea that robots on wheels or legs, with sensors and guns, might someday replace or supplement human soldiers is still a source of extreme controversy. Because robots can stage attacks with little immediate risk to the people who operate them, opponents say that robot warriors lower the barriers to warfare, potentially making nations more trigger-happy and leading to a new technological arms race.

“Wars will be started very easily and with minimal costs” as automation increases, predicted Wendell Wallach, a scholar at the Yale Interdisciplinary Center for Bioethics and chairman of its technology and ethics study group.

Civilians will be at greater risk, people in Wallach’s camp argue, because of the challenges in distinguishing between fighters and innocent bystanders. That job is maddeningly difficult for human beings on the ground. It only becomes more difficult when a device is remotely operated.

This problem has already arisen with Predator aircraft, which find their targets with the aid of soldiers on the ground but are operated from the United States. Because civilians have died as a result of collateral damage or mistaken identities, Predators have generated opposition.

But robot combatants are supported by a range of military strategists, officers and weapons designers — and even some human rights advocates.

“A lot of people fear artificial intelligence,” said John Arquilla, executive director of the Information Operations Center at the Naval Postgraduate School. “I will stand my artificial intelligence against your human any day of the week and tell you that my AI will pay more attention to the rules of engagement and create fewer ethical lapses than a human force.”

His faith in machines is already being tested.

“Some of us think that the right organizational structure for the future is one that skillfully blends humans and intelligent machines,” Arquilla said. “We think that that’s the key to the mastery of 21st-century military affairs.”

Automation has proved vital in the wars America is fighting. In the air in Iraq and Afghanistan, unmanned aircraft with names like Predator, Reaper, Raven and Global Hawk have kept soldiers safer. Moreover, the military routinely uses more than 6,000 tele-operated robots to search vehicles at checkpoints as well as to disarm one of the enemies’ most effective weapons: the IED, or improvised explosive device.

Yet the shift to automated warfare may offer only a fleeting strategic advantage to the United States. Fifty-six nations are developing robotic weapons, said Ron Arkin, a Georgia Institute of Technology roboticist and a government-financed researcher who has argued that it is possible to design “ethical” robots that conform to the laws of war and the military rules of escalation.

But the ethical issues are far from simple. Last month in Germany, an international group including artificial intelligence researchers, arms control specialists, human rights advocates and government officials called for agreements to limit the development and use of tele-operated and autonomous weapons.

The group, known as the International Committee for Robot Arms Control, said warfare was accelerated by automated systems, undermining the capacity of human beings to make responsible decisions. For example, a gun that was designed to function without humans could shoot an attacker more quickly and without a soldier’s consideration of subtle factors on the battlefield.

“The short-term benefits being derived from roboticizing aspects of warfare are likely to be far outweighed by the long-term consequences,” said Wallach, the Yale scholar, suggesting that wars would occur more readily and that a technological arms race would develop.

As the debate continues, so do the Army’s automation efforts. In 2001, Congress gave the Pentagon the goal of making one-third of the ground combat vehicles remotely operated by 2015. That seems unlikely, but there have been significant steps in that direction.

Such programs represent a resurgence in the development of autonomous systems in the wake of costly failures and the cancellation of the Army’s most ambitious such program in 2009. That program was once estimated to cost more than $300 billion and was expected to provide the Army with an array of manned and unmanned vehicles linked by a futuristic information network.

Military technologists assert that tele-operated, semi-autonomous and autonomous robots are the best way to protect the lives of American troops.

Army Special Forces units have bought six lawn-mower-size robots for classified missions, and the National Guard has asked for dozens more to serve as sentries on bases in Iraq and Afghanistan. These units are known as the Modular Advanced Armed Robotic System, or MAARS, and they are made by a company called QinetiQ North America.

Remotely controlled systems like the Predator aircraft and MAARS bring concerns about the automation of warfare a step closer to reality. What happens, ask skeptics, when humans are taken out of the decision to fire weapons? Despite the insistence of military officers that a human's finger will always remain on the trigger, the speed of combat is quickly becoming too fast for human decision makers.

“If the decisions are being made by a human being who has eyes on the target, whether he is sitting in a tank or miles away, the main safeguard is still there,” said Tom Malinowski, Washington director for Human Rights Watch, which tracks war crimes. “What happens when you automate the decision? Proponents are saying that their systems are win-win, but that doesn’t reassure me.”