
An ex-Pentagon official thinks 'killer robots' need to be stopped

[Image: "Terminator Genisys" concept art. Courtesy of Paramount Pictures]

The dystopian war between robots and humans of the "Terminator" films is probably not going to happen, but there is still reason to worry about so-called "killer robots."


A new report by Paul Scharre of the Center for a New American Security argues that, as militaries develop semi- and fully autonomous weapons systems such as missiles and drone aircraft, they risk "potentially catastrophic consequences" if human controllers are taken out of the loop.

"Anyone who has ever been frustrated with an automated telephone call support helpline, an alarm clock mistakenly set to ‘p.m.’ instead of ‘a.m.,’ or any of the countless frustrations that come with interacting with computers, has experienced the problem of ‘brittleness’ that plagues automated systems," Scharre, a former Army Ranger who helped draft policies related to autonomous weapons systems for the Pentagon, writes in the report.

His main point: Automated systems can be very useful, but they are limited by their programming and lack the "common sense" a human might apply in an unfamiliar situation.
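To see what that brittleness looks like in code, here is a toy Python sketch, ours rather than anything from the report, of the a.m./p.m. alarm failure Scharre mentions. The program does exactly what its rules say and nothing more:

```python
# Toy illustration of "brittleness": a rule-based controller only handles
# the cases its author imagined.

def alarm_should_ring(hour: int, meridiem: str) -> bool:
    """Ring a 7:00 a.m. alarm; anything outside the rules is ignored."""
    return meridiem == "am" and hour == 7

print(alarm_should_ring(7, "am"))  # True  -- the anticipated case
print(alarm_should_ring(7, "pm"))  # False -- the mis-set clock never rings
print(alarm_should_ring(7, "AM"))  # False -- unanticipated input; no common
                                   #          sense steps in to handle it
```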

Such was the case in 1983, when the skepticism of Stanislav Petrov, a Soviet military officer, was the fail-safe that kept the Soviet Union from launching its missiles after an early-warning system reported five incoming missiles from the United States.


It was a computer error. A fully automated system, acting on the same data, would have launched. But Petrov rightly believed the system was malfunctioning.

"What would an autonomous system have done if it was in the same situation as Stanislav Petrov found himself on September 26, 1983? Whatever it was programmed to do."

Software error, cannot compute. Launch missile?

Scharre evaluates a number of past failures, involving both humans and automated systems, to illustrate his point. Humans were in the loop during the disasters at Three Mile Island and Fukushima, yet these rare accidents expose the problem with complex, tightly coupled systems.

In the case of Fukushima, for instance, many of the safety features triggered by loss of power, flooding, and earthquakes worked as designed, but the engineers had not accounted for the possibility that all three could happen at the same time.
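A back-of-the-envelope calculation, with made-up numbers, shows why a common cause undermines safeguards that were analyzed as if they were independent:

```python
# Toy numbers, purely illustrative. Suppose each of three safeguards
# fails one time in a thousand.
p_fail = 1e-3

# If the failures were truly independent, all three failing at once
# would be vanishingly rare:
print(p_fail ** 3)     # 1e-09

# But an earthquake that also cuts power and causes flooding is a single
# common cause, so the joint failure probability collapses toward the
# probability of that one event:
p_common_cause = 1e-4  # assumed rate of one event defeating all three
print(p_common_cause)  # 1e-04 -- five orders of magnitude more likely
```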

[Photo: A Tokyo Electric Power Co. (TEPCO) employee in a protective suit and mask uses a survey meter near tanks that leaked radioactive water at the tsunami-crippled Fukushima Daiichi nuclear power plant, November 7, 2013. REUTERS/Kimimasa Mayama/Pool]

Engineers may be able to hypothesize and program a machine's response to nightmare scenarios, but on a "long enough time horizon," Scharre writes, "unanticipated system interactions are inevitable."

When something unanticipated happens to a computer that isn't programmed to deal with it, plenty can go wrong. Most computer users know the infamous "blue screen of death," software needs constant updates to patch bugs, and security holes are often discovered only after hackers have exploited them.

"Without a human in the loop to act as a fail-safe, the consequences of failure with an autonomous weapon could be far more severe than an equivalent semi-autonomous weapon," he writes.

Scharre instead advocates a framework in which humans and machines work together, which he calls "centaur warfighting." It's based on Garry Kasparov's model of "centaur" chess, also called advanced chess, in which a chess engine helps a human player think through the next move.


"The best chess players in the world are human-machine teams," he writes.
