
Here's why we should build killer robots

[Image: Terminator still. Getty]

Last Monday, more than a thousand artificial-intelligence researchers cosigned an open letter urging the United Nations to ban the development and use of autonomous weapons.


Presented at the 2015 International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, the letter was signed by prominent artificial-intelligence researchers, including Google director of research Peter Norvig, alongside Tesla and SpaceX CEO Elon Musk and physicist Stephen Hawking. Since last Monday, more than 16,000 additional people have signed the letter, according to The Guardian.

The letter warns that the development of autonomous weapons, meaning weapons that can target and fire without a human at the controls, could bring about a "third revolution in warfare," much as the invention of guns and nuclear bombs did before it.

While killer robots sound terrifying, there are real reasons that weapons powered by sophisticated AI might even be preferable to human soldiers.

Autonomous weapons would take human soldiers out of the line of fire and potentially reduce the number of casualties in wars. Killer robots would be better soldiers all around — they're faster, more accurate, more powerful, and can take more physical damage than humans.


Stuart Russell, an AI researcher and coauthor of "Artificial Intelligence: A Modern Approach," is a vocal advocate of a ban on autonomous weapons. Yet even though he fears that autonomous weapons could fall into the wrong hands, he acknowledges that there are some valid arguments in their favor.

"I've spent quite a long time thinking about what position I should take," Russell told Tech Insider. "They can be incredibly effective, they can have much faster reactions than humans, they can be much more accurate. They don't have bodies so they don't need life support ... I think those are the primary reasons various militaries, not just the UK but the US, are doing this."

Autonomous weapons wouldn't become afraid, freeze up, or lose their tempers. They can do their jobs without allowing their emotions to color their actions. IEEE Spectrum's Evan Ackerman wrote that autonomous weapons could be programmed to follow the rules of engagement and other laws that govern war.

"If a hostile target has a weapon and that weapon is pointed at you, you can engage before the weapon is fired rather than after in the interests of self-protection," he wrote. "Robots could be even more cautious than this, you could program them to not engage a hostile target with deadly force unless they confirm with whatever level of certainty that you want that the target is actively engaging them already."


Robot ethicist Sean Welsh echoes this idea in The Conversation, where he writes that killer robots would be "completely focused on the strictures of International Humanitarian Law and thus, in theory, preferable even to human war fighters who may panic, seek revenge, or just plain [mess] stuff up."

Ackerman suggests doing away with the misconception that technology is either "inherently good or bad" and focusing instead on how it is used. He proposes coming up with a way to make "autonomous armed robots ethical."

"Any technology can be used for evil," Ackerman wrote. "Banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil: we'd need a much bigger petition for that."

Heather Roff, a contributor to the open letter and a professor at the University of Denver's Josef Korbel School of International Studies, wouldn't disagree with him.


"The United States military doesn't want to give up smart bombs," Roff told Tech Insider. "I, frankly, probably wouldn't want them to give that up. Those are very discriminate weapons. However, we do want to limit different weapons going forward [that] have no meaningful human control."

And this recent public outcry may not be enough to stop an international war machine that is already building semi-autonomous weapons that can identify and aim at targets on their own. Many of these, such as the Australian navy's anti-missile and close-in weapons systems, attract no scrutiny or objections.

"Why? Because they're employed far out at sea and only in cases where an object is approaching in a hostile fashion," defense researcher Jai Gaillot wrote for The Conversation. "That is, they're employed only in environments and contexts whereby the risk of killing an innocent civilian is virtually nil, much less than in regular combat."

And, as Ackerman writes, it may be impossible to stop the tank now that it's rolling, and "barriers keeping people from developing this kind of systems are just too low."
