What if drones and killer robots were able to autonomously target and kill people?
Killer robots once seemed a science-fiction fantasy, but technological advancements have propelled them into the realm of imminent reality.
A killer robot is one that can autonomously target and kill people with little or no human control.
Governments, scientists and tech leaders have been struggling with basic questions about deadly robots for years. Military proponents suggest that killer robots could provide advantages that include:
- Improved targeting that is driven by AI and algorithms and could reduce the proportion of innocent people killed in military engagements
- Saving the lives of troops and police by putting the robot in the line of fire
- Ways to program the robots to embed the ethical rules of their owners (pdf)
Those against deadly robots include some of the world’s most prominent tech and scientific minds, including Stephen Hawking and Elon Musk, who argue in an open letter about autonomous weapons:
- An AI killer robot arms race could emerge across the world
- Their relatively low cost could make them ubiquitous
- Terrorists and dictators will eventually get hold of them
- Autonomous weapons could be used for assassinations, subduing populations, or selectively killing ethnic populations
There Are No Laws Against Killer Robots
Back in 1942 the writer Isaac Asimov famously wrote a fictional set of three laws for robots, requiring that they never injure a human being. In reality, there are no laws today that prevent the rise of killer robots.
Biological weapons and space-based weapons have been banned by international treaties for some time, but there is no comparable agreement that prevents robots from killing humans. So far the UN has not called for a ban on killer robots, and no country has forsworn their development or eventual use – even as artificial intelligence and machine learning advance rapidly.
When Will They Be Here?
Rapid technological advancements are making the arrival of killer robots imminent, according to experts. For instance:
- Armed drones, which today have human control over killing decisions, could become autonomous with new software
- AI and machine learning technology are advancing quickly, already powering autonomous vehicles of all types and facial recognition systems
- Robot technology is becoming more available and is less expensive than ever before
Precedents are out there
The first instance of a robot being used to kill was in 2016 in Dallas, Texas, when police rigged a robot with a bomb to kill a sniper who had gunned down 11 police officers, killing five of them.
In the Dallas case, the technology behind the killer robot was simple: a remote-controlled robot rigged with a remote-controlled bomb. But more advanced killer robots already exist, some of which can automatically locate human targets and decide whether or not to kill them.
Military Killer Robots
Some robots are essentially military systems that target other military systems. For instance, the U.S. Aegis and Patriot missile defense systems are both able to target and fire at missiles or aircraft by themselves. Israel’s Harpy drone is capable of locating and launching itself like a missile at radar installations.
On the seas, the U.S. Navy has developed the Sea Hunter, an unmanned drone ship – a robot, if you will – that autonomously hunts submarines. The sub hunter, along with other U.S. experiments to develop robots, drone ships, and other systems, represents what some say is an emerging U.S. strategy to exploit technology to maintain military superiority.
Other countries are developing lethal robotics too. In South Korea, SGR-1 turret guns in the De-Militarized Zone have the capability to target and fire at enemy soldiers – although they currently need human approval to shoot.
But perhaps the most startling killer robot ever is a drone submarine that Russia is planning to develop. Plans for the sub, which were unceremoniously unveiled by Russia, call for it to carry a large nuclear weapon as it sails autonomously to coastal cities and military targets before destroying them with blasts and tsunamis.
Fear of a robotic arms race
The future seems primed for a transition into more automated weaponry, including killer robots.
Rapid, private-sector advances in computing power, big data, artificial intelligence (AI), sensors, miniaturization, robotics and 3D printing may lead to unmanned killer robots becoming cheap and widely available, according to the Center for a New American Security’s (CNAS) 20YY report (pdf) on robotic war.
Somewhere between 75 and 87 countries currently have unmanned aircraft in their militaries, according to Foreign Policy. Twenty-six of those countries deploy larger systems equivalent to the U.S. Predator drone.
It’s not a huge leap to imagine those unmanned systems being equipped with a targeting system and a decision algorithm, then sent off to war.
CNAS argues (pdf) that we could be seeing a military-technical revolution, similar to gunpowder or the atomic bomb, and that it could topple the U.S. military’s current high-tech dominance.
Others have argued that an arms race is unlikely to happen, as these systems only provide a limited advantage over manned systems. Such voices may prove to be naive however, as AI, machine learning, and neural networks rapidly mature and become more powerful and less expensive.
The ethics of autonomous weapons
Still, the rapid development of weaponized robotics has caused concerns.
The United Nations’ Human Rights Council has already called for a moratorium (pdf) in a paper on the development of lethal autonomous robots.
In the paper, it raises the concern that such machines would not be adequately able to comply with international human rights law.
It also stated that there’s no proper way of determining legal accountability (if killer robots kill a civilian, who’s responsible?), and that “… robots should not have the power of life and death over human beings.”
At a recent Chatham House talk on the subject, similar concerns were raised: the risk of civilian casualties may be high because it is difficult to program a robot to make subjective decisions about the proportionality of force.
For instance, a human will have few problems determining whether to use a handgun or a rocket launcher to take out an enemy in a crowded area. A robot would have to be specifically programmed to understand this, however.
The possibility of strategic instability was also raised, where the interaction between two nations’ autonomous weapons might inadvertently cause war.
In 2016 two prominent arms control groups published a report calling for human control of autonomous battle systems.
Is the rise of the robot armies inevitable?
As experts on the topic are quick to point out, this isn’t Terminator. There won’t be all-robot armies anytime soon – and they probably won’t look like Austrian bodybuilders, or in fact any kind of humanoid.
However, for tasks where machines excel over humans, there are clear use case scenarios.
A good example is so-called “dull, dirty, and dangerous” tasks such as, say, fighting a decade-long insurgency in an unconquerable desert. These tasks make people miserable and therefore bad soldiers, but are perfect for ever-patient, ever-vigilant robots.
There’s also the obvious benefit that more robotic forces would mean fewer war casualties, and would allow a smaller army to do more damage (a huge U.S. priority, as we’ve noted).
As for inevitability, well: even if these weapons were restricted or even banned by the international community, they would likely be developed anyway, by the states that respect the law the least.
And even if a ban were put in place, it would be very hard to enforce, because the autonomy would be determined by the software, not the hardware.