Photo: UK Ministry of Defence via Flickr
Semi-autonomous missiles capable of deciding whom to attack, and how, are already in the hands of militaries around the world, and they continue to make strides in sophistication.
‘Killer robots,’ as they’ve been dubbed, are on the table for discussion once again following a UN meeting about the moral and ethical implications of the world’s newest class of increasingly autonomous missiles.
Using a combination of artificial intelligence software and advanced communication systems, these cutting-edge missiles can seek out and destroy targets almost entirely without human direction.
Despite these advances, many are cautioning against their deployment. But just how dangerous is this new wave of robotic weapons?
How do they work?
Countries developing autonomous missiles include the United States, China, Israel, Russia, and the United Kingdom. Though these weapons aren’t completely independent of human control just yet, the technology has come a long way.
One of the most technologically advanced new weapons so far is the experimental Long Range Anti-Ship Missile (LRASM), which can be launched from either sea or air.
The LRASM works as follows:
- First, the missile is launched and guided by human operators
- As it nears its target, the missile switches into autonomous mode
- Using its onboard sensors, it identifies its programmed target and calculates a plan of attack and impact
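The three-step handoff above can be sketched as a toy state machine. This is purely illustrative: every function name, phase label, and threshold below is a hypothetical assumption for the sake of the sketch, not a detail of the actual LRASM.

```python
# Illustrative sketch only. The handoff distance, phase names, and
# decision logic are invented for this example; no real guidance
# system is this simple.

def guidance_phase(distance_to_target_km: float, handoff_km: float = 50.0) -> str:
    """Step 1 vs. step 2: operator guidance until a (hypothetical) handoff range."""
    if distance_to_target_km > handoff_km:
        return "operator-guided"   # launched and guided by human operators
    return "autonomous"            # switches into autonomous mode

def select_action(phase: str, sensors_confirm_target: bool) -> str:
    """Step 3: in autonomous mode, plan an attack only on a sensor-confirmed target."""
    if phase != "autonomous":
        return "follow-operator"
    return "plan-attack" if sensors_confirm_target else "continue-search"
```

The point of the sketch is the structure of the handoff, not the details: control authority is a function of flight phase, and the attack decision is gated on both the phase and a sensor match.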
The LRASM has already been developed and successfully tested.
A key driver behind the development of weapons like the LRASM is their ability to skirt enemy defense systems, using onboard artificial intelligence to avoid radar detection.
A few semi-autonomous missiles which have already been developed are:
- The U.K.’s Brimstone missile
As detailed by The New York Times, this missile can hunt for specific targets such as tanks, buses, or cars, all without the oversight of human operators.
- Israel’s Harpy missile
This missile loiters in the sky, scouting for enemy radar. Upon identifying a target, it calculates an impact route and attacks without human prompting.
The advancement of AI technology in weapons systems has drawn its fair share of critics. Common concerns include:
- Compliance with human rights law
Without human oversight, critics fear that AI weapons may be more likely to violate human rights law by failing to distinguish between civilian and military targets.
- Lack of international guidelines
Because autonomous and semi-autonomous weapons are so new, there is little in the way of rules of engagement, which critics see as a recipe for chaos.
- Accountability
Without the guidance of human commanders, automated missiles may lack a clear chain of accountability if a situation goes awry.
While the debate over the ethics and potential pitfalls of autonomous missiles continues, there has been little to no progress in halting their development, even with some UN experts calling for a moratorium.
As it stands, without any clear and mandatory guidelines in place, the advancement of automated weapons will remain on autopilot.