Autonomous Killer Robots Are Here To Stay


Is it time to stop the killer robots?

With advances in military technology, machine learning experts believe it would be technologically straightforward to build AI-powered systems that decide to use lethal force without any human intervention.

So far, the US has used military drones in its operations, but the decision to fire has remained with a human. "Lethal autonomous weapons," usually labeled killer robots, would decide whom to target and kill without a human in the loop. Advances in facial recognition and decision-making algorithms make them increasingly feasible. The proposed weapons would mostly be drones rather than humanoid robots, and they are expected to be cheaper and smaller to build.

Militaries around the world seem interested in developing them. But there has been a massive outcry from researchers in AI and public policy. They agree that there are ways to lessen the collateral damage and destruction of war, but they object to fully autonomous weapons, which raise a number of moral, technical, and strategic dilemmas. They have pushed the United Nations and world governments to consider a preemptive ban.

Killer Robots

The US has regularly employed drones in countries where it is engaged in military operations, with human controllers deciding when and where those drones fire. Lethal autonomous weapons would replace the human with an algorithm as the decision-maker about whether to shoot.

“Technologically, autonomous weapons are much easier than self-driving cars,” Stuart Russell, a computer science professor at UC Berkeley and leading AI researcher, told us. “People who work in related technologies think it would be relatively easy to put together a very effective weapon in less than 2 years.”

So will these weapons resemble something out of Hollywood science fiction? "When people hear the term 'killer robots,' they think science fiction, they think Terminator, they think of something that is far away," Toby Walsh, a professor of AI at the University of New South Wales (Australia) and an activist against lethal autonomous weapons development, said. "Instead, it is simpler technologies that are much nearer, and that are being prototyped as we speak."

Today's drones transmit video footage back to a military base, where a soldier decides whether the drone should fire on a target. The simplest version of a killer robot would use existing military drone hardware, with an algorithm making the decision to fire instead.

The algorithm could have a list of highly wanted people it is allowed to target, and fire only if it has identified one of them in its video footage. Or it could be trained on video footage of combat to predict whether a human would order it to shoot, and fire if it predicts that is the order it would be given. Or it could be directed to fire on anyone in a war zone who is holding a visually identifiable weapon, such as a gun, and not wearing the uniform of friendly forces.

In the past few years, there have been breakthroughs in AI. Things that seemed impossible a few years ago are now achievable with new techniques. Whether it is writing stories, creating fake faces, or making instantaneous tactical decisions in online war games, tasks have gone from extremely complex to straightforward. Facial recognition has gotten much more accurate, as has object recognition, two capabilities that would likely be essential for lethal autonomous weapons.

But the rapid pace of this advancement has left the question of whether lethal autonomous weapons are acceptable to develop or use far behind.

Why the demand for killer robots?

Humans are out and the algorithm is in, but why would anyone want that? Designing weapons that can choose targets on their own has serious moral and strategic implications.

First, the simple argument for autonomous weapons from a military viewpoint is that they open up new capabilities. If every drone must be individually controlled by a human who makes the critical decision about when it may fire, the number of drones that can be in the sky at once is limited.

Moreover, today's drones have to send information to and receive it from their base. That introduces delay and leaves them exposed: enemies can intercept or jam the communication links.

LAWS would change that. "Because you don't need a human, you can launch thousands or millions of [autonomous weapons] even if you do not have thousands or millions of human beings to look after them," Walsh said. "They don't have to fear jamming, which is probably one of the best ways to protect against human-operated drones."

But that is not the only case being made for autonomous weapons. Supporters also argue that humans have emotions that cloud their decisions.

"The most interesting argument for autonomous weapons," Professor Walsh told us, "is that robots can be a lot more principled." Humans, after all, sometimes commit war crimes, deliberately targeting innocents or killing those who have surrendered. And humans get stressed, fatigued, and confused, and end up making mistakes. "Robots, by contrast, follow exactly what's in their code," he said.

Former US Army Ranger and Pentagon defense expert Paul Scharre discusses the idea in his book Army of None: Autonomous Weapons and the Future of War. "Unlike human soldiers," he argues, "machines never get angry or seek revenge." And "it isn't hard to imagine future weapons that could outperform humans in distinguishing between a person holding a rifle and one holding a rake."

But Scharre goes on to point out a serious flaw in this argument: "What's right and what's legal aren't always the same." He recounts a time when his unit in Afghanistan was scouting and its presence was discovered by the Taliban. The Taliban fighters sent out a six-year-old girl, who unconvincingly pretended to herd her goats while reporting the US soldiers' location to the Taliban by radio.

"The laws of war don't set an age for combatants," Scharre points out in the book. Under those laws, the girl was taking part in military action against the US soldiers, and it would have been lawful to shoot her. The soldiers dismissed the idea, because killing a child is wrong. But a robot programmed simply to follow the laws of war would not weigh considerations like that. Soldiers have at times committed heinous war crimes, but at other times they do better than the law requires because they also follow moral codes. Robots would not.

Emilia Javorsky, the founder of Scientists Against Inhumane Weapons, suggested that there is a much better way to use robots to prevent war crimes. According to her, humans and machines make different mistakes, so if they work in unison, both kinds of mistakes can be avoided. In medicine, for example, diagnostic algorithms make one kind of error, while doctors tend to make a different kind.

Thus, it is suggested that we could design weapons that are programmed with the laws of war, that will refuse any order from a human which violates those laws, and that do not have the authority to kill without human oversight. Scientists Against Inhumane Weapons and others who study LAWS have no objection to systems like those. What they strongly advocate, as a matter of international law and as a focus for weapons research and development, is that there should always be a human in the loop.

If that path is pursued, there is room for significant advances: robots with automated safeguards against mistakes, combined with human input to make sure the automated decisions are the right ones. But right now, analysts worry that we are moving toward full autonomy: a world where robots decide to kill people without any human input.
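To make that human-in-the-loop pattern concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it comes from any real military or oversight system; the names, the confidence threshold, and the rules check are all hypothetical. The point is the structure: the automated component can refuse an action on its own, but it can never authorize one without explicit human confirmation.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str          # what the automated system proposes
    rationale: str       # why it proposes it
    confidence: float    # model confidence, 0.0 to 1.0

def violates_rules(rec: Recommendation, prohibited: list[str]) -> bool:
    """Hypothetical rules check: block any action matching a prohibited category."""
    return any(term in rec.action for term in prohibited)

def human_approves(rec: Recommendation) -> bool:
    """A human operator must explicitly confirm; anything else counts as a refusal."""
    answer = input(f"Approve '{rec.action}' ({rec.rationale})? [y/N] ")
    return answer.strip().lower() == "y"

def decide(rec: Recommendation, prohibited: list[str]) -> bool:
    # The automated system may veto on its own, but it can never authorize on its own:
    # authorization always requires explicit human confirmation.
    if violates_rules(rec, prohibited):
        return False
    if rec.confidence < 0.9:  # low-confidence recommendations are rejected outright
        return False
    return human_approves(rec)
```

The same gate describes the medical analogy above: the algorithm can flag or reject a case by itself, but a doctor signs off before anything is acted on.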

Why oppose killer robots?

The fear is that fully autonomous weapons would make killing people efficient and cheap, which is a serious problem in its own right if they fall into the wrong hands. But opponents of lethal autonomous weapons warn that the consequences could be much worse than that.

People take part in a demonstration in Berlin on March 21, 2019, as part of the "Stop Killer Robots" campaign organized by the German NGO Facing Finance to ban what they call killer robots. The campaign to "Stop Killer Robots" is a global coalition of 82 international, regional, and national NGOs in 35 countries that calls for a ban on fully autonomous lethal weapons. (Photo by Wolfgang Kumm / AFP / Getty Images)

One thing is certain: if LAWS development continues, there may come a point when the weapons are extremely inexpensive. Drones can already be purchased or built by hobbyists quite cheaply, and prices are likely to fall as the technology improves. And if the US deployed such drones, many of them would likely be scavenged or captured. "If you create an easily proliferated and cheap weapon of mass destruction, it will be used against Western countries," Russell told us.

Lethal autonomous weapons could also become a tool for genocide and ethnic cleansing. According to Ariel Conn, the communications director at the Future of Life Institute, drones could be programmed to target a specific kind of person.

There are also implications for broader AI development. Currently, US machine learning and AI research is ahead of the pack, and the US military is reluctant to agree that it will not exploit that advantage on the battlefield. "The United States military thinks it's going to maintain a technical advantage over its opponents," Walsh told us.

The Movement to Stop Killer Robots works to persuade policymakers that lethal autonomous weapons should be banned internationally.

Experts warn that this sense of superiority opens the US up to some of the scariest possible scenarios for artificial intelligence. Many scientists believe that advanced AI systems carry a massive, unrealized risk of disastrous failures: going wrong in ways humanity cannot reverse once the systems are developed, and, in the worst case, potentially wiping us out.

To prevent such a grave outcome, AI development needs to be collaborative, open, and cautious. Scientists should not conduct important AI research in closed secrecy, where no one can point out their errors. If AI research is open, collective, and shared, there is a higher probability of noticing and correcting serious problems with advanced AI designs.

And most significantly, advanced AI researchers should not be in a hurry. “We are trying to prevent an AI race,” Conn said. “No one wants this race, but just because no one wants it doesn’t mean it will not happen, and one of the things that can trigger that is a race focused on weapons.”

If the US, with its superior AI systems, leans too heavily on its AI advantage in warfare, other countries will undoubtedly accelerate their own military AI efforts, and that could create exactly the circumstances under which AI mistakes are both most likely and most deadly.

What have researchers proposed?

Researchers are strongly advocating a ban on LAWS, pointing to the successful ban on biological weapons. That ban was adopted in 1972, at a time of rapid advances in bioweapons research and growing awareness of the risks of biowarfare.

The ban on biological weapons succeeded for several underlying reasons. First, state actors did not have much to gain from using them. The case against biological weapons mainly revolved around the fact that they were unusually cheap weapons of mass destruction, and widespread access to cheap weapons of mass destruction is bad for nearly every state.

Opponents of LAWS have tried to make the case that killer robots are very similar. "My view is that it does not matter what my fundamental moral position is because that is not going to convince a government of anything," Russell said. Instead, he has concentrated on the argument that "we struggled for seventy-odd years to contain and prevent nuclear weapons from falling in the wrong hands. In large quantities, [LAWS] would be much cheaper, as lethal, much easier to proliferate," and that is not in anyone's national security interest.

The Movement to Stop Killer Robots works to persuade policymakers that lethal autonomous weapons should be banned internationally. However, the UN debate over a lethal autonomous weapons treaty is moving at a snail's pace. There are two significant underlying factors. First, the UN's process for international treaties is deliberative and slow, while rapid technological change is reshaping the strategic picture around lethal autonomous weapons faster than that process is set up to handle. Second, and more importantly, the treaty faces powerful opposition.

So far, the US, along with the UK, Israel, Australia, and South Korea, has shown strong resistance to a United Nations treaty banning lethal autonomous weapons. The stated US reason is that since LAWS could in some cases have humanitarian benefits, a ban before those benefits have been explored would be "premature." Current Defense Department policy is that there will be appropriate human oversight of AI systems.

Advocates of a ban, however, argue that a treaty should be concluded as quickly as possible: the sooner the matter is dealt with, the better. "It's going to be almost impossible to keep [LAWS] to narrow use cases in the military," Javorsky maintains. "That is going to spread to use by non-state actors." It is usually easier to ban things before anyone already has them and wants to keep the tools they are already using. So campaigners have worked for the past several years to bring LAWS up for debate at the United Nations, where the details of a treaty can be hammered out.

There is a lot to sort out. What exactly makes a system autonomous? If South Korea installs gun systems along the Demilitarized Zone with North Korea that automatically shoot intruders, that is a lethal autonomous weapon, but it is also similar to a land mine. "Arguably, it can be slightly better at discriminating than a minefield can, so maybe it even has advantages," Russell said.

Or take, for example, "loitering munitions," an existing technology. Fired into the air, Scharre writes, they circle a wide area until they home in on the radar systems they are meant to destroy. No human is involved in the final decision to attack. These are autonomous weapons, though they target radar systems rather than humans.

These issues, along with others, need to be debated and resolved before a meaningful UN ban on autonomous weapons can exist. And with the United States opposed, an international treaty against lethal autonomous weapons is unlikely to succeed.

There is another form of activism that might discourage military uses of AI: the reluctance of AI researchers to work on such projects. Prominent AI researchers in the US are mostly in Silicon Valley, not working for the US military, and relations between Silicon Valley and the military have so far been tense. Google employees protested when it became public that Google was working with the Department of Defense on drone analysis through Project Maven, and the contract was not renewed. Microsoft employees have similarly objected to military uses of their work.

By refusing to build the software that would power killer robots, tech workers can at least delay the work until a treaty is in place, or campaign to make such an agreement happen, and there are indications that they are inclined to do so.

Is this frightening?

Killer robots could have devastating effects. In the hands of totalitarian states or non-state actors, they could become a tool for killing thousands of humans. That sounds bad enough.

But in several ways, the situation with lethal autonomous weapons is just one demonstration of a much bigger trend.

AI is changing everything. What seemed far-fetched a few years ago is now possible, and at a pace that outstrips not only our expectations but also the ability of law and public policy to keep up. As AI systems become more powerful, this change will become more and more destabilizing.

Decision-making power is gradually shifting from humans to algorithms. Whether it is killer robots or fake news, algorithms used to shoot suspected combatants or algorithms trained to make parole decisions about prisoners, we are handing over increasingly critical aspects of society to systems that are not fully understood and that are optimizing for goals that may not reflect our own.

Advanced AI systems are not here yet. But at the pace the field is developing, they get closer every day, and it is time to make sure we will be prepared for them. Now is the right time to map out the problems that further advances will bring and to formulate sound policy and international agreements before these science-fiction scenarios become reality.
