The risk from robot weapons


One of the greatest risks of the next few decades is likely to come from robotic weapons, AI (artificial intelligence) weapons, or lethal autonomous weapons (LAWs).

In August 2017 as many as 116 specialists from 26 countries, including some of the world’s leading robotics and artificial intelligence pioneers, called on the United Nations to ban the development and use of killer robots.

They even said that this arms race threatens to usher in the ‘third revolution’ in warfare, after gunpowder and nuclear arms. They wrote, “Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.” “We do not have long to act,” the letter warned.

“Once this Pandora’s box is opened, it will be hard to close,” it added. Ryan Gariepy, the founder of Clearpath Robotics, has said, “Unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapon systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability.”

The Economist (27 January 2018) noted in its special report titled ‘The Future of War’, “At least the world knows what it is like to live in the shadow of nuclear weapons. There are much bigger question marks over how the rapid advances in artificial intelligence (AI) and deep learning will affect the way wars are fought, and perhaps even the way people think of war. The big concern is that these technologies may create autonomous weapon systems that can make choices about killing humans independently of those who created or deployed them.”

This special report distinguished between three types of AI or robot weapons: (i) ‘in the loop’, with a human constantly monitoring the operation and remaining in charge of critical decisions; (ii) ‘on the loop’, with a human supervising the machines and able to intervene at any stage of the mission; and (iii) ‘out of the loop’, with the machine carrying out the mission without any human intervention once launched.

Fully autonomous robot weapons (the third category) are obviously the most dangerous. A letter warning against a coming race in these weapons was signed in 2015 by over 1,000 AI experts. An international coalition, the ‘Campaign to Stop Killer Robots’, has been working steadily for a ban and related objectives. Elon Musk has pinpointed competition for AI superiority at the national level as the “most likely cause of World War 3”.

A recent widely discussed review of new weapons said, “The possibilities of killer robots can no longer be dismissed. Stephen Hawking, Elon Musk, Bill Gates and many other experts believe that, handled badly, general AI could be an existential threat to the human race.” (A general AI machine would be able to carry out almost any intellectual task humans are capable of.)

Further, this report notes, “the biggest change in the way wars are fought will come from deploying lots of robots simultaneously.” Paul Scharre, an expert on autonomous weapons, has written that “collectively, swarms of robotic systems have the potential for even more dramatic, disruptive change to military operations.”

One possibility he mentions is that tiny 3D-printed drones could be formed into smart clouds that permeate a building or are air-dropped over a wide area to look for hidden enemy forces. Meanwhile, several countries are surging ahead with rapid advances in robot weapons.

In 2014 the Pentagon announced its ‘Third Offset Strategy’, with special emphasis on robotics, autonomous systems and ‘big data’, intended to help the USA maintain its military superiority.

In July 2017 China presented its “Next-Generation Artificial Intelligence Development Plan”, which gives AI a crucial role as a transformative technology in civil as well as military areas, with emphasis on ‘military-civil fusion’.

As the arms race in AI weapons escalates, there will be a constant temptation to actually use them to test their capabilities. Peter Singer, an expert on future warfare at New America, a think tank, has said that very powerful forces propel the AI arms race: geopolitical compulsions, scientific advances and profit-seeking high-technology companies.

Scharre has also raised the possibility that autonomous military systems could malfunction, whether because of badly written code or a cyber attack by an adversary, raising the risk of attacks on civilians or on soldiers of the same side, or of conflicts and killing escalating to unintended, vastly exaggerated levels.

The Economist has written, “The fast approaching revolution in military robotics is in a different league. It poses daunting ethical, legal, policy and practical problems, potentially creating dangers of an entirely new and, some think, existential kind.”

The UN’s Convention on Certain Conventional Weapons (CCW) has the mandate to prohibit or restrict certain weapons recognised as causing unjustifiable suffering. Discussions about robot weapons and lethal autonomous weapons (LAWs) have been held at this forum, but clearly robot weapons cannot be regarded as conventional weapons.

The Campaign to Stop Killer Robots wants a legally binding international treaty banning LAWs. This is an issue that should get the increasing support of all who believe firmly in peace and disarmament.

The writer is a freelance journalist who has been involved with several social movements and initiatives.