
Sharkey, Noel --- "Automating Warfare: Lessons Learned from the Drones" [2012] JlLawInfoSci 8; (2012) 21(2) Journal of Law, Information and Science 140


Automating Warfare: Lessons Learned from the Drones

COMMENT BY NOEL SHARKEY[*]

Abstract

War fighting is currently undergoing a revolution. Robotic platforms carrying weapons are being fielded at an increasing rate. Plans from all of the US armed forces indicate a massive build-up of military robots, and at least 50 other countries have either bought them or have military robotics programmes.[1] Currently all armed robots in the theatre of war are remotely controlled by humans; so-called man-in-the-loop systems. Humans are responsible for both target selection and decisions about lethal force. But this is set to change. The role of the person in the loop will shrink and eventually vanish. Are we ready for this step? Do we understand the limits of the technology and how massive increases in the pace of battle will leave human responsibility in the dark? Before moving to autonomous operation we need to consider the lessons learned from the application of the current remotely piloted armed robots. Four areas are considered here: (i) moral disengagement; (ii) targeted killings in covert operations; (iii) expansion of the battle space; (iv) the illusion of accuracy.

Introduction

Since 2004, all of the Roadmaps and plans of the US forces have discussed the requirements for the development and deployment of autonomous battlefield robots.[2] The UK Ministry of Defence Joint Doctrine Note[3] follows suit. Fulfilment of these plans to take humans out of the loop is well underway. There will be a staged progression towards autonomous operation: first for flight (take-off, navigation, obstacle avoidance etc), then for target selection. The end goal is that robots will operate autonomously to locate their own targets and destroy them without human intervention.[4]

The term autonomy can be very confusing for those not working in robotics. It has the flavour of robots thinking for themselves. But this is just part of the cultural myth of robotics created by science fiction. Autonomy in robotics is more related to the term automatic than it is to individual freedom. An automatic robot carries out a pre-programmed sequence of operations or moves in a structured environment. A good example is a robot arm painting a car.

An autonomous robot is similar to an automatic machine except that it operates in open or unstructured environments. The robot is still controlled by a program but now receives information from its sensors that enables it to adjust the speed and direction of its motors (and actuators) as specified by the program. For example, an autonomous robot may be programmed to avoid obstacles in its path. When the sensors detect an object, the program simply adjusts the motors so that the robot moves to avoid it; if the left-hand sensors detect the object, the robot moves right, and if the right-hand sensors detect the object, the robot moves left.
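By way of illustration only, a minimal sketch of such a sensor-driven control loop might look like the following; the sensor and motor interfaces are hypothetical placeholders, not any particular robot's programming interface.

    # Minimal sketch of the obstacle-avoidance behaviour described above.
    # The sensor and motor objects are hypothetical placeholders.
    def avoid_obstacles(left_sensor, right_sensor, motors):
        if left_sensor.detects_object():
            motors.turn_right()    # object on the left: steer right
        elif right_sensor.detects_object():
            motors.turn_left()     # object on the right: steer left
        else:
            motors.go_straight()   # path clear: keep course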

Even those who should know better can confuse the issue. For example, the UK Ministry of Defence (MoD) Joint Doctrine Note begins its definition of autonomy as follows: ‘An autonomous system is capable of understanding higher level intent and direction.’[5] The problem with this statement is that, apart from metaphorical use, no system is capable of ‘understanding’, never mind ‘understanding higher level intent’. On that definition, autonomous robots would not be possible in the foreseeable future. This is not just pickiness about language on my part. Correctly defining what is meant by ‘autonomous’ has very important consequences for the way that the military, policy makers and manufacturers think about the development of military robots.

This confusion also shows up later in the MoD document in a discussion about artificial intelligence (AI): ‘Estimates of when artificial intelligence will be achieved (as opposed to complex and clever automated systems) vary, but the consensus seems to lie between more than 5 years and less than 15 years, with some outliers far later than this.’ But it is ludicrous to say that ‘artificial intelligence will be achieved’. AI is a field of inquiry that began in the 1950s and is a term used to describe work in that field, so an AI program is a program that uses artificial intelligence methods. In that sense, it was achieved more than 50 years ago. Perhaps what the MoD is trying to suggest by this statement is that AI programs will become as intelligent as humans or more so within this timeframe. If that is the case, then where does the consensus of 5 to 15 years come from unless it is the consensus from a few outlier scientists?

Chapter 6 of the Joint Doctrine Note states that, ‘True artificial intelligence, whereby a machine has a similar or greater capacity to think like a human will undoubtedly be a complete game changer, not only in the military environment, but in all aspects of modern life’.[6] The Note continues, ‘The development of artificial intelligence is uncertain and unlikely before the next 2 epochs.’ However, there is no way of knowing how long an epoch is and one cannot help but wonder how this relates to the 5 to 15 years mentioned earlier.

It is worth repeating here that autonomy is not about thinking robots. This is particularly important when it comes to discussions about robots making life and death decisions. The often-misunderstood robot decision process should not be confused with human decision making except by weak analogy. A computer decision process can be as simple as, IF object on left, THEN turn right OR IF object on right, THEN turn left, ELSE continue. Alternatively, the activity on a sensor may activate a different sub-program to help with the decision. For example, to get smoother passage through a field laden with objects, a sub-program could be called in to calculate if a turn to the left would result in having to negotiate more obstacles than a turn to the right.

Programs can become complex through the management of several sub-programs by processes set up to make decisions about which sub-program should be initiated in particular circumstances. But the bottom line for decision making by machine, whether it is using mathematical decision spaces or AI reasoning programs, is the humble IF/THEN statement.
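As a purely illustrative sketch (the sub-program and the obstacle-count measure are invented for the example), even the management layer reduces to the same IF/THEN form:

    # Hypothetical sketch: the dispatcher that picks a manoeuvre is itself
    # just a chain of IF/THEN tests over a simple numerical comparison.
    def obstacles_if_turning(direction, obstacles):
        # Count the obstacles lying on the chosen side (a stand-in cost measure).
        return sum(1 for side in obstacles if side == direction)

    def choose_manoeuvre(obstacles):
        left_cost = obstacles_if_turning("left", obstacles)
        right_cost = obstacles_if_turning("right", obstacles)
        if left_cost < right_cost:
            return "turn left"
        elif right_cost < left_cost:
            return "turn right"
        else:
            return "continue"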

Another misunderstanding about autonomy is that it is all or nothing. A system does not have to be exclusively autonomous or exclusively remote operated. There is a continuum from fully controlled to fully autonomous and different groups slice the pie differently. The US Army, Navy and Air Force all discuss the classification of military robots on a continuum from totally human operated to fully autonomous.[7] Each has separate development programmes and each has its own operational definitions of the different levels of robot autonomy. The Army has ten levels while the Air Force has four. The Navy characterises autonomy in terms of mission complexity but points to three different classes of autonomous robot vehicle: (i) scripted; (ii) supervised; and (iii) intelligent. The US National Institute of Standards and Technology (NIST) has been attempting to develop a generic framework for describing levels of autonomy for some time.[8]

Despite the gloss and discussion about what exactly constitutes each level of autonomy, there is an inexorable move toward the development of autonomous systems that carry weapons. It is perhaps said too often that for the time being there will be a person somewhere in the loop. But the role of that person is seen as shrinking until it is vanishingly small: ‘humans will no longer be “in the loop” but rather “on the loop” – monitoring the execution of certain decisions. Simultaneously, advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input.’[9] So essentially a person will be on the loop to send in the swarm and possibly call it off if there is radio or satellite contact.

What type of autonomy this is, what it’s called and what level is ascribed to it seems largely irrelevant to the overarching goal of automating warfare. Autonomous systems that can select targets and possibly kill them are likely to pose a number of ethical and legal problems as I have pointed out elsewhere.[10]

In brief, the main ethical problem is that no autonomous robots or artificial intelligence systems are able to discriminate between combatants and non-combatants. Allowing them to make decisions about who to kill would fall foul of the fundamental ethical precepts of the laws of war under jus in bello and the various protocols set up to protect civilians, wounded soldiers, the sick, the mentally ill, and captives. There are no visual or sensing systems up to the challenge of competently making such decisions. A computer can compute any given procedure that can be written down in a programming language. We could, for example, give the computer on a robot an instruction such as, ‘if civilian, do not shoot’. This would be fine if and only if there were some way to give the computer a clear definition of what a civilian is. The laws of war certainly do not offer a definition which can be used to provide a machine with the necessary information. The 1949 Geneva Conventions require the use of common sense while the 1977 Protocol I essentially defines a civilian in the negative sense as someone who is not a combatant.
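To make the difficulty concrete, here is a purely hypothetical sketch of what such an instruction would require in code; the predicate it rests on is precisely what cannot be supplied.

    # Hypothetical sketch only: the targeting rule is easy to state in code,
    # but it depends on a predicate for which no operational definition exists.
    def is_civilian(track):
        # Protocol I (1977) defines a civilian only negatively, as someone who
        # is not a combatant; no sensor or program supplies the positive test.
        raise NotImplementedError("no computable definition of 'civilian'")

    def engagement_rule(track):
        if is_civilian(track):
            return "do not shoot"
        return "requires human judgement"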

However, even if there were a clear computational definition of a civilian, robots do not have the sensing capabilities to differentiate between civilians and combatants. Current sensing apparatus and processing can just about tell us that something resembles a human, but little else. Moreover, it is not always appropriate to kill all enemy combatants. Both discrimination and appropriateness require reasoning. There are no AI systems that could be used for such real-world inferences.

There is also the principle of proportionality and again there is no sensing or computational capability that would allow a robot to make such a determination, nor is there any known metric to objectively measure needless, superfluous or disproportionate suffering.[11] This requires human judgment. Yes, humans do make errors and can behave unethically, but they can be held accountable. Who is to be held accountable for the lethal mishaps of a robot? Certainly not the machine itself. There is a long causal chain associated with robots: the manufacturer, the programmer, the designer, the department of defence, the generals or admirals in charge of the operation and the operator.

Despite these ethical and legal problems there is an inexorable drive towards the development of autonomous systems. As early as 2005, the Committee on Autonomous Vehicles in Support of Naval Operations wrote,

The Navy and Marine Corps should aggressively exploit the considerable warfighting benefits offered by autonomous vehicles (AVs) by acquiring operational experience with current systems and using lessons learned from that experience to develop future AV technologies, operational requirements, and systems concepts. [12]

1 Lessons Learned from the Drone Wars

Unfortunately, the lessons alluded to in the paragraph above are really about the weaknesses of remotely piloted robots and how military advantage can be increased by getting closer to autonomy. These include: (i) remotely operated systems are more expensive to manufacture and require many support personnel to run them; (ii) it is possible to jam either the satellite or radio link or take control of the system; (iii) one of the military goals is to use robots as force multipliers so that one human can be a nexus for initiating a large scale robot attack from the ground and the air; (iv) the delay time in remotely piloting a craft via satellite (approximately 1.5 seconds) means that it could not be used for interactive combat with another aircraft. At a press briefing in December 2007, Dyke Weatherington, deputy director of the US DoD’s Unmanned Aerial Systems Task Force, said,

Certainly the roadmap projects an increasing level of autonomy ...to fulfill many of the most stressing requirements. Let me just pick one for example. Air-to-air combat — there’s really no way that a system that’s remotely controlled can effectively operate in an offensive or defensive air combat environment. That has to be — the requirement of that is a fully autonomous system.[13]

We can only hope that the ‘lessons learned’ will include ethical and international humanitarian law (IHL) issues. Otherwise the use of autonomous robots will amplify and extend the ethical problems already being encountered. In the following we examine four of the areas where ethical and legal lessons should be learned before there are moves to autonomous operation.

1.1 Moral disengagement

Remote pilots of armed attack planes like the Reaper MQ-9 and the Predator MQ-1 have no need to worry about their personal safety. Sitting in cubicles thousands of miles away from the action, they can give a mission their full attention without worrying about being shot at. They are on highly secure home ground where no pilot has ever been safer. It can be argued that this alleviates two of the fundamental obstacles that warfighters must face:[14] fear of being killed[15] and resistance to killing.[16] This does not create legal problems for the use of remotely piloted aircraft (RPA) any more than the use of any other distance weapons such as artillery or missiles. But what about the moral problems? Royakkers and van Est argue that sitting in cubicles controlling planes several thousand miles from the battlefield encourages a ‘Playstation’ mentality. The so-called ‘cubicle warriors’ are both emotionally and morally disengaged from the consequences of their actions. Royakkers and van Est suggest that new recruits may have been playing videogames for many years and may not see a huge contrast between playing video games and being a cubicle warrior.[17] They provide examples from Peter Singer’s book Wired for War[18] in which young cubicle warriors are reported as saying how easy it is to kill.

The counter argument is that because remote pilots often get to see the aftermath of their actions on high-resolution monitors, they are less morally disengaged than the crews of high altitude bombers or fighter pilots. It is also argued that remote pilots undergo a new kind of stress caused by going home to their families in the evening after a day on the battlefields of Afghanistan. There is currently no scientific research on this issue and no way to resolve the arguments.

In an interview for the Air Force Times,[19] Col Chris Chambliss, a commander of the 432nd Wing at Creech, said that on only four or five occasions had sensor operators gone to see the Chaplain or their supervisors and that this was a very small proportion of the total number of remote operators. Other pilots interviewed said that they had not been particularly troubled by their missions although they could sometimes make for a strange existence.

But the legal issue here is not whether there is a new kind of stress for remote combatants or whether they are morally buffered by distance. The legal issues revolve around whether or not the new stresses or moral disengagement impact on targeting decisions. Are remote pilots more careless about taking the lives of those who should be immune from attack than other military forces? Currently, targeting decisions for conventional forces are not the sole responsibility of the pilots themselves; there is a chain of command where decisions about lethal targeting involve others such as a commander and a legal representative from the Judge Advocate General’s office. Matt Martin, a veteran drone pilot who served in both Iraq and Afghanistan, tells of many of the frustrations of dealing with commanders and lawyers taking time over decisions while he watched legitimate targets escape.[20]

The big worry is that even if this chain of command for the appropriate application of lethal force works now, it is difficult to see how the same control could be carried forward as the number of armed robots dramatically increases. There are barely enough pilots and sensor operators now, never mind commanders and lawyers. The lesson is that the number of remotely piloted operations should not grow beyond what a chain of command can support while maintaining a tightly controlled ethical and legal decision structure.

For fully autonomous armed systems, by definition, all of the checking for the legality of targets and intelligence would have to be carried out prior to launching the systems. There is a strong lesson to be learned from current RPA use. Since so many mishaps already occur with humans firmly in the loop, it is unlikely that autonomous lethal targeting, with its lack of discriminative apparatus and human decision-making (as discussed in the introduction), will meet legal requirements.

1.2 Targeted killings in covert operations

The second lesson to be learned concerns the covert use of RPA by the intelligence services. The CIA now effectively has an armed remotely piloted ‘Air Force’, controlled, possibly by civilian contractors, from Langley in Virginia, USA. The CIA was the first in the US to use armed drones when, in 2002, it killed five men travelling in a sport utility vehicle in Yemen.[21] Department of Defense lawyers considered this to be a legitimate defensive pre-emptive strike against al-Qaeda. Since then, the use of drones for targeted killings or ‘decapitation strikes’ in states that are not at war with the US has become commonplace. The Asia Times has called the CIA drone strikes ‘the most public “secret” war of modern times’.[22]

Estimates of the number of drone strikes in Pakistan have been published on the websites of both the New America Foundation[23] and the Brookings Institution[24] and are shown in Table 1. The number of civilian deaths has been very difficult to estimate and has ranged from as few as 20 to more than a thousand.

Table 1: High and low estimates of drone strike deaths in Pakistan 2004-2011

              Number of            Estimates of drone kills
              drone strikes        High               Low                Leaders
Years         NAF      BI          NAF      BI        NAF      BI        NAF
2004-07         9       9          109     112         86      89          3
2008           34      35          296     313        263     273         11
2009           53      53          709     724        413     368          7
2010          118     117          993     993        607     607         12
2011*          31      21          199     177        138     122          1

*Up until 27 May 2011. NAF = New America Foundation; BI = Brookings Institution.

Are these targeted killings legal under international humanitarian law? Their legality is at best questionable. ‘Decapitation’ is used to mean cutting off the leaders of an organisation or nation fighting a war from the body of their warfighters. The stated goal of the aerial decapitation strikes was to target al-Qaeda and Taliban leaders without risk to US military personnel. Eventually, so the story goes, this would leave only replacement leaders from the shallowest end of the talent pool and so render the insurgents ineffective and easy to defeat. However, if this was the genuine goal, it is not working well. The Table shows clearly that the proportion of leaders killed (estimates of which are provided by the New America Foundation only) to others killed is extremely low, even by the low estimates of both the New America Foundation and the Brookings Institution, with the figures showing that fewer than 1 in 50 of those killed were leaders.

These individually targeted killings are taking place despite the US ban on all politically motivated killing of individuals that followed the famous Church Committee report on CIA political assassinations in 1975. In 1976, President Ford issued a presidential executive order that ‘no person employed by or acting on behalf of the United States Government shall engage in, or conspire to engage in, assassination.’ This became Executive Order (EO) 12333 under the Reagan administration and all subsequent presidents have kept it on the books. The pro-decapitation argument is that EO 12333 does not limit lawful self-defense options against legitimate threats to the national security of US citizens.[25] During wartime, a combatant is considered to be a legitimate target at all times. If a selected individual is sought out and killed it is not termed an assassination. According to a Memorandum on EO 12333, which is said to be consistent with Article 51 of the Charter of the United Nations (the ‘Charter’),

a decision by the President to employ clandestine, low-visibility, or overt military force would not constitute assassination if US military forces were employed against the combatant forces of another nation, a guerrilla force, or a terrorist or other organization whose actions pose a threat to the security of the United States.[26]

But an insurgent war with no state actors involved complicates the picture. The legal question is now: do the US intelligence services have a right to assassinate alleged insurgent combatants without due process? Seymour Hersh, whose writings were one of the main motivations for the Church Committee, complained that ‘the targeting and killing of individual Al-Qaeda members without juridical process has come to be seen within the Bush Administration as justifiable military action in a new kind of war, involving international terrorist organizations and unstable states’.[27] The insurgents have been redefined as combatants, but without receiving the rights of prisoners of war (because they do not wear uniforms) and without being given the chance to surrender or to face trial. This move, in combination with an appeal to Article 51,[28] has been used to provide legal cover for the right to assassinate insurgent combatants.

Philip Alston, UN Special Rapporteur on extrajudicial killings, challenged the legality of the targeted killings at a UN General Assembly meeting in October 2009. He requested that the US provide legal justification for the CIA’s targeting and killing of suspects and asked who was accountable. The US refused to comment on what it said were covert operations and a matter of national security.

US Department of State legal advisor Harold Koh rebutted Alston indirectly, stating that ‘US targeting practices including lethal operations conducted by UAVs comply with all applicable law including the laws of war.’[29] However, there are no independent means of determining how the targeting decisions are being made. It remains unclear what type and level of evidence is being used to reach conclusions that effectively amount to death sentences by Hellfire for non-state actors without right to appeal or right to surrender. It is also unclear what other methods, if any, were exhausted or attempted to bring the suspects to justice. The whole process is taking place behind a convenient cloak of national secrecy.

US law Professor Kenneth Anderson also questioned the CIA’s use of drones in a prepared statement to a US Senate hearing:

[Koh] nowhere mentions the CIA by name in his defense of drone operations. It is, of course, what is plainly intended when speaking of self-defense separate from armed conflict. One understands the hesitation of senior lawyers to name the CIA’s use of drones as lawful when the official position of the US government, despite everything, is still not to confirm or deny the CIA’s operations.[30]

However, the former Director of the CIA, Leon Panetta, has been more vocal about the operations. In 2009, he told the Pacific Council on International Policy that ‘it’s the only game in town in terms of confronting and trying to disrupt the al-Qaeda leadership.’[31] Revealing the CIA’s intentions on the expansion of targeted drone kills, Panetta went on to say of al-Qaeda that, ‘If they’re going to go to Somalia, if they’re going to go to Yemen, if they’re going to go to other countries in the Middle East, we’ve got to be there and be ready to confront them there as well. We can’t let them escape. We can’t let them find hiding places.’[32]

This proposed expansion of targeted killing is just what was concerning the UN Special Rapporteur on extrajudicial killings. A subsequent report by Alston in 2010 to the UN Human Rights Council[33] discusses drone strikes as violating international and human rights law because both require transparency about the procedures and safeguards in place to ensure that killings are lawful and justified: ‘a lack of disclosure gives States a virtual and impermissible license to kill.’ Some of Alston’s arguments also revolve around the notion of ‘the right to self-defence’ and whether the drone strikes are legal under Article 51.

Given the CIA’s enthusiasm for armed drones, it is likely that they will be among the first to use autonomous drones to kill. Deep questions need to be answered about any covert and unaccountable use of such indiscriminate weapons. As it is impossible for those not directly involved to determine whether a fully autonomous drone or a remotely controlled aircraft is being used to carry out a particular mission, the onus rests with the CIA to honestly identify the nature of the weapon being used.

Thinking further ahead to the use of autonomous drones for targeted killings, imagine for a moment that autonomous drones could be even more discriminate than any living human. Given the current hit record of the CIA and the dubious legality and secrecy surrounding the accountability of targeted killings, would we really want to automate the assassination of those alleged to be working against US interests without recourse to any other legal process?

1.3 Expansion of the battle space

Attacking with remotely piloted vehicles (RPVs) is not much different under the Laws of War than attacking with a manned helicopter gunship or even with artillery. The worry is that the nature of an unmanned vehicle with no risk to military personnel, an ability to hover over an area for very many hours, and its perceived accuracy is leading to a considerable expansion of potential targets. RPVs are seen as the best weapons system for fighting combatants in a city environment where it is either too risky, inappropriate or unacceptable to employ ground forces. The Libyan uprising in 2011 provides an example of the latter situation, where Predators were deployed to protect civilians. It is too early to tell whether this was simply PR or was part of a much larger agenda.

Another example of expansion of the battle space is the use of RPA to conduct covert ‘targeted killings’ by the CIA in countries that are not at war with the US, as mentioned in the previous section — eg Yemen, Somalia and Pakistan. It would not be acceptable to bomb these countries from high altitude or to attack them with helicopter gunships. For example, Pakistan’s reaction to the killing of bin-Laden by Navy Seals contrasts markedly with its less strident reaction to targeted killings carried out by drones on its territory. This reveals that it is somehow more palatable to use unmanned systems (that are touted as having a high degree of accuracy) in built up areas.

Panetta, amongst others, has argued that armed UAVs are more accurate and will kill fewer civilians than a B-52 bomber when attacking the tribal regions in Pakistan. But as a former CIA operative told me, there is no way that Pakistan or other state actors not at war with the US could ‘turn a blind eye’ to the bomber strikes as they do now for drones. It can be argued that it is their perceived precision and accuracy that allows them to penetrate areas and kill people in ways that would not previously have been available without major political and legal obstacles.

The battlefield is also being expanded by the persistence of drones. The Predator and Reaper can fly for up to approximately 26 hours while the unarmed Global Hawk holds the flight endurance record of 33.1 hours. More recently, QinetiQ developed the Zephyr, a much smaller and less powerful solar drone, which has stayed aloft for 336 hours, 22 minutes and 8 seconds. QinetiQ has now teamed up with Boeing as part of the Defense Advanced Research Projects Agency’s (DARPA) Vulture project, which is aiming to achieve uninterrupted flight for a period of five years using a heavier-than-air platform.

What this means is that autonomous drones could be left over any area that might possibly have military or defensive interests for very extended periods of time. This makes it possible to maintain armed vigilance with little cost and possibly little evidence of risk to the country employing the drones. Autonomous robots will only lead to greater expansion of the battle space.

1.4 The illusion of accuracy

Both the US Predator and Reaper RPA are equipped with high-resolution cameras that provide visualisation for remote pilots, their commanders, the legal team and other commanders on the ground near the action. However, if the estimated numbers of civilian deaths resulting from drone attacks are to be believed, accuracy is an illusion. It is easy to mistake targets from the air. For example, on 23 June 2009, as many as 60 people attending the funeral of a Taliban fighter were killed in South Waziristan when CIA drones struck.[34] In February 2010, a US military drone was involved in an attack in Oruzgan (Afghanistan) in which 23 innocents, including women and children, were killed. The civilians, travelling by convoy, had been misidentified as insurgents.[35]

Another reason why the accuracy is not as good as it says on the tin is that many of the strikes are conducted on buildings or at night where the inhabitants are not visible except as temperature signatures picked up by infrared sensors.[36] In these instances, unreliable ground intelligence is often responsible for mishaps. In a recent example (April 2011) where infrared imaging was used, two figures were seen moving towards coalition troops. A drone strike was initiated that took the lives of what turned out to be a US Marine staff sergeant and a Navy seaman on their way to reinforce the troops.

One of the oft-cited targeting methods of the CIA is to locate people through their cell phones; switch on your phone and you receive a Hellfire missile delivered from a drone. But a recent software lawsuit between two companies casts doubt on the accuracy of this targeting method.[37] A small company called Intelligent Integration Systems alleges that one of its client companies, Netezza, reverse engineered its software, Geospatial, on a tight deadline for the CIA. The court heard that the illegal version of the software could produce locations that were out by as much as 40 feet and that the CIA had knowingly accepted the software.

But even if targeting were 100% accurate, how can we be sure that alleged insurgents are ‘guilty as charged’? Information about a target’s identity, role and position is heavily dependent on the reliability of the intelligence on which it is based. There are lessons that should have been learned from the Vietnam War investigations of Operation Phoenix in which thousands were assassinated. It turned out that many of those on the assassination list had been put there by South Vietnamese officials for personal reasons such as erasing gambling debts or resolving family quarrels. This was one of the main reasons why the findings of the Church Committee report resulted in Presidential Executive Order 12333.

Things do not seem to have changed greatly since then. Philip Alston reports that during a mission to Afghanistan he found out how hard it was for forces on the ground to obtain accurate information. ‘Testimony from witnesses and victims’ family members showed that international forces were often too uninformed of local practices, or too credulous in interpreting information, to be able to arrive at a reliable understanding of a situation.’[38] He suggests that, ‘States must, therefore, ensure that they have in place the procedural safeguards necessary to ensure that intelligence on which targeting decisions are made is accurate and verifiable.’[39]

It could be argued that if the precision and visualisation afforded by RPA is so much greater than that afforded to conventional fighter pilots or high altitude bombers, then remote pilots or their commanders should be more accountable for civilian casualties. In fact, Human Rights Watch made that case in its 2009 report, Precisely Wrong, about six Israeli drone strikes in Gaza that resulted in 26 civilian deaths, including 8 children.[40]

Whichever way you look at it, there is absolutely no reason to believe that there will be greater accuracy if we make targeting autonomous and every reason to believe that it will be considerably worse. Given the number of civilian deaths that occur on a regular basis from drone strikes with humans watching high resolution monitors, why would we believe that machines could do it as well or better without humans? To do so is to exhibit an unrealistic blind faith in automation.

Conclusion

We started out by discussing some of the limitations of autonomous technology in terms of sensing, discrimination, reasoning and calculating proportionality. These suggest that it is premature to initiate the deployment of autonomous target selection and to automate the application of lethal force. The suggested military advantages of autonomous robots over remotely controlled ones make them appear considerably superior. However, the four lessons (and there are many more) from the current use of drones in terms of moral disengagement, targeted killings, the expansion of the battle space and the illusion of accuracy suggest that this could be at the cost of sacrificing or stretching international humanitarian law. Moreover, the military advantages may be very short term. The technology is proliferating rapidly, with more than 50 countries already having access to military robotics, and it may not be long before we see fast paced warfare with autonomous craft loaded with multiple weapons systems.

In such circumstances of military necessity, will countries slow the pace of battle by keeping a human in the loop to make decisions about who to kill? There has been no international discussion about these issues and no discussion about arms control. Yet no one knows how all the complex algorithms, working at faster than human speed, will interact and what devastation they may cause.


[*] University of Sheffield, UK.

[1] I have personally read valid robotics reports for each of the following countries and there may be several more: Australia, Austria, Brazil, Bulgaria, Canada, Chile, China, Colombia, Croatia, Czech Republic, Ecuador, Finland, France, Germany, Greece, Hungary, India, Indonesia, Iran, Israel, Italy, Japan, Jordan, Lebanon, Malaysia, Mexico, Netherlands, New Zealand, Norway, Pakistan, Peru, Philippines, Poland, Romania, Russia, Serbia, Singapore, South Africa, South Korea, Spain, Sweden, Switzerland, Thailand, Taiwan, Tunisia, Turkey, United Arab Emirates, United Kingdom, USA, Vietnam.

[2] US Department of the Navy, The Navy Unmanned Undersea Vehicle (UUV) Master Plan (9 November 2004); US Department of Defense, Office of the Secretary of Defence, Unmanned Aircraft Systems Roadmap 2005-2030 (2005); US Office of the Undersecretary of Defense, Joint Robotics Program Master Plan FY 2005, LSD (AT&L) Defense Systems/Land Warfare and Munitions; US Department of Defense, Office of the Secretary of Defense, Unmanned Systems Roadmap 2007-2032 (2007); United States Air Force, Unmanned Aircraft Systems Flight Plan 2009-2047 (18 May 2009).

[3] United Kingdom Ministry of Defence (MoD), Joint Doctrine Note 2/11, The UK Approach to Unmanned Aircraft Systems (30 March 2011) (‘Joint Doctrine Note’).

[4] Noel Sharkey, ‘Cassandra or the False Prophet of Doom: AI Robots and War’ (2008) 23(4) IEEE Intelligent Systems 14.

[5] MoD, above n 3, 2-3.

[6] Ibid 6-12.

[7] US Office of the Undersecretary of Defense, above n 2.

[8] H Huang, J Albus, E Messina, R Wade and W English, ‘Specifying autonomy levels for unmanned systems: Interim report’ (SPIE Defense and Security Symposium, Orlando, Florida, 2004).

[9] United States Air Force, above n 2, 41.

[10] Noel Sharkey, ‘Automated Killers and the Computer Profession’ (2007) 40(11) IEEE Computer 124; Noel Sharkey, ‘Grounds for Discrimination: Autonomous Robot Weapons’ (2008) 11(2) RUSI Defence Systems 86; Noel Sharkey, ‘The Ethical Frontiers of Robotics’ (2008) 322(5909) Science 1800; Noel Sharkey, ‘Weapons of Indiscriminate Lethality’ (2009) 1/09 FIfF Kommunikation 26.

[11] Noel Sharkey, ‘Death Strikes from the Sky: The Calculus of Proportionality’ (2009) 28 IEEE Science and Society 16.

[12] Committee on Autonomous Vehicles in Support of Naval Operations National Research Council, Autonomous Vehicles in Support of Naval Operations (National Academies Press, 2005).

[13] US Department of Defense, ‘DoD Press Briefing with Mr Weatherington from the Pentagon Briefing Room’ (News Transcript, 18 December 2007) <http://www.defense.gov/transcripts/transcript.aspx?transcriptid=4108> .

[14] Noel Sharkey, ‘Saying No! to Lethal Autonomous Targeting’ (2010) 9(4) Journal of Military Ethics 369.

[15] A Daddis, ‘Understanding Fear’s Effect on Unit Effectiveness’ (July-August, 2004) Military Review 22.

[16] D Grossman, On Killing: The Psychological Cost of Learning to Kill in War and Society (Little, Brown and Co, 1995).

[17] L Royakkers, and R van Est, ‘The cubicle warrior: the marionette of digitalized warfare’ (2010) 12 Ethics and Information Technology 289.

[18] P W Singer, Wired for War: The Robotics Revolution and Conflict in the 21st Century (Penguin Press, 2009).

[19] Scott Lindlaw, ‘UAV operators suffer war stress’, Air Force Times (online), 8 August 2008 <http://www.airforcetimes.com/news/2008/08/ap_remote_stress_080708>.

[20] M Martin and C W Sasser, Predator: The Remote-Control Air War over Iraq and Afghanistan (Zenith Press, 2010).

[21] Israel may have been using armed drones for longer but they denied this for several years despite eyewitness testimony. It cannot be verified here.

[22] Nick Turse, ‘Drone surge: Today, tomorrow and 2047’, Asia Times (online), 26 January 2010 <http://www.atimes.com/atimes/South_Asia/LA26Df01.html> .

[23] The Year of the Drones (31 May 2011) New America Foundation <http://counterterrorism.newamerica.net/drones> .

[24] Ian S Livingston and Michael O’Hanlon, Pakistan Index (31 May 2011) Brookings <http://www.brookings.edu/~/media/Files/Programs/FP/pakistan%20index/index.pdf> .

[25] H W Parks, Memorandum on Executive Order 12333 (Reproduction Department of the Army, Office of the Judge Advocate General of the Army, 1989).

[26] Ibid.

[27] S M Hersh, ‘Manhunt: The Bush administration’s new strategy in the war against terrorism’, New Yorker, 23 December 2002, 66.

[28] Article 51 of the United Nations (UN) Charter reads: ‘Nothing in the present Charter shall impair the inherent right of individual or collective self-defence if an armed attack occurs against a Member of the United Nations, until the Security Council has taken measures necessary to maintain international peace and security. Measures taken by Members in the exercise of this right of self-defence shall be immediately reported to the Security Council and shall not in any way affect the authority and responsibility of the Security Council under the present Charter to take at any time such action as it deems necessary in order to maintain or restore international peace and security.’

[29] Harold Koh, ‘The Obama Administration and International Law’ (Speech delivered at the Annual Meeting of the American Society of International Law, Washington DC, 25 March 2010).

[30] Kenneth Anderson, Submission to US House of Representatives Committee on Oversight and Government Reform Subcommittee on National Security and Foreign Affairs, Subcommittee Hearing, Drones II, 28 April 2010, [20].

[31] Leon Panetta, Director’s Remarks at the Pacific Council on International Policy (18 May 2009) Central Intelligence Agency <https://www.cia.gov/news-information/speeches-testimony/directors-remarks-at-pacific-council.html>.

[32] Ibid.

[33] Philip Alston, Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Addendum, Study on Targeted Killings, (28 May 2010) UN Doc A/HRC/14/24/Add.6.

[34] Pir Zubair Shah and Salman Masood, New York Times (online), 23 June 2009 <http://www.nytimes.com/2009/06/24/world/asia/24pstan.html> .

[35] Karin Brulliard, The Washington Post (online), 30 May 2010 <http://www.washingtonpost.com/wp-dyn/content/article/2010/05/29/AR2010052901390.html> .

[36] Noel Sharkey, ‘A matter of precision’ (2009) (December) Defence Management Journal 126.

[37] Jeff Stein, ‘CIA drones could be grounded by software suit’, SpyTalk, Washington Post (online), 11 October 2010 <http://voices.washingtonpost.com/spy-talk/2010/10/cia_drones_could_be_grounded_b.html>.

[38] Alston, above n 33.

[39] Ibid.

[40] Human Rights Watch, Precisely Wrong: Gaza Civilians Killed by Israeli Drone-Launched Missiles (June 2009) <http://www.hrw.org/en/reports/2009/06/30/precisely-wrong-0>.

