
Alston, Philip --- "Lethal Robotic Technologies: The Implications for Human Rights and International Humanitarian Law" [2012] JlLawInfoSci 3; (2012) 21(2) Journal of Law, Information and Science 35


Lethal Robotic Technologies: The Implications for Human Rights and International Humanitarian Law

COMMENT BY PHILIP ALSTON[*]

1 Introduction

Remote-controlled aerial weapons systems and their ground-based equivalents are already commonplace. Those best known to the general public are the unmanned aerial vehicles, or drones, whose use has grown exponentially in recent years.[1] The next major development is the advent of armed, robotic weapons systems that can operate more or less autonomously. Over the past decade, the number and type of unmanned vehicle systems (UVS) developed for, and deployed in, armed conflict and law-enforcement contexts have grown at an astonishing pace. The speed, reach, capabilities and automation of robotic systems are all rapidly increasing. Unmanned technologies already in use or currently in advanced stages of development — including unmanned airplanes, helicopters, aquatic and ground vehicles[2] — can be controlled remotely to carry out a wide array of tasks: surveillance, reconnaissance, checkpoint security, neutralisation of improvised explosive devices, biological or chemical weapon sensing, removal of debris, search and rescue, street patrols, and more.[3] They can also be equipped with weapons to be used against targets or in self-defence. Some of these technologies are semi-automated, and can, for example, land, take off, fly, or patrol without human control. Robotic sentries, including towers equipped with surveillance capacity and machine guns, are in use at the borders of some countries. In the very near future, the technology will exist to create robots capable of targeting and killing either with minimal human involvement or without the need for any direct human control or authorisation.

Some of this technology is either unambiguously beneficial or can be used to clearly positive effect, including, most importantly, saving the lives of civilians by making the targeting of combatants more accurate and reducing collateral damage, and limiting military personnel casualties by reducing their battleground presence and exposure. However, the rapid growth of these technologies, especially those with lethal capacities and those with decreased levels of human control, raises serious concerns that have been the subject of surprisingly little attention on the part of human rights or humanitarian actors, although some military lawyers, philosophers, ethicists and roboticists have begun to address some of the issues.[4] The general lack of international attention to this issue is understandable. Other humanitarian issues — human-induced starvation in Somalia, killing and sexual violence in the Democratic Republic of the Congo, drug and gang-related killings in Mexico, or the unfolding revolutions of the Arab Spring — seem far more immediately pressing. Moreover, the resources, time, and staffing capacities in the United Nations, non-governmental organisations and think tanks focusing on this range of issues are always stretched, and their capacities to think pro-actively are accordingly limited. In addition, until such weapons are actually deployed, anything that smacks of science fiction seems more at home in the movies than in analyses of existing governmental actions or policies. The extensive attention devoted by both specialists and the media to the challenges posed by cyberwarfare, and to elaborating the rules that govern appropriate responses to attacks on the internet and on militarily or strategically important computer hardware systems, stands in marked contrast to the lack of interest in robotic technologies.[5]

It is striking to recall that the earliest thinking done about these issues put human rights concerns at the forefront. As early as the 1930s, the science fiction writer Isaac Asimov had begun to formulate what came to be called his Three Laws of Robotics. He expressed these in the following terms in a 1942 short story:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.[6]

In his later writings he added a fourth law, to which he gave primacy over the others by naming it the zeroth law:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.[7]

Despite this very early flagging of the potential for robotics to wreak havoc in what we now understand to be human rights terms, today’s human rights community has, at least until very recently, tended to view advances in robotics as an exotic topic that does not need to be addressed until the relevant technologies are actually in use. Several factors might help to account for this reticence. First, much of the information about these developments remains confined to military research establishments and specialist scientific literature. Moreover, much of it remains secret, and often the only information available to researchers comes from leaks to the media that provide enough detail to generate concern but not nearly enough to provide a foundation for further exploration. Second, understanding the technologies requires expertise beyond that of most human rights experts. At best, these might be seen as issues that fall within the domain of those dealing with the laws of armed conflict, rather than human rights. While the past couple of decades have seen a remarkable degree of integration of human rights and humanitarian law, even this convergence is far from complete and the overall field still tends to rely to a surprising extent upon reconciling different sources of expertise rather than on individual experts who have thoroughly mastered both areas of law. It is hardly surprising, then, that an understanding of robotics and other new technologies has yet to be combined with the requisite legal expertise. Third, the attractions of greater use of robotic technologies greatly overshadow, in the public mind, the potential disadvantages. The minimisation of casualties on the side of the technology wielders, and the prospect of ever-greater precision and discernment, hold obvious appeal. And finally, there is a North-South dimension, in that the North has the money and the technical know-how to develop the technologies, while many of the negative consequences of their use will fall much more heavily on poorer countries in the South.

But the human rights community is not alone in being slow to focus on the ethical dimensions of robotic technologies. A recent survey asked the top 25 stakeholders in the leading professional trade group dealing with robotics, the Association for Unmanned Vehicle Systems International (AUVSI),[8] whether they could foresee ‘any social, ethical, or moral problems’ emanating from the development of unmanned systems. Sixty per cent of the respondents answered ‘No.’[9] Additional obstacles to research into the ethical or human rights dimensions of robotics include the extent to which existing funding is dominated by defence-related initiatives of governments or of private companies with a stake in the outcome, and a reluctance on the part of foundations to fund research in an area that still appears to them to be speculative or at least futuristic in nature.[10]

The analysis that follows is predicated on three principal assumptions. The first is that the new robotic technologies are developing very rapidly and that the unmanned, lethal-weapons-carrying vehicles currently in operation will, before very long, be operating on an autonomous basis in relation to many and perhaps most of their key functions, including in particular the decision to actually deploy lethal force in a given situation. The second is that these technologies have very important ramifications for human rights in general and for the right to life in particular, and that they raise issues that need to be addressed urgently, before it is too late. The third is that, although a large part of the research and technological innovation currently being undertaken is driven by military and related concerns, there is no inherent reason why human rights and humanitarian law considerations cannot be proactively factored into the design and operationalisation of the new technologies. But this will not happen unless and until the human rights community presses the key public and private actors to make sure it does; and because the human rights dimensions cannot be addressed in isolation, the international community urgently needs to address the legal, political, ethical and moral implications of the development of lethal robotic technologies.

2 Trends in the Development of Lethal Robotic Technology

The use of unmanned vehicle systems, including for lethal purposes, in the context of war is by no means confined to the twenty-first century.[11] As long ago as the Second World War, Germany used bombs attached to tank treads which were detonated by remote control, while the United States used radio-piloted bomber aircraft packed with explosives. The perceived need for intensive aerial reconnaissance, first of the Soviet Union and later of China, also led the US Air Force, in close collaboration with the Central Intelligence Agency, to pour billions of dollars into research on unmanned aerial vehicles starting in the early 1960s.[12] While the subsequent history was one of competition among and within agencies, especially over the priority to be given to unmanned vehicles, manned aircraft and satellites, work on UVSs has continued more or less systematically for over half a century. But despite the extent of these precedents, the use of unmanned systems has dramatically increased since the attacks of 11 September 2001, and particularly since the start of the conflicts in Afghanistan and Iraq. These developments have been both accompanied and driven by an enormous growth in military research and development focused specifically on these systems. Military experts have noted that the two conflicts are serving as real-time laboratories of ‘extraordinary development’ for ‘robotic warfare’.[13]

There are various ways of classifying and characterising UVSs. In relation to airborne UVSs, the United States Department of Defense tends to speak of Unmanned Aerial Systems (UASs), while its British counterpart, the Ministry of Defence, speaks of ‘unmanned aircraft’ but recommends that, in public or media discourse, the preferred terms are Remotely Piloted Aircraft or Remotely Piloted Aircraft System, with the latter describing the overall delivery system.[14]

Within this overall category, the most common classifications are according to function, size, or level of automation. In terms of function, there is a distinction between those UASs designed primarily for reconnaissance, force application or protection activities, although such divisions are very flexible given the multi-purpose nature of many of the vehicles. In terms of size, a distinction is sometimes drawn between three different classes, primarily according to weight. Class I systems, weighing less than 150 kg, are generally ‘employed at the small unit level or for force protection/base security’.[15] Class II systems, weighing between 150 kg and 600 kg, are ‘often catapult-launched, mobile systems that usually support brigade-level, and below, Intelligence, Surveillance, Target Acquisition and Reconnaissance requirements’, and generally operate at altitudes below 10,000 feet.[16] Class III systems, weighing more than 600 kg, operate at high altitude and have the greatest range, endurance and transit speeds. They are used for ‘broad area surveillance and penetrating attacks.’[17]

In terms of automation levels, experts generally distinguish three separate types of systems depending on the degree of human involvement in their operation and functioning. The most common are remotely controlled vehicles that, although unmanned, are nevertheless fully controlled by human operators. These so-called ‘man-in-the-loop’ systems can be operated from nearby or at a great distance. Thus some unmanned aerial vehicles are controlled by operators based thousands of miles from the scene of the activity, while others are operated at much closer range. The second category, ‘man-on-the-loop’ vehicles, consists of automated systems that carry out programmed activities without needing any further human involvement once they have been launched. The third category consists of autonomous vehicles (‘man-out-of-the-loop’) which operate without human inputs in carrying out the range of functions for which they have been pre-programmed. But it is this third category that gives rise to the most significant definitional disagreements. Noel Sharkey’s contribution in this volume describes the different classifications used by the United States Navy and the United States Air Force, but both have in common a definition that requires ‘autonomous’ systems to have some attributes of human intelligence.[18] In contrast, the British Ministry of Defence offers what it describes as a simple definition:

An autonomous system is capable of understanding higher level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present. Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be.[19]

The same report adds that an autonomous system ‘must be capable of achieving the same level of situational understanding as a human.’[20] While the British definition will be difficult to satisfy, the United States approaches are eminently achievable. Thus the 2009 US Air Force review of the future reflects with confidence upon the prospect of ‘a fully autonomous capability, [involving] swarming, and Hypersonic technology to put the enemy off balance by being able to almost instantaneously create effects throughout the battlespace.’ Linked to this will be ‘[t]echnologies to perform auto air refuelling, automated maintenance, automatic target engagement, hypersonic flight, and swarming’. The report comments that the ‘end result would be a revolution in the roles of humans in air warfare’.[21]

While various countries are using and developing robotic technologies, the clear leader in the field continues to be the United States. In 2001 the US Congress set two ambitious goals for the military in relation to the use of unmanned, remotely controlled technology. It required the military to seek to ensure that, by 2010, one-third of the ‘operational deep strike force aircraft fleet’ would be unmanned and that by 2015, one-third of the operational ground combat vehicles would also be unmanned.[22] Between 2000 and 2008, the number of United States unmanned aircraft systems increased from less than 50 to over 6,000.[23] Similarly, the number of unmanned ground vehicles deployed by the United States Department of Defense increased from less than 100 in 2001 to nearly 4,400 by 2007,[24] and 8,000 by 2011.[25] The military budget estimates for the period 2009-2013 foresee expenditures of $18.9 billion on UVS activities.[26]

Private industry is also a key driver in the field as illustrated by the presence of around 100 companies working on unmanned systems technologies at one of the major defence industry conferences in September 2011.[27]

At present, the robotic weapons technologies most in use are systems that are remotely, but directly, operated by a human being. A well-known example is the ‘BomBot’, a vehicle which can be driven by remote control to an improvised explosive device, drop an explosive charge on the device, and then be driven away before the charge is detonated.[28] Another example is the Special Weapons Observation Reconnaissance Detection System (SWORDS) and its successor, the Modular Advanced Armed Robotic System (MAARS). SWORDS is a small robot that can be mounted with almost any weapon that weighs less than 300 pounds, including machine guns, rifles, grenade launchers and rocket launchers, and can travel in a variety of terrains.[29] It can be operated by remote control and video cameras from up to two miles away, and be used for street patrols and checkpoint security as well as to guard posts. MAARS is similar, but can carry more powerful weapons and can also be mounted with less-than-lethal weapons, such as tear gas.[30]

Sentry systems also exist which can patrol automatically around a sensitive storage facility or a base. The Mobile Detection Assessment and Response System (MDARS), for example, is a small robotic patrol force on wheels, designed to relieve personnel of the repetitive and sometimes dangerous task of patrolling exterior areas, which can autonomously perform random patrols.[31]

In terms of aerial weaponry, the level of automation that generally exists in currently deployed systems is limited to the ability of, for example, an unmanned combat aerial vehicle or a laser-guided bomb to be programmed to take off, navigate or de-ice by itself, or with only human monitoring (as opposed to control). In June 2010, trials were held in which helicopters carried out fully autonomous flights,[32] and later the same year test aircraft at Fort Benning, Georgia, autonomously identified targets and took the programmed action.[33]

For currently existing systems that have lethal capability, the choice of target and the decision to fire the weapon are made by human beings, and it is a human being who actually fires the weapon, albeit by remote control. With such weapons systems, there is, in military terminology, a ‘man in the loop’, so that the determination to use lethal force, as with any other kind of weapon, lies with the operator and the chain of command. Examples of such semi-automated weapons systems currently in use include Predator and Reaper drones[34] deployed in the conflicts in Iraq and Afghanistan by the United States and the United Kingdom, and Israeli Harpy drones. Systems that would replace this generation of technology include the Sky Warrior, an unmanned aircraft system capable of taking off and landing automatically, with the capacity to carry and fire four Hellfire missiles.[35]

‘Swarm’ technologies are also being developed to enable a small number of military personnel to control a large number of machines remotely. One system under development envisions that a single operator will monitor a group of semi-autonomous aerial robotic weapons systems through a wireless network that connects each robot to others and to the operator. Each robot within a ‘swarm’ would fly autonomously to a designated area, and ‘detect’ threats and targets through the use of artificial intelligence, sensory information and image processing.[36]

Robotic technology is also becoming faster and more capable of increasingly rapid response. Military strategic documents predict the development of technology that would reduce the time needed for machines to respond to a perceived threat with lethal force to micro- or nanoseconds. Increasingly, humans will no longer be ‘in the loop’ but rather ‘on the loop’ — monitoring the execution of certain decisions.[37] The speed of the envisioned technology would be enhanced by networking among unmanned machines which would be able to ‘perceive and act’ faster than humans can.

From a military perspective, autonomous robots have important attractions. Building on Sharkey’s analysis,[38] there are various reasons why they may be preferred to the current generation of weapons. First, autonomous systems may be less expensive to manufacture and require fewer personnel to run. Second, reliance upon satellite or radio links to pilot remote-controlled vehicles makes them vulnerable to interference with the relevant frequencies. Third, there are economies of scale in operating multiple autonomous vehicles at the same time through the same system. Fourth, the 1.5-second delay involved in communicating directly through remote-control technology is problematic in the context of air-to-air combat. An autonomous machine, on the other hand, can make rapid decisions and implement them instantly.

To date, armed robotic systems operating on any more than a semi-automated basis have not been used against targets. Senior military personnel in key user states such as the United Kingdom have suggested that humans will, for the foreseeable future, remain in the loop on any decisions to use lethal force.[39] This is also the line generally taken by the United States Department of Defense. A major recent review concluded that, for a significant period into the future, the decision to pull the trigger or launch a missile from an unmanned system will not be fully automated; it noted, however, that many aspects of the firing sequence will be, even though the final decision to fire is not likely to be fully automated ‘until legal, rules of engagement, and safety concerns have all been thoroughly examined and resolved’.[40] But these official policy statements appear in a different light when one listens to the views of the personnel more closely involved in these programs, such as when they are addressing enthusiastic audiences in the context of robotics industry gatherings. For example, Lt Gen Rick Lynch, commander of the US Army Installation Management Command, stated in August 2011 that he was ‘an advocate of autonomous vehicle technology. ... There’s a place on the battlefield for tele-operated systems, [but] we have to continue to advocate for pursuit of autonomous vehicle technology.’[41] And most of those working in the industry seem to be firmly convinced that the advent of autonomous lethal robotic systems is well under way and that it is really only a matter of time before autonomous engagements of targets take place on the battlefield.[42] A number of countries are already reportedly deploying or developing systems with the capacity to take humans out of the lethal decision-making loop. For example:

• Since approximately 2007, Israel has deployed remote-controlled 7.62 mm machine-guns mounted on watchtowers every few hundred yards along its border with Gaza as part of its ‘Sentry Tech’ weapons system, also known as ‘Spot and Shoot’ or in Hebrew, ‘Roeh-Yoreh’ (Sees-Fires).[43] This ‘robotic sniper’ system locates potential targets through sensors, transmits information to an operations command centre where a soldier can locate and track the target and shoot to kill.[44] Dozens of alleged ‘terrorists’ have been shot with the Sentry Tech system.[45] The first reported killing of an individual with Sentry Tech appears to have taken place during Operation Cast Lead in December 2008.[46] Two alleged ‘terrorists’ were killed by the system in December 2009,[47] and another person was killed and four injured by Sentry Tech in March 2010; according to media accounts it is unclear whether the dead and injured were farmers or gunmen.[48] Future plans envision a ‘closed loop’ system, in which no human intervention would be required in the identification, targeting and kill process.[49]

• The Republic of Korea has developed the SGR-1, an unmanned gun tower that, beginning in July 2010, has been performing sentry duty on an experimental basis in the demilitarised zone between the Democratic People’s Republic of Korea and the Republic of Korea.[50] The SGR-1 uses heat and motion detectors and pattern recognition algorithms to sense possible intruders; it can alert remotely located command centre operators who can use the SGR-1’s audio and video communications system to assess the threat and make the decision to fire the robot’s 5.5 millimetre machine gun.[51] Media accounts indicate that, although the decision to use lethal force is currently made by human commanders, the robot has been equipped with the capacity to fire on its own.[52]

Such automated technologies are becoming increasingly sophisticated, and artificial intelligence reasoning and decision-making abilities are actively being researched and receive significant funding. States’ militaries and defence industry developers are working towards ‘fully autonomous capability’, such that technological advances in artificial intelligence will enable unmanned aerial vehicles to make and execute complex decisions, including the identification of human targets and the ability to kill them.[53] A 2003 study commissioned by the United States Joint Forces Command reportedly predicted the development, by 2015, of artificial intelligence and automatic target recognition that will give robots the ability to hunt down and kill the enemy with limited human supervision.[54] Among the envisioned uses for fully automated weapons systems are: crowd control systems that range from non-lethal through to lethal; dismounted offensive operations; and armed reconnaissance and assault operations.[55] One already developed ground robot, the Guardium UGV, is a high-speed vehicle that can be weaponised and used for combat support as well as border patrols and other security missions, such as perimeter security at airports and power plants.[56]

The United States has also recently stepped up its commitment to robotics through several initiatives that are likely to expand and expedite existing trends in this area. In June 2011 President Obama launched the National Robotics Initiative in an effort to ‘accelerate the development and use of robots in the United States’ on the basis of a partnership among various federal government agencies including the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), the National Institutes of Health (NIH), and the US Department of Agriculture (USDA). The objective is ‘to advance the capability and usability’ of next generation robotics systems and ‘to encourage existing and new communities to focus on innovative application areas’ by inter alia addressing ‘the entire life cycle from fundamental research and development to industry manufacturing and deployment.’[57] At the same time, Obama launched the Advanced Manufacturing Partnership that will invest $500 million in manufacturing and emerging technologies, and commits at least $70 million of that amount to research in the robotics and unmanned systems industries. While much of the official rhetoric focuses on the peaceful uses of such technologies, the President effectively acknowledged the double-edged sword involved when he wryly observed at the time that ‘[t]he robots you make here seem peaceful enough for now.’[58]

While the United States is the central player in both the research and deployment of robotic military technologies, the survey above illustrates the fact that it is by no means alone. Various other member States of the Organisation for Economic Co-operation and Development (OECD) have also developed or are developing unmanned systems, including Australia, Canada, France, Germany, Israel, the Republic of Korea and the United Kingdom.[59] In addition, over fifty countries have reportedly either purchased surveillance drones or begun their own programs. Given the flexibility and adaptability of drones, those designed for surveillance can readily be used for lethal activities as well. In terms of independent initiatives, Iran has already showcased an unmanned aerial vehicle with a range of 1,000 km (620 miles), capable of carrying four cruise missiles.[60] China has begun a major program which yielded 25 different models of unmanned aerial vehicles on display in November 2010. As the Wall Street Journal reported at the time, ‘the large number of UAVs on display illustrates clearly that China is investing considerable time and money to develop drone technology, and is actively promoting its products on the international market.’[61] These include ‘attack drones’ being sold to countries such as Pakistan.[62] And more recently, non-state actors have begun to assert their capacity to make use of robotic technologies. The Ansar al-Islam militant group in Iraq has released an online video demonstrating its ability to produce and deploy crude robotic technology.[63]

3 Concerns

Despite the astonishing speed at which the robotic technologies have developed either for lethal purposes or with clear lethal capacity, the public debate over the legal, ethical and moral issues arising from their use is at a very early stage, and little consideration has been given to the international legal framework necessary for dealing with the resulting issues.

There are many possible advantages flowing from the use of existing and developing technologies.[64] They may be able to act as ‘force multipliers’, greatly expanding the capacity or reach of a military, and robots may be sacrificed or sent into hazardous situations that are too risky for human soldiers. They may be less economically costly than deploying humans, and, indeed, their destruction does not result in the ending of irreplaceable human life. As noted in a United States Government report, more and more robots are being destroyed or damaged in combat instead of servicemen and women being killed or wounded, and this is the preferred outcome.[65] Robots may be able to use lethal force more conservatively than humans (because they do not need to have self-preservation as a foremost drive),[66] and their actions and responses may be faster, based on information processed from more sources, and more accurate, enabling them to reduce collateral damage and other mistakes made by humans. They may also be able to avoid mistakes or harm resulting from human emotions or states, such as fear, tiredness, and the desire for revenge, and, to the extent that machines are equipped with the ability to record operations and monitor compliance with legal requirements, they may increase military transparency and accountability.

But these hypothetical advantages may not necessarily be reflected in the design or programming of actual technologies. And the reality, to date, is that technological developments have far outpaced even discussions of the humanitarian and human rights implications of the deployment of lethal robotic technologies.

Kenneth Anderson has characterised the turn to robotic technologies by the United States as one of the unintended consequences of the increasing reach and effectiveness of the evolving regime of international criminal law. In his view, the emphasis on robotics has been driven, in significant part, by the need to respond to what he sees as the loss of reciprocity in international humanitarian law, especially in the context of asymmetric warfare. He gives the two standard examples to illustrate these trends: the use of human shields, and the tendency of some belligerent forces to conceal themselves in the midst of civilian populations. In terms of the latter problem, he seems not to be referring to the use of CIA or other Special Operations forces operating outside of uniform in civilian dominated areas, but rather to the practices of groups such as the Taliban and al-Qaeda with whom the United States is engaged in an armed conflict. Although he does not very clearly spell out the causal chain that explains how it is that international criminal law has generated ‘pressures to create whole new battlefield technologies’, there seem to be two elements.[67] The first is that the new technologies facilitate more precise and accurate targeting and thus diminish the likelihood of allegations that the principles of distinction or proportionality have been violated. The second element consists of ‘emerging interpretations of law governing detention, interrogation, and rendition’ which create strong incentives to kill rather than capture individuals who are suspected of participating in hostilities. He explains this link by observing that ‘targeted killing via a stand-off robotic platform [is] legally less messy than the problems of detention.’[68] To the extent that there is an implication that the applicable law and the principles governing both the lethal use of force and targeting are more easily circumvented by using remote technology, this justification for the move to such approaches should raise alarm bells within the humanitarian community.

Many concerns arise out of the move to robotic technologies for lethal purposes, and the following list is no more than an initial survey of the issues that require far more in-depth examination in the years ahead.[69]

3.1 The problem of definitions

As noted above,[70] an initial hurdle in addressing the legal and ethical ramifications of these technologies concerns the lack of a uniform set of definitions of key terms such as ‘autonomous’, ‘autonomy’ or ‘robots’. Uses of these terms vary significantly among the militaries of different States, as well as among defence industry personnel, academics and civilians.[71] Confusion can result, for example, from differences over whether ‘autonomous’ describes the ability of a machine to act in accordance with moral and ethical reasoning ability, or whether it might simply refer to the ability to take action independent of human control (eg a programmed drone that can take off and land without human direction; a thermometer that registers temperatures).[72] As the international community begins to debate robotic technologies, it will need to at least seek a shared understanding of the systems and their characteristics.

3.2 International and criminal responsibility

One of the most important issues flowing from increased automation is the question of responsibility for civilian casualties or other harm or violations of the laws of war. As analysed at length in various prior reports by the Special Rapporteur on extrajudicial, summary or arbitrary executions,[73] international human rights and humanitarian law, as applied in the context of armed conflict or law enforcement, set standards that are designed to protect or minimise harm to civilians, and set limits on the use of force by States’ militaries, police or other armed forces. When these limits are violated, States may be internationally responsible for the wrongs committed, and officials or others may bear individual criminal responsibility. Both the international human rights and humanitarian law frameworks are predicated on the fundamental premise that they bind States and individuals, and seek to hold them to account. Where robots are operated by remote control and the ultimate decision to use lethal force is made by humans, individual and command responsibility for any resulting harm is generally readily determinable.

However, as automation increases, the frameworks of State and individual responsibility become increasingly difficult to apply. Who is responsible if a robot kills civilians in violation of applicable international law? The programmer who designed the program governing the robot’s actions, any military officials who may have approved the programming, a human commander assigned responsibility for that robot, a soldier who might have exercised oversight but opted not to do so? What if the killing is attributed to a malfunction of some sort? Is the government which deployed the robot responsible, or the principal engineer or manufacturer, or the individual who bore ultimate responsibility for programming, or someone else? What level of supervision does a human need to exercise over a robot in order to be responsible for its actions? Are circumstances conceivable in which robots could legitimately be programmed to act in violation of the relevant international law, or conversely, could they be programmed to automatically override instructions that they consider, under the circumstances, to be a violation of that law? Are there situations in which it would be appropriate to conclude that no individual should be held accountable, despite the clear fact that unlawful actions have led to civilian or other deaths?

3.3 Ethical dimensions

Some argue that robots should never be fully autonomous — that it would be unethical to permit robots to autonomously kill, because no human would clearly be responsible, and the entire framework of accountability would break down. Others, such as Ronald Arkin, argue that it will be possible to design ethical systems of responsibility.[74] In his view, robots could be better ethical decision-makers than humans because they lack emotion and fear, and could be programmed to ensure compliance with humanitarian law standards and applicable rules of engagement. Still others respond that such thinking is predicated on unproven assumptions about the nature of rules and how robots may be programmed to understand them, and that it underestimates the extent to which value systems and ethics inform the application of the rules in ways that robots cannot replicate.[75] In order to understand how to apportion responsibility for violations of the law, say some ethicists, more research needs to be done both to understand how and why humans themselves decide to follow the law and ethical rules, as well as the extent to which robotic programming mimics or differs from human decision-making.

Perhaps most troublingly from an international law perspective, some have indicated that unmanned systems are not being designed to support investigation, which raises additional transparency and accountability concerns. They do not archive information. They leave open the possibility of soldiers pointing to the machine, declaring, ‘I’m not responsible — the machine is’.[76] In order to comport with States’ international law obligation to provide accountability for the use of lethal force, any unmanned weapons system, regardless of the degree of automation, must not hinder — and indeed should facilitate — States’ ability to investigate wrongful conduct.

3.4 Safeguards and standards for deployment

Another significant problem concerns the ability of robots to comply with human rights and humanitarian law, and the standards relevant to programming and the development of technology for deployment. What standards must be met, and what testing conducted, before armed machines are able to conduct crowd control, patrol in civilian-populated areas, or be enabled to decide to target an alleged combatant? While any kind of technology has the potential to malfunction and result in lethal error, the particular concern with the rapid development of robotic weapons is whether — and the extent to which — technical safeguards are built into the systems to prevent the inadvertent or otherwise wrongful or mistaken use of lethal force. What programming or other technical safeguards have been, and should be, put in place to ensure that the precautions required by international humanitarian law are taken? What programming safeguards would international humanitarian law require?

In part this debate will revolve around the interpretation given to the requirements specified in Article 36 of Additional Protocol I to the Geneva Conventions, which provides:

In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.[77]

Since the United States is not a party to this treaty, the issue thus becomes whether its requirements have become part of customary law. Troublingly, military and civilian experts acknowledge that robotic development in general is being driven by the defence industry, and that few systems in the field have been subjected to rigorous or standardised testing or experimentation.[78] The United States military, for example, admits that in the interests of saving military lives in the conflicts in Iraq and Afghanistan, robotic systems may be deployed without the requisite testing for whether those systems are, in fact, reliable.[79]

3.5 The principle of distinction

In the context of armed conflict generally, and especially in urban areas, military personnel often have difficulty discriminating between those who may be lawfully targeted — combatants or those directly participating in hostilities — and civilians, who may not. Such decision-making requires the exercise of judgement, sometimes in rapidly changing circumstances and in a context which is not readily susceptible of categorisation, as to whether the applicable legal requirements of necessity and proportionality are met, and whether all appropriate precautions have been taken. It is not clear what criteria would be used to determine whether a robot is ever capable of making such decisions in the manner required, or how to evaluate the programs that might purport to have integrated all such considerations into a given set of instructions to guide a robotic technology.

In addition, there is the concern that the development of lethal capacity has outpaced the development of safeguards against technical or communications error. For example, military strategic planning documents caution that it ‘may be technically feasible’ for unmanned aerial systems to have nuclear strike capability before safeguards are developed for the systems, and that ethical discussions and policy decisions must take place in the near term in order to guide the development of future unmanned aerial systems capabilities, rather than allowing the development to take its own path.[80]

There are also questions about how and when the benefits of speedy processing of intelligence and other data are outweighed by the risks posed by hasty decision-making. Man-on-the-loop systems, for instance, raise the concern that technology is being developed that is beyond humans’ capacity to supervise effectively and in accordance with applicable law. With respect to swarm technologies, some research has found that human operators’ performance levels are reduced by an average of 50 per cent when they control even two unmanned aircraft systems at a time.[81] The research suggests that the possibility of lethal error rises as humans play a ‘supervisory’ role over a larger number of machines. Unless adequate precautions are taken and built into systems, the likelihood increases that mistakes will be made which will amount to clear violations of the applicable laws.

A related concern is what safeguards should or must be put in place to prevent ultimate human control of robots from being circumvented, and what safeguards can be implemented to prevent lethal robots from being hacked or used by, for example, insurgent or terrorist groups.

3.6 Civilian support

An important political consideration is whether the widespread use of robots in civilian settings, such as for law enforcement in cities, or in counter-insurgency operations, would alienate the very populations they were meant to assist. Over-reliance on technology increases the risk that policymakers and commanders will focus on the relatively easy use of armed or lethal tactics to the detriment of all the other elements necessary to end a conflict, including winning hearts and minds, and that policymakers will overestimate the ability of new technologies to achieve sustainable peace. In addition, while robots may have the benefit of not acting based on emotion, they also do not have the kind of sympathy, remorse or empathy that often appropriately tempers and informs the conduct of fighters and their commanders.

3.7 Use of force threshold and jus ad bellum considerations

To the extent that decisions about whether to go to war are limited by the prospect of the loss of the lives of military personnel, and the high economic cost of warfare, robotic armies may make it easier for policymakers to choose to enter into an armed conflict, increasing the potential for violating jus ad bellum requirements. This may be particularly the case where the other side lacks the same level of technology. Similarly, within the context of armed conflict, insofar as robots are remotely controlled by humans who are themselves in no physical danger, there is the concern that an operator’s location far from the battlefield will encourage a ‘Playstation’ mentality to fighting and killing, and the threshold at which, for example, drone operators would be willing to use force could potentially decrease. Thus, the international community should consider whether and when reduced risk to a State’s armed forces resulting from the extensive use of robotic technologies might unacceptably increase the risk to civilian populations on the opposing side. While United States commentators have tended to reject these linkages, the most recent British Ministry of Defence report asks specifically whether ‘[i]f we remove the risk of loss from the decision-makers’ calculations when considering crisis management options, do we make the use of armed force more attractive?’ In support of a potentially affirmative response it quoted a comment made by General Robert E Lee following the Battle of Fredericksburg, which involved very heavy casualties on both sides: ‘It is well that war is so terrible — otherwise we would grow too fond of it.’[82]

4 Responses to Date by the Key International Actors

I noted at the outset of this article that the key actors within the international human rights regime have tended to be reticent about tackling emerging issues especially in relation to technological developments. This reticence is on full display in relation to the development of potentially autonomous killing machines. Without seeking to present an exhaustive accounting of what has or has not been done to date, it is instructive to note the positions adopted by some of the principal actors and, in particular, governments, the United Nations, and the key NGOs, including the International Committee of the Red Cross (ICRC).

4.1 Governments and the United Nations political bodies

In October 2010, these issues were presented to the United Nations General Assembly in a report that I prepared in my capacity as UN Special Rapporteur on extrajudicial executions.[83] In the report I recommended that urgent consideration should be given to the legal, ethical and moral implications of the development and use of robotic technologies, especially but not limited to uses for warfare. I suggested that the emphasis should be not only on the challenges posed by such technological advances, but also on the ways in which proactive steps can be taken to ensure that such technologies are optimised in terms of their capacity to promote more effective compliance with international human rights and humanitarian law. The report also called for greater definitional uniformity in relation to the types of technology being developed, and underlined the need for empirical studies to better understand the human rights implications of the technologies. In addition, it called for more reflection on the question of whether lethal force should ever be permitted to be fully automated.

In response to these recommendations, which were presented in the context of a report that also dealt with other issues, the delegations of Canada,[84] Liechtenstein,[85] and Switzerland[86] addressed the issue of robotics and human rights, but only in passing. For its part, the United States reiterated its view that matters which it classified as belonging solely to armed conflicts should not be discussed in human rights forums such as the UN Human Rights Council or the UN General Assembly.[87] No substantive debate ensued and the General Assembly very clearly declined to take any action on the relevant recommendations contained in the report, despite the fact that many of the other issues identified in that and earlier reports were reflected in the resolution that was ultimately adopted in December 2010.[88]

Lacking an authorisation to pursue the issue, and in the absence of any significant civil society pressure to do something, the Office of the High Commissioner for Human Rights has yet to take any measures to address the issues surrounding robotic technologies. I will return below to the Office’s important potential role in this area.

4.2 Expert human rights bodies

For good reasons, most expert human rights bodies, whether courts or committees, tend to deal with issues in retrospect, or in other words after violations of applicable norms are alleged to have taken place. There are, however, powerful reasons why issues relating to robotic technologies should already be on the radar screens of bodies such as the European Court of Human Rights and the Human Rights Committee.

While the United States Government, almost alone, continues to contest the relevance of human rights norms in situations that the government itself characterises as involving an armed conflict, and while a handful of scholars support such views,[89] the reality is that human rights bodies are already engaged in significant ways with these issues and are insisting upon a nuanced approach.[90]

4.3 NGOs, including the ICRC

Of the major international human rights NGOs, neither Amnesty International nor Human Rights Watch (HRW) has taken up the issue of robotic technologies. HRW at least has a separate program focused on the relationship between arms and human rights, but it has tended to focus almost exclusively on the issues of landmines and cluster munitions.[91]

The most important initiative to date has been taken by a group of academics and practitioners from a range of relevant disciplines who, in 2009, created an International Committee for Robot Arms Control (ICRAC). The organisation’s mission statement indicates that its principal goal is to stimulate a wide-ranging discussion on the risks presented by the development of military robotics and the desirability of what they term an ‘arms control regime’ to reduce the resulting threat.[92] In an endeavour to spell out more clearly what this might mean in practice, the Committee organised an expert workshop in September 2010 on ‘limiting armed tele-operated and autonomous systems’. The workshop adopted, by majority vote rather than consensus, a statement that underscored the long-term risks of these developments, asserted that it is ‘unacceptable for machines to control, determine, or decide upon the application of force or violence in conflict or war’, and insisted that there should always be a human being responsible and accountable for any such decisions. The group also noted that state sovereignty is violated as much by an unmanned vehicle as it is by a manned one. It also endorsed the view that these systems help to accelerate the pace and tempo of warfare, and that such technology ‘encourages states, and non-state actors, to pursue forms of warfare that reduce the security of citizens of possessing states.’ The statement concluded by calling for the creation of a new arms control regime designed to regulate the use of such systems.[93] I shall return below to that proposal.

By far the most important NGO in this field is the International Committee of the Red Cross, which has been central at every turn in terms of developing, promoting, implementing, and monitoring standards governing the use of weapons in the context of armed conflict. To date, it has said surprisingly little about robotic technologies. In September 2011 the President of the ICRC delivered a speech in which he acknowledged the need to focus on robotic weapons systems in the future.[94] His analysis succeeded in leaving almost all positions open. He began by evincing a degree of scepticism as to whether the development of truly autonomous systems will prove possible. But he immediately went on to acknowledge that if they do come to pass, they will ‘reflect a paradigm shift and a major qualitative change in the conduct of hostilities’ as well as raising ‘a range of fundamental legal, ethical and societal issues’. He determinedly kept open the possibility that such systems might be capable of enhancing the level of protection available by noting that ‘cyber operations or the deployment of remote-controlled weapons or robots might cause fewer incidental civilian casualties and less incidental civilian damage compared to the use of conventional weapons.’ He pointed in particular to the possibility that a greater level of precaution could be possible in practice because of the nature of such weapons and the contexts in which they might be deployed. His approach thus reflected a characteristically prudent response which left open all possibilities and ended by calling for more discussion, with an important role to be played by the ICRC:

[I]t is important for the ICRC to promote the discussion of these issues, to raise attention to the necessity to assess the humanitarian impact of developing technologies, and to ensure that they are not prematurely employed under conditions where respect for the law cannot be guaranteed.[95]

The President did not suggest at any point that new legal standards or mechanisms might be needed, nor did he make any specific proposals designed to move the agenda forward. But the fact that the challenge provided by robotic technologies was acknowledged, and addressed in at least some detail, constitutes an important advance.

5 The Way Forward

The states which are at the forefront of developing lethal robotic technologies are not the ones that are going to take the initiative in addressing the legal and ethical issues within a framework governed by international law and monitored by independent organisations. Nor are the states that are the targets, or victims, of the use of such technologies likely to be well placed to lead an effort to stimulate international reflection upon the challenges that arise. As a result, the responsibility will fall inevitably upon an international actor to take the lead. The International Committee of the Red Cross is undoubtedly the key player, and it has now, somewhat belatedly, begun to explore the issues. There is, however, also an important role to be played by the United Nations as the principal proponent of international human rights law. Accordingly, it would be highly appropriate and desirable for the UN Secretary-General to convene a group of military and civilian representatives from governments, as well as leading authorities in human rights and humanitarian law, applied philosophers and ethicists, scientists and developers, to advise on measures and guidelines designed to promote the goal of accountability in relation to the development of these new technologies. The task of the group would be to consider what approaches might be adopted to ensure that such technologies will comply with applicable human rights and humanitarian law requirements. This would include: consideration of the principle that any unmanned or robotic weapons system should have the same, or better, safety standards as a comparable manned system; the spelling out of requirements for testing the reliability and performance of such technologies before their deployment; and measures designed to ensure the inclusion in the development of such weapons systems of recording systems and other technology that would permit effective investigation of and accountability for alleged wrongful uses of force.


[*] John Norton Pomeroy Professor of Law, New York University School of Law. The author was UN Special Rapporteur on extrajudicial, summary or arbitrary executions from 2004 until 2010 and this article draws on work originally done in that context. I am very grateful to Hina Shamsi and Sarah Knuckey for their assistance.

[1] See generally, Philip Alston, ‘The CIA and Targeted Killings beyond Borders’ (2011) 2 Harvard National Security Journal 283.

[2] The technical literature breaks down the different categories of UVS according to the environment in which they are used. Thus unmanned ground vehicles (UGVs) are used on land, unmanned air systems (UASs) in the air, and unmanned maritime systems (UMS) in the water. For a detailed description of the different types of systems under development by the United States military see Office of the Secretary of Defense, Department of Defense, FY 2009–2034 Unmanned Systems Integrated Roadmap, 6 April 2009, Annex H, 193, <www.acq.osd.mil/psa/docs/UMSIntegratedRoadmap2009.pdf> (‘DOD Roadmap 2009’).

[3] For a highly accessible overview see P W Singer, Wired for War: The Robotics Revolution and Conflict in the 21st Century (Penguin, 2009); see also Steve Featherstone, ‘The Coming Robot Army’, Harper’s (February 2007).

[4] See, for example, Summary of Harvard Executive Session of June 2008, Unmanned and Robotic Warfare: Issues, Options and Futures, 14 <www.boozallen.com/media/file/Unmanned_and_Robotic_Warfare.pdf> (‘2008 Harvard Session’); Ronald Arkin, Governing Lethal Behaviour in Autonomous Robots (CRC Press, 2009); Peter Asaro, ‘How Just Could a Robot War Be?’ in Adam Briggle, Katinka Waelbers and Philip Brey (eds), Current Issues in Computing and Philosophy (IOS Publishing, 2009); William H Boothby, Weapons and the Law of Armed Conflict (Oxford University Press, 2009); Jason Borenstein, ‘The Ethics of Autonomous Military Robots’ (2008) 2(1) Studies in Ethics, Law and Technology <http://www.degruyter.com/view/j/selt.2008.2.1/selt.2008.2.1.1036/selt.2008.2.1.1036.xml?format=INT>; Charles J Dunlap, Jr, ‘Technology: Recomplicating Moral Life for the Nation’s Defenders’ (1999) 24 Parameters: US Army War College Quarterly 24; Noel Sharkey, ‘Automated Killers and the Computing Profession’ (2007) 40(11) Computer 124, doi: 10.1109/MC.2007.372; Noel Sharkey, ‘Death Strikes from the Sky: The Calculus of Proportionality’ (2009) 28(1) IEEE Technology and Society 16, doi: 10.1109/MTS.2009.931865; Robert Sparrow, ‘Robotic Weapons and the Future of War’ in Paolo Tripodi and Jessica Wolfendale (eds), New Wars and New Soldiers: Military Ethics in the Contemporary World (Ashgate, 2011); Robert Sparrow, ‘Predators or Plowshares? Arms Control of Robotic Weapons’ (2009) 28(1) IEEE Technology and Society 25, doi: 10.1109/MTS.2009.931862; Patrick Lin, George Bekey and Keith Abney, Autonomous Military Robotics: Risk, Ethics, and Design (report prepared for the United States Department of the Navy, 2008) <http://ethics.calpoly.edu/ONR_report.pdf> .

[5] This contrast was commented upon in Kenneth Anderson, ‘The Rise of International Criminal Law: Intended and Unintended Consequences’ (2009) 20 European Journal of International Law 331, 345, n 29.

[6] Three Laws of Robotics (2012) Wikipedia <http://en.wikipedia.org/wiki/Three_Laws_of_Robotics> .

[7] Ibid.

[8] The Association identifies itself as ‘the world's largest non-profit organization devoted exclusively to advancing the unmanned systems and robotics community.’ See AUVSI, FAQ for Media (2012) <http://www.auvsi.org/news/mediaresources/faqformedia/> .

[9] The survey, conducted by Kendall Haven, is reported in Peter Singer, ‘The Ethics of Killer Applications: Why is it So Hard to Talk About Morality When it Comes to New Military Technology?’ (2010) 9(4) Journal of Military Ethics 299, 301, doi: 10.1080/15027570.2010.536400.

[10] Ibid 304-306.

[11] See generally Thomas P Ehrhard, Air Force UAVs: The Secret History (Mitchell Institute for Airpower Studies, 2010).

[12] Ehrhard notes that the CIA, along with its sister agencies, such as the top-secret National Reconnaissance Office, funded more than 40 per cent of the total investment in research on unmanned aerial vehicles from 1960 through 2000. Ibid 5.

[13] 2008 Harvard Session, above n 4, 2.

[14] UK Ministry of Defence, Joint Doctrine Note 3/10 Unmanned Aircraft Systems: Terminology, Definitions and Classification (May 2010) <http://www.mod.uk/NR/rdonlyres/FBC33DD1-C111-4ABD-9518-A255FE8FCC5B/0/JDN310Amendedweb28May10.pdf> (‘UK Joint Doctrine Note’).

[15] Ibid 2-5.

[16] Ibid.

[17] Ibid 2-6.

[18] Noel Sharkey, ‘Automating Warfare: Lessons Learned from the Drones’ (2011) 21(2) Journal of Law, Information and Science, doi: 10.5778/JLIS.2011.21.Sharkey.1.

[19] UK Joint Doctrine Note, above n 14, 2-3.

[20] Ibid.

[21] United States Air Force, Unmanned Aircraft Systems Flight Plan 2009-2047 (2009) 50, <http://www.fas.org/irp/program/collect/uas_2009.pdf> (‘Unmanned Aircraft Systems Flight Plan 2009-2047’).

[22] National Defense Authorization Act for Fiscal Year 2001, Pub L No 106–398, § 220(a), 114 Stat 1654, 1654A-38 (2000).

[23] See United States Government Accountability Office, Unmanned Aircraft Systems: Additional Actions Needed to Improve Management and Integration of DOD Efforts to Support Warfighter Needs (November 2008) Report to the Subcommittee on Air and Land Forces, Committee on Armed Services, House of Representatives <http://www.gao.gov/new.items/d09175.pdf> .

[24] Department of Defense, Report to Congress: Development and Utilization of Robotics and Unmanned Ground Vehicles (October 2006) 11, <http://www.techcollaborative.org/files/JGRE_UGV_FY06_Congressional_Report.pdf> (‘Development and Utilization of Robotics and Unmanned Ground Vehicles’); US law requires that, by 2015, one third of US operational ground combat vehicles be unmanned: see Office of the Secretary of Defense, Unmanned Systems Roadmap 2007-2032 (2007) 6, <http://www.fas.org/irp/program/collect/usroadmap2007.pdf> . For fiscal year 2010, the US Department of Defense sought a budget of $5.4 billion for unmanned systems (including systems for use on land, in the air, and at sea), an increase of 37.5 per cent over the past two years. See ‘Pentagon’s Unmanned Systems Spending Tops $5.4 billion in FY2010’, Defense Update (online), 14 June 2009, <http://defense-update.com/newscast/0609/news/pentagon_uas_140609.html> .

[25] Joan Michel, ‘Unmanned Ground Vehicles in ISR Missions’ (2011) 1(3) Tactical Intelligence, Surveillance and Reconnaissance 5, <http://www.kmimediagroup.com/files/TISR_1-3_final.pdf> .

[26] DOD Roadmap 2009, above n 2, 4.

[27] Danielle Lucey, Unmanned Systems Make a Bang at DSEi [Defence and Security Equipment International] 2011 (13 September 2011) Association for Unmanned Vehicle Systems International <http://www.auvsi.org/news/> .

[28] Development and Utilization of Robotics and Unmanned Ground Vehicles, above n 24, 12.

[29] Singer, above n 3, 29-32.

[30] Ibid; see also Seth Porges, ‘Real Life Transformer Could Be First Robot to Fire in Combat’, Popular Mechanics (1 October 2009) <http://www.popularmechanics.com/technology/military/4230309> .

[31] MDARS — 21st Century Robotic Sentry System, General Dynamics Robotics Systems, <http://www.gdrs.com/about/profile/pdfs/0206MDARSBrochure.pdf> .

[32] Olivia Koski, ‘In a First, Full-Sized Robo-Copter Flies With No Human Help’, Wired (online), 14 July 2010 <http://www.wired.com/dangerroom/2010/07/in-a-first-full-sized-robo-copter-flies-with-no-human-help/> .

[33] Peter Finn, ‘A Future for Drones: Automated Killing’, Washington Post (online), 19 September 2011, <http://www.washingtonpost.com/national/national-security/a-future-for-drones-automated-killing/2011/09/15/gIQAVy9mgK_story.html> .

[34] Unmanned Aircraft Systems Flight Plan 2009-2047, above n 21, 26.

[35] See descriptions at General Atomics Aeronautical, Sky Warrior (2012) <http://www.ga-asi.com/products/aircraft/er-mp-uas.php>; Defense Update, Sky Warrior Goes into Production to Equip U.S. Army ER/MP Program (9 July 2010) <http://www.defence-update.net/wordpress/20100709_sky_warrior_lrip.html> .

[36] Unmanned Aircraft Systems Flight Plan 2009-2047, above n 21, 33. A group of European firms, led by Dassault, is developing similar technology for the European market: Erik Sofge, ‘Top 5 Bomb-Packing, Gun-Toting War Bots the U.S. Doesn’t Have’, Popular Mechanics (online), 1 October 2009, <http://www.popularmechanics.com/technology/military/4249209> .

[37] Unmanned Aircraft Systems Flight Plan 2009-2047, above n 21, 41.

[38] Noel Sharkey, ‘Saying “No!” to Lethal Autonomous Targeting’ (2010) 9 Journal of Military Ethics 369, 377-378.

[39] British Air Marshal Steve Hillier sees ‘an enduring requirement for a human in the loop for decision-making. When you get to attack, you need someone to exercise judgement’. Quoted in Craig Hoyle, Farnborough: UK Unmanned Air Vehicles (2010) <http://www.flightglobal.com/articles/2010/07/13/344077/farnborough-uk-unmanned-airvehicles.html> .

[40] DOD Roadmap 2009, above n 2.

[41] Cheryl Pellerin, ‘Robots Could Save Soldiers’ Lives, Army General Says’, American Forces Press Service (online), 17 August 2011, <http://www.defense.gov/news/newsarticle.aspx?id=65064> .

[42] See Singer, above n 9; and Ronald C Arkin, Alan R Wagner and Brittany Duncan, ‘Responsibility and Lethality for Unmanned Systems: Ethical Pre-mission Responsibility Advisement’ (GVU Technical Report GIT-GVU-09-01, GVU Center, Georgia Institute of Technology, 2009).

[43] Robin Hughes and Alon Ben-David, ‘IDF Deploys Sentry Tech on Gaza Border’, Jane’s Defence Weekly (6 June 2007).

[44] Noah Shachtman, ‘Robo-Snipers, “Auto Kill Zones” to Protect Israeli Borders’, Wired, 4 June 2007 <http://www.wired.com/dangerroom/2007/06/for_years_and_y/> .

[45] Anshel Pfeffer, ‘IDF’s Newest Heroes: Women Spotters on Gaza’s Borders’, Haaretz (online), 3 March 2010 <http://www.haaretz.com/print-edition/news/idf-s-newest-heroeswomen-spotters-on-gaza-border-1.264024> .

[46] ‘Israeli War-Room “Look-Out” Girls Use New “See-Shoot” Remote Control’, BBC Monitoring Middle East, 9 January 2009.

[47] Yaakov Katz, ‘IDF Unveils Upgrades to Gaza Fence’, Jerusalem Post (online), 3 March 2010 <http://www.jpost.com/Israel/Article.aspx?id=170041> .

[48] Ali Waked, ‘Palestinians: 1 Dead, 4 Injured From IDF Fire in Gaza’, Ynet News (online), 1 March 2010, <http://www.ynetnews.com/articles/0,7340,L-3856218,00.html> .

[49] ‘Remotely Controlled Mechanical Watchtowers Guard Hostile Borders’, Homeland Security Newswire (online), 19 July 2010 <http://homelandsecuritynewswire.com/remotely-controlled-mechanical-watch-towers-guard-hostile-borders>; Shachtman, above n 44; Jonathan Cook, ‘Israel Paves the Way for Killing by Remote Control’, The National (Abu Dhabi), 13 July 2010.

[50] Kim Deok-hyun, ‘Army Tests Machine-gun Sentry Robots in DMZ’, Yonhap News Agency (online), 13 July 2010, <http://english.yonhapnews.co.kr/national/2010/07/13/14/0301000000AEN20100713007800315F.HTML> .

[51] Ibid; Jon Rabiroff, ‘Machine gun-toting robots deployed on DMZ’, Stars and Stripes (online) 12 July 2010, <http://www.stripes.com/news/pacific/korea/machine-gun-toting-robots-deployed-on-dmz-1.110809> .

[52] Sofge, above n 36.

[53] Unmanned Aircraft Systems Flight Plan 2009-2047, above n 21, 50.

[54] Featherstone, above n 3.

[55] DOD Roadmap 2009, above n 2, 10.

[56] G-NIUS Unmanned Ground Systems, Guardium UGV, described at <http://www.g-nius.co.il/unmanned-ground-systems/guardium-ugv.html> and <http://www.defenseupdate.com/products/g/guardium.htm> .

[57] National Science Foundation, National Robotics Initiative, <http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=503641&org=CISE> .

[58] Stephanie Levy, Obama Announces Manufacturing Plan that Includes Unmanned Systems, Association for Unmanned Vehicle Systems International (24 June 2011) <http://www.auvsi.org/auvsi/news/#AMP> .

[59] See Development and Utilization of Robotics and Unmanned Ground Vehicles, above n 24, 47 (describing research and development activities directed towards developing military capabilities for robotics and unmanned ground vehicles of the United States’ allies).

[60] William Yong and Robert F Worth, ‘Iran’s President Unveils New Long-range Drone Aircraft’, New York Times (online), 22 August 2010 <http://www.nytimes.com/2010/08/23/world/middleeast/23iran.html> .

[61] Jeremy Page, ‘China’s New Drones Raise Eyebrows’, Wall Street Journal (18 November 2010). See also ‘China building an army of unmanned military drones “to rival the US”’, MailOnline (online), 5 July 2011 <http://www.dailymail.co.uk/news/article-2011533/China-building-army-unmanned-military-drones-rival-U-S.html> .

[62] See ‘China develops military drones for Pakistan’, 7 July 2011, <http://www.china-defense-mashup.com/china-develops-military-drones-for-pakistan.html> (‘“The United States doesn’t export many attack drones, so we’re taking advantage of that hole in the market,” said Zhang Qiaoliang, a representative of the Chengdu Aircraft Design and Research Institute, which manufactures many of the most advanced military aircraft for the People’s Liberation Army.’).

[63] Noah Shachtman, ‘Iraq Militants Brag: We’ve Got Robotic Weapons, Too’, Wired, 4 October 2011, <http://www.wired.com/dangerroom/2011/10/militants-got-robots/> .

[64] For more discussion of these arguments, see Ronald Arkin, Governing Lethal Behaviour in Autonomous Robots (CRC Press, 2009); Lin, Bekey and Abney, above n 4.

[65] Development and Utilization of Robotics and Unmanned Ground Vehicles, above n 24, 9; See also DOD Roadmap 2009, above n 2, 10.

[66] Ronald C Arkin, ‘Ethical Robots in Warfare’ (2009) 28(1) IEEE Technology and Society 30, 32, doi: 10.1109/MTS.2009.931858.

[67] Anderson, above n 5, 345.

[68] Ibid 346.

[69] For more discussion of these arguments, see eg, Asaro, above n 4; Borenstein, above n 4; Sharkey, ‘Automated Killers and the Computing Profession’, above n 4; Sharkey, ‘Death Strikes from the Sky: The Calculus of Proportionality’, above n 4; Sparrow, ‘Robotic Weapons and the Future of War’, above n 4; Sparrow, ‘Predators or Plowshares?’, above n 4.

[70] See text accompanying notes 18-21 above.

[71] As pointed out by the UK Ministry of Defence, ‘The rapid, at times almost chaotic, development of UAS [unmanned aircraft systems] over the last 10 years has led to a range of terminology appearing in both military and civilian environments. As a result, some legacy terminology has become obsolete, while differing national viewpoints have made it difficult to achieve standardisation on new terms. ... Similarly, unmanned aircraft (UA)-related concepts such as autonomous and automated suffer from widely differing definitions, even within the United Kingdom. ... All of these areas have the potential to cause confusion or misunderstanding when unmanned aircraft issues are discussed between military, industrial and academic audiences.’ See UK Joint Doctrine Note, above n 14, v; See also Office of the Under Secretary of Defense, Joint Robotics Program Master Plan FY2005: Out Front in Harm’s Way (Office of the Undersecretary of Defense, 2005) (‘Joint Robotics Program Master Plan FY2005’); Lin, Bekey and Abney, above n 4, 4-5; Singer, above n 3, 67 (defining ‘robot’).

[72] Compare, for example, definitions of ‘autonomous’, ‘semi-autonomous’ and ‘automation’ in Joint Robotics Program Master Plan FY2005, above n 71, and UK Joint Doctrine Note, above n 14.

[73] See, for example, Philip Alston, Civil and political rights, including the questions of disappearances and summary executions: Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, UN Doc E/CN.4/2005/7 (22 December 2004); Philip Alston, Interim report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, UN Doc A/61/311 (5 September 2006); and Philip Alston, Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, UN Doc A/HRC/14/24/Add.6 (28 May 2010).

[74] Arkin, Wagner and Duncan, above n 42; Ronald C Arkin, Patrick Ulam and Brittany Duncan, ‘An Ethical Governor for Constraining Lethal Action in an Autonomous System’ (GVU Technical Report GIT-GVU-09-02, GVU Center, Georgia Institute of Technology, 2009); see also R C Arkin, The Case for Ethical Autonomy in Unmanned Systems (2010) Georgia Institute of Technology <http://www.cc.gatech.edu/ai/robot-lab/online-publications/Arkin_ethical_autonomous_systems_final.pdf>; R C Arkin, Moral Emotions for Robots (2011) Georgia Institute of Technology <http://www.cc.gatech.edu/ai/robot-lab/online-publications/moral-final2.pdf>; and R C Arkin, P Ulam and A R Wagner, ‘Moral Decision-making in Autonomous Systems: Enforcement, Moral Emotions, Dignity, Trust, and Deception’ (2012) 100(3) Proceedings of the IEEE 571, doi: 10.1109/JPROC.2011.2173265, also available at <http://www.cc.gatech.edu/ai/robot-lab/online-publications/IEEE-ethicsv17.pdf> .

[75] For example, Peter Asaro, ‘Modeling the Moral User’ (2009) 28 IEEE Technology and Society 20, doi: 10.1109/MTS.2009.931863; Sharkey, ‘Death Strikes from the Sky: The Calculus of Proportionality’, above n 4; Sparrow, ‘Robotic Weapons and the Future of War’, above n 4.

[76] 2008 Harvard Session, above n 4, 8.

[77] Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts, opened for signature 12 December 1977, 1125 UNTS 3 (entered into force 7 December 1979) (‘Additional Protocol I’). For analyses of the requirements of Article 36 see Justin McClelland, ‘The Review of Weapons in accordance with Article 36 of Additional Protocol I’ (2003) 850 International Review of the Red Cross 397; and Isabelle Daoust, Robin Coupland and Rikke Ishoey, ‘New Wars, New Weapons? The Obligation of States to Assess the Legality of Means and Methods of Warfare’ (2002) 846 International Review of the Red Cross 345.

[78] 2008 Harvard Session, above n 4, 2.

[79] DOD Roadmap 2009, above n 2, 39-40 (‘The current commitment of combat forces has seen a number of unmanned systems fielded quickly without the establishment of the required reliability and maintainability infrastructure that normally would be established prior to and during the fielding of a system. This was justifiably done as a conscious decision to save Warfighter’s lives at the risk of reliability and maintainability issues with the equipment fielded.’)

[80] Unmanned Aircraft Systems Flight Plan 2009-2047, above n 21, 41.

[81] P W Singer, ‘Robots at War: The New Battlefield’ (Winter 2009) Wilson Quarterly; see also Jessie Y C Chen et al, Human-Robot Interface: Issues in Operator Performance, Interface Design, and Technologies (July 2006) United States Army Research Laboratory, ARL-TR-3834, <http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA451379> (discussing research findings on benefits and drawbacks of automation).

[82] Quote taken from James M McPherson, Battle Cry of Freedom (1988) 551, cited in UK Joint Doctrine Note, above n 14, 5-9.

[83] See Philip Alston, Interim report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, UN Doc A/65/321 (23 August 2010). The report was formally presented to the General Assembly by my successor as Special Rapporteur, Professor Christof Heyns.

[84] The Canadian representative, Ms Boutin, asked ‘what actions States could take to counter concerns regarding the misuse of technology raised in the report’. UN GAOR, 65th sess, 27th mtg, 22 October 2010, UN Doc A/C.3/65/SR.27 (6 December 2010) 13-14 [13].

[85] Mr Kerschischnig, representing Liechtenstein, asked ‘what measures should be taken at an international and national level to address’ the issue of human rights and robotic technologies. Ibid 4 [15].

[86] The Swiss representative, Mr Vigny, asked whether there were ‘any States that had met the obligation to determine whether their use would be prohibited by article 36 of Additional Protocol 1 of the Geneva Conventions or by any other international norm’. He also asked the Special Rapporteur if he considered ‘it possible that a robot could be developed that would be a more ethical decision maker than a human’. Ibid 3 [8].

[87] Rather than addressing the substance of the report or the proposals it contained, Mr Baños, speaking on behalf of the United States of America, stated that ‘the Special Rapporteur had gone beyond his mandate in his comments on operations during armed conflict and many of the findings and conclusions in his final report seemed to be based on a fundamental confusion over the applicable framework or an imprecise reading of the substantive law, and had failed to take into consideration that the lawful use of force in armed conflict or in self-defence, in line with international humanitarian law, did not constitute an extrajudicial killing.’ Ibid 4 [14].

[88] Extrajudicial, summary or arbitrary executions, GA Res 65/208, UN GAOR, 65th sess, 71st plen mtg, Agenda item 68(b), UN Doc A/RES/65/208 (21 December 2010).

[89] See Philip Alston, Jason Morgan-Foster and William Abresch, ‘The Competence of the UN Human Rights Council and its Special Procedures in relation to Armed Conflicts: Extrajudicial Executions in the “War on Terror”’ (2008) 19 European Journal of International Law 183.

[90] See Al-Skeini & Ors v The United Kingdom (European Court of Human Rights, Grand Chamber, Application no 55721/07, 7 July 2011).

[91] For an indication of the relatively narrow focus of the arms program at HRW, see Human Rights Watch, Arms (2012) <http://www.hrw.org/category/topic/arms> . Indeed, the single reference to robots on the entire website of HRW comes in a report on political prisoners in Burma, in which one of them comments that ‘[w]e eat and sleep like robots.’ Human Rights Watch, Burma’s Forgotten Prisoners (September 2009) 11, <http://www.hrw.org/sites/default/files/reports/burma0909_brochure_web.pdf> .

[92] International Committee for Robot Arms Control, Mission Statement (2011) <http://www.icrac.co.uk/mission.html> .

[93] The statement of the 2010 Expert Workshop on Limiting Armed Tele-Operated and Autonomous Systems, Berlin, 22 September 2010 <http://www.icrac.co.uk/Expert%20Workshop%20Statement.pdf> .

[94] Jakob Kellenberger, ‘International Humanitarian Law and New Weapon Technologies’ (Keynote address delivered at the 34th Round Table on Current Issues of International Humanitarian Law, San Remo, 8-10 September 2011) <http://www.icrc.org/eng/resources/documents/statement/new-weapon-technologies-statement-2011-09-08.htm> .

[95] Ibid.

