
Journal of Law, Information and Science



Krishnan, Armin --- "UVs, Network-centric Operations, and the Challenge for Arms Control" [2012] JlLawInfoSci 4; (2012) 21(2) Journal of Law, Information and Science 61

UVs, Network-centric Operations, and the Challenge for Arms Control



The development of unmanned aerial vehicles (UAVs) and other unmanned vehicles (UVs) goes back at least a hundred years. During the Cold War they were used primarily for reconnaissance missions over heavily defended territories. In more recent times modern armed forces have begun to arm UVs and there is now the prospect that conventional wars (and not just nuclear wars) could be fought by remote control across continents. Defensive and reconnaissance roles for UVs seem to be far less ethically and legally problematic than the new offensive roles that UVs are gradually taking over. The growing autonomy of armed UVs is a particular concern, as it raises the spectre of Terminators that can decide by themselves when to attack and what target to engage. As pointed out in the précis article, although we are still a long way from fully autonomous UVs, there are good reasons to assume that full autonomy for military UVs could materialise in the future because of rapid technological progress and because of considerations relating to military effectiveness. In the absence of relevant laws restricting military UVs there could be serious negative consequences, such as increased dangers for civilians as a result of out-of-control robots, an increased propensity towards the use of force as a result of a combination of low political costs and a ‘Playstation’ mentality that makes killing too easy, and even an increased danger of accidental war triggered by automated defensive systems. At the same time, Brendan Gogarty and Meredith Hagger expressed hope that ‘[t]here is still a chance to at least shape the way UVs are used and how far they proliferate within militaries and beyond.’[1]

This commentary focuses on the problem of arms control for UVs. It will be argued that although an international prohibition of autonomous UVs would be highly desirable, the challenges will be enormous because of the great complexity that comes with the integration of UVs into network-centric operations and the decreasing size of UVs. Network-centric operations make it difficult to determine the locus of a decision, and the increasing miniaturisation of UVs makes them very difficult to detect, both of which create difficulties for future arms control. In order for a future arms control treaty on armed autonomous weapons to work there needs to be an effective treaty monitoring and verification mechanism. History has shown that arms control agreements that have no effective monitoring mechanisms are prone to be violated, which raises doubts about the wisdom behind such agreements.[2] A possible solution proposed in this comment is to restrict the acceptable roles and capabilities of UVs and to preventively prohibit UVs of a very small size, regardless of whether they carry weapons or not.

1 UVs and Network-centric Warfare

In the mid-1990s, the US military developed the concept of a ‘system-of-systems’, which eventually became known as network-centric warfare (NCW).[3] The general idea was that all military units and command posts would form a network across which information could be shared freely, creating a common operating picture and thereby tremendously improving situational awareness.[4] Networked forces can move and react much faster and use tactics that are impossible for armed forces that are not networked. In NCW there are three main elements that are brought together in a single computer network: sensors, which provide intelligence, surveillance, and reconnaissance (ISR); decision-making, which results from the processing, analysis, and dissemination of this information; and finally shooters, which enable the precision engagement of identified targets.[5] These elements can be separated from one another or combined in one neat package. Since NCW is driven by the aim of getting inside the enemy’s decision cycle by sensing, deciding and acting faster than the enemy, the combination of sensor and shooter is an obvious solution. For example, UAVs are considered central to the vision of NCW because they improve situational awareness: they can provide real-time information about the battlefield that can be distributed by a command post to units in the field. Drones used by the Israel Defense Forces shortened the ‘sensor-to-shooter’ cycle to one minute during the 2006 Lebanon War.[6] The tendency over the last 10 years has been to arm teleoperated reconnaissance drones with missiles so that they can immediately engage any ‘time sensitive target’ they find.
This is very important with respect to enemies such as insurgents and terrorists, who can suddenly appear and disappear in urban environments.[7] However, this does not imply that the use of teleoperated drones armed with missiles is an optimal approach, or that the tactic will remain effective in the future.

There are indeed other ways to organise sensors, decision-making, and shooters within a network. One possibility is to make UVs completely autonomous so that they can respond to threats and engage targets at a speed far beyond human capability. Another possibility, which is currently more technologically viable, is to use unarmed autonomous UVs for target acquisition for any networked shooters, which could be human operated, remotely operated, or even automated. The shooter receives the coordinates or other target data directly from the sensor and launches precision-guided munitions (PGMs).[8] As the attack on the target could be carried out with hypersonic PGMs or with directed energy weapons,[9] which could achieve near instant effects, it would not matter much that the actual shooter is distant while the sensor that ‘acquires’ the target is close. The decision to attack the target could come from anywhere in the network: from a human operator in a location different from either the sensor or the shooter, from an intelligent battle management system that controls other elements in the network, or even from the UV that discovered the target itself. This means that in an NCW environment, sensors that acquire targets for shooters de facto become part of a much larger integrated weapons system. A UV could continuously surveil an area, identify and track a target, and call in an attack from a shooter in a distant location, and all of this could happen in an extremely short time frame.

Why is this a problem? NCW blurs the lines between sensor and shooter, target acquisition and intelligence, and intelligence and operations.[10] Every element in the network becomes part of a gigantic weapons system and the locus of any decision becomes extremely difficult to identify, at least from an outside perspective. If the primary concern is that UVs could be armed and/or autonomous, NCW makes it futile to try to prevent the aforementioned negative effects of armed and autonomous UVs by simply outlawing them, since similar effects can be achieved with UVs that are unarmed and have very limited autonomy. An unarmed UV can still function like an offensive weapon if it is networked with a remote shooter and can guide PGMs to a target. Since NCW in principle allows both options (humans can take control of any robotic element in the network, or they can delegate decisions to intelligent machines), it can become extremely difficult to determine whether a human made an attack decision or whether the decision was made by some intelligent software. For example, brilliant munitions like the Low Cost Autonomous Attack System (LOCAAS) include the option for a human operator to confirm targets or, if necessary, to retarget or abort.[11] Since such systems are not yet capable of sufficiently reliable target recognition, a human would be able to make much better targeting decisions. In the long run, however, a single human operator may control numerous UVs at a time, meaning that much of the actual targeting would be largely automated with minimal human supervisory control. The US Department of Defense’s Unmanned Aircraft Systems Roadmap 2005-2030 states,

human oversight of a large number of UA operating in combat must be reduced to the minimum necessary to prosecute the information war. Automated target search and recognition will transfer initiative to the aircraft, and a robust, anti-jam communications network that protects against hostile reception of data is a crucial enabler of UA swarming.[12]

The report also points out that rules of engagement ‘may require the intervention of a human operator’.[13] However, looking at it from the outside it is impossible to know whether a human is in fact in the loop, as the complete automation of the whole process could require little more than a software switch.

2 Miniaturisation

Martin Libicki published one of the most original and imaginative studies in the entire Revolution in Military Affairs (RMA) debate in 1994, called The Mesh and the Net: Speculations on Armed Conflict in an Age of Free Silicon.[14] His argument was that on the future battlefield almost everything becomes visible through advanced sensors and that anything that is visible can be destroyed anywhere at any time. This means that large platforms are no longer survivable under these conditions. This results in the need for sensors and weapons that are small and can be deployed in large numbers. Battlefields could be littered with millions of microsensors and micro-weapons that form an intelligent network, which is extremely robust and resilient. Libicki calls this approach to warfare Fire Ant Warfare.[15]

The general perception is that robots or UVs have to be very big or very sophisticated machines. This need not be the case: very large numbers of networked unsophisticated micromachines can be more dangerous and resilient than big platforms such as tanks. A single micromachine may have very limited capability in terms of payload and on-board intelligence, but collectively a swarm of such micromachines could be capable of autonomous intelligent behaviour and be able to attack large weapons systems, equipment and human beings.[16] The micromachines could neutralise large machines by attacking critical components such as electronics or engines. Of course, human beings are much more vulnerable, and even something of microscopic size such as a chemical or a biological agent can be lethal. Nanomedicine research is currently developing nanomachines that can enter the human body to repair cells and arteries. Obviously, the same technology could be used to cause fatal damage to a human body.[17]

How realistic is the vision of Fire Ant Warfare? Currently many micro-UVs are the size of model aircraft, about 0.5 m to 1 m, but autonomous insect-size UVs are already under development.[18] In fact, Sandia National Labs proved as early as 1996, with their Miniature Autonomous Robotic Vehicle (MARV), that an autonomous microrobot of about one cubic inch is technologically feasible.[19] British Special Forces use micro-UAVs that are six inches in size to search houses, and in the future these micro-UAVs could carry explosives for attacking snipers.[20] Weaponised micro-UVs will be developed in the near future, if they are not already being operationally tested.[21] The effective range of such micro-UVs would be very limited because of their size, but they could be delivered to distant areas in many ways, including by larger UVs that release them within their effective range of operation. Nanotechnology may even enable machines of molecular size, which could self-assemble into larger machines or objects and maybe even self-replicate.[22] However, at the moment even the theoretical possibility of self-replicating nanobots is contested and it may well be that this aspect of nanotechnology will remain science fiction.

The trend towards ever smaller and lighter UVs has been reinforced by the recent financial crisis, which contributed to the cancellation of many expensive defence acquisition projects such as the US Army’s Future Combat Systems program in 2009.[23] As early as 2005, a US Air Force study had suggested that ‘[c]urrent advances in miniaturization are giving small UAVs capabilities comparable to their larger cousins at significantly lower costs.’[24] More recently, a UK Ministry of Defence report entitled The UK Approach to Unmanned Aircraft Systems stressed the potential effectiveness of employing large numbers of simple, low-cost, smaller UVs instead of smaller numbers of big UVs.[25] Smaller UVs are clearly the way forward.

Micro-UVs pose very serious challenges to arms control because they are so easy to hide. Unlike large weapons systems, which can be monitored with remote sensors, small UVs will not be discoverable on a satellite image. They could be secretly deployed by manned aircraft or larger UVs for ISR or offensive purposes. A large-scale use of very small micro-UVs would most likely produce visible results and evidence against the country that deployed them on the battlefield, but the research, development and possession of micro-UVs may go completely unnoticed unless there were a fairly intrusive weapons inspection regime with on-site inspections.

3 Manhunting

Philip Alston, the UN Special Rapporteur on Extrajudicial Executions, has pointed out that the use of drones for targeted killings of terrorists has become a major tendency in the past decade because of advances in robotic technologies.[26] It is certainly no coincidence that targeted killing has, up to now, been the primary role of armed UVs in ongoing armed conflicts. One reason for this is the technological limitations of current UVs – they are not yet a match for more sophisticated manned platforms, which currently severely limits their role in conventional warfare. Another reason is the changing nature of war. More traditional interstate conflicts have become very rare and some would say they are on the verge of extinction, while unconventional conflicts now very much characterise contemporary warfare.[27] The new enemies are terrorists, insurgents, and criminals, who have, apart from their own lives, very little to offer in terms of infrastructure and equipment that can be attacked.[28] The hope is that by identifying and neutralising key individuals, the enemy and their operations can be disrupted and eventually defeated by depriving them of their most skilled and experienced leaders and operatives.[29] At least for now, manhunting, the search for and neutralisation (kill or capture) of specific individuals, represents the current reality and possibly the future of warfare.[30] UVs play an important role in manhunting for three main reasons: they provide continuous surveillance; they enable precision attacks in terrain that is difficult to access; and they minimise risk to military personnel.

There are many legal issues with respect to the practice of targeted killing regardless of whether it is carried out by Special Forces, a sniper, or an armed UV and these have already been thoroughly analysed by legal experts like Nils Melzer, Avery Plaw, and Kenneth Anderson.[31] This is not the place to rehash these well-developed arguments. However, there are some other concerns about targeted killing that are more specific to the use of armed drones, as highlighted by Philip Alston in his report.[32] Alston identifies two specific concerns with respect to the use of armed drones in armed conflict, which are autonomy and accountability. Autonomous armed UVs could endanger innocent civilians as they may not be able to comply with international humanitarian law because of technological limitations, e.g. with respect to automated target recognition (ATR). Although this concern is generally valid, it seems very unlikely that the question of the autonomy of UVs would even be relevant in the context of targeted killing. After all, the intention is to kill very specific individuals and there is no pressing operational need for fully automating this process. Automation makes sense if the number of targets is large or if very high speed of action is required. For example, a high degree of automation is crucial for defeating another automated system such as an air defence system, as an automated system could respond extremely fast, but it is unnecessary for attacking a human being who may not even be aware that they are being targeted and who will in any case not be able to respond significantly faster than a human drone operator.

However, another aspect of autonomy that goes beyond target selection is allowing a UV to autonomously search for, pursue, and kill a pre-selected target. This would indeed be the Terminator scenario: one could program a drone to find and kill one particular individual, and the drone could use biometrics or similar methods to reliably identify and kill the individual without causing harm to anybody else. If this could be done covertly and without leaving many traces, it would create a nightmare world in which anybody could be quietly eliminated from a distance. This scenario would greatly amplify the accountability issue which already exists with respect to the lack of transparency surrounding CIA drone strikes in Pakistan.[33] Although one can clearly question the legality of these drone strikes, especially with respect to the issue of collateral damage and the proportionality of the use of force,[34] there is at least some public accountability. The current drone strikes are visible to the public precisely because they cause significant collateral damage and are therefore regularly reported by the press. But once drones and the weapons they carry become smaller and more precise, thereby radically reducing collateral damage,[35] there may not be any visible effect other than the sudden death of an alleged terrorist. If targeted killing could be carried out with zero collateral damage, it would immediately quiet criticism, and targeted killing could become even more widely used as a counterterrorism tactic. At this point military drone strikes would also be indistinguishable from assassination, and this is in itself problematic, as it implies killing in a perfidious manner.
International humanitarian law explicitly prohibits perfidy[36] and the assassination of specific members of the enemy is contrary to customary law, as it is considered dishonourable.[37] The use of letter bombs and poisons could be a much more discriminate method of attacking the enemy, but because of their perfidious and dishonourable nature these practices are prohibited.[38] I would argue that weaponised micro-UVs are no less perfidious and dishonourable than letter bombs and poisons, especially if they enable the killing of specific individuals from great distance and with zero risk. The method of killing affects the morality and legality of a targeted killing, and this does not necessarily concern the issue of collateral damage and endangering innocent bystanders.

4 Proposed Solutions

The wider issues connected to the emergence of network-centric operations are very difficult to address legally. When it comes to developing strategies for the effective international arms control of UVs, the challenges may even be insurmountable. The technology for UAVs is already easily accessible to modern armed forces. Moreover, the capabilities of UVs will improve over time as a result of making them part of an integrated military network into which various computerised systems and weapons can be plugged as needed. Thus it may not make much sense to look at a UV as a single weapons system that has a clear and identifiable capability. In the coming age of NCW it is important not only to understand one platform or system as a single item, but more importantly to understand the greatly enhanced capability that comes with the platform or system forming part of a larger network. This insight is highly relevant for arms control in an age of network-centric operations. Therefore, it may not be an effective solution to seek to outlaw armed autonomous UVs, as suggested by arms control expert Jürgen Altmann.[39] From an arms control perspective it would be very difficult to determine from the outside whether a UV is autonomous or teleoperated. It may even be possible to convert a teleoperated UV into an autonomous UV without any visible change on the outside by upgrading the control software. Weapons inspectors would need to have access to the control software to ascertain the true nature of the UV, and no nation would grant access to the inner workings of its weapons systems because this would potentially expose secret military technology. So it might be necessary to prohibit all types of armed UVs in order to ensure effective monitoring of a future arms control treaty on UVs.
This would be politically difficult to achieve since several types of armed UVs have already entered service in countries like the US, Britain, Israel, and Russia, while many other countries are developing armed UVs. Existing armed UVs would need to be disarmed and research and design in this field would need to be discontinued, which does not seem very likely.

What can be done is to seek limitations on the overall capabilities of armed UVs in terms of their range, payload, and endurance in order to reduce dangers to civilians and to improve crisis stability.[40] The maximum range of armed UVs should not exceed several hundred kilometres, they should not carry large amounts of munitions that would enable them to attack numerous targets in a single mission, and their endurance should not vastly exceed the endurance of manned vehicles. These limitations would also reduce the overall damage that could be caused in a future ‘robot war’ involving large numbers of UVs and stand-off precision-guided munitions. Armed UVs should have a neutralisation mechanism that disables their weapons after a certain amount of time if the UV loses contact with its command post. Similarly, a UV’s weapons should automatically be disabled if it leaves the assigned ‘kill box’, the narrow segment of the battlespace for which the use of armed UVs has been authorised. Such limitations would be much more acceptable to nations that currently have or are developing armed UVs, since the technology for offensive autonomous UVs is still far off and their development is still too expensive.

Considering the possibility that unarmed UVs are used for target acquisition and are directly networked to shooters in remote locations, so that they effectively form part of a larger weapons system, it is important to consider the particular missions or roles of UVs in such circumstances. Some roles and missions are less problematic than others. Defensive roles such as air and missile defence would be a legitimate mission for highly automated systems, and indeed such automated defensive systems have already been deployed.[41] The surveillance and defence of borders and certain sensitive sites by UVs can theoretically trigger an accidental war, but so can any border incident involving human border guards. If the capabilities of such UVs and fixed sentry systems were adequately limited, the severity of incidents would also be limited, as would the danger of accidental war. Some offensive roles for UVs are also less problematic, such as pre-programmed strike missions against targets that have been selected in advance by human military planners, as such missions are already carried out by unmanned systems (ballistic and cruise missiles). There is no good reason for prohibiting such a use of UVs unless one would want to prohibit ballistic and cruise missiles as well. The suppression of enemy air defences is another role that autonomous UVs are destined to take over as such missions become too dangerous for manned aircraft. This would require the UV to have the capability to dynamically respond to suddenly appearing surface-to-air threats and maybe even aerial threats. As speed is critical in such missions, a UV would have to be able to respond on its own without having to wait for a human operator to determine whether the response is appropriate.
However, loitering missions where UVs could engage any targets of opportunity should be restricted to smaller segments of the battlespace where all targets are likely to be legitimate targets because the area has been cleared of civilians or there is no presence of civilians.

The miniaturisation of UVs is a matter of great concern in terms of arms control, certain missions, and the long-term risk that micro-UVs (and potentially nanobots) could represent to society. Most problematic is the possibility of using insect-size (or smaller) micro-UVs for hunting and eliminating specific individuals. It has been reported in the press that Israel intends to develop a ‘bionic hornet’, which can seek out and kill terrorists.[42] This technology may only materialise in 10 or 20 years, but the prospect of this development is troubling in many respects. It is a perfidious way of killing, similar to the use of poisons. A robotic assassination device such as a weapon-carrying micro-UV goes against the prohibition of perfidy and should clearly be outlawed. Since a micro-UV could effect the killing of an individual by clandestinely following them and by guiding precision weapons to them, it may matter little whether such a micro-UV carries a weapons payload itself. The possibility of a future capability to clandestinely assassinate people over large distances creates much more serious challenges in terms of transparency and accountability than the current practice of missile strikes from drones. It could be difficult to identify whether an attack has occurred or who was responsible for it. The best arms control strategy would be to prohibit all micro-UVs below a certain size.[43] This, of course, raises the question of how such a prohibition could be monitored. Some have suggested that nanotechnology (NT) should be leveraged for arms control, or that NT can provide effective countermeasures to new NT-enabled threats.[44] For example, micro- and nano-sensors could be used for detecting micro-UVs and nanobots. So there are long-term dangers as well as opportunities with respect to weapons miniaturisation and NT in the field of arms control.


There is still great uncertainty about the future of UVs, network-centric warfare, and the future availability of emerging technologies such as NT and artificial intelligence. It has been pointed out that there is not much experience with network-centric operations and that it is unclear how vulnerable a ‘system-of-systems’ is to electromagnetic pulse, high-powered microwaves, computer network attacks, and space warfare. Similarly, autonomous UVs are still at an experimental stage, and without the capability of autonomous operation, UVs could very well turn out to be a technological dead end once the states that currently operate armed UVs have to face much more technologically sophisticated enemies. For this reason it may be too early to say whether the American vision of warfare, which is largely characterised by network-centrism and robotics, will turn out to be accurate. However, current trends indicate that many modern armed forces are becoming increasingly networked and that new weapons systems and sensors are being designed to easily plug into larger networks with which they can share information and that can also control them. This is particularly important for UVs, especially since they will lack the ability to successfully operate entirely on their own for the foreseeable future. While larger size offers some advantages like greater speed, range, and firepower, the trend in UVs is clearly in the direction of smaller systems that are cheaper, stealthier, and more resilient. Smaller UVs also make more sense for military operations against enemies that do not have any heavy equipment or large infrastructure that could be attacked. Accordingly, there is a desire to have micro-UVs that could be dispersed in large numbers over a city and that could autonomously search for and possibly neutralise specific individuals without causing collateral damage.
As long as manhunting remains an important objective and strategy for the US military in fighting their current wars, UVs and other robotic weapons will remain an attractive option for hunting and attacking individuals with high precision. For the sake of transparency and accountability and because of the generally perfidious nature of attacks on individuals with future robotic assassination devices, it will be imperative to prohibit this practice under international law. At the very least there should be legal limitations on the acceptable capabilities of armed UVs and the circumstances under which they can be used, as well as robust mechanisms that ensure a sufficient level of transparency and accountability with respect to the use of lethal force.

[*] Visiting Assistant Professor for Security Studies, Intelligence and National Security Studies Program, University of Texas at El Paso, Texas. I would like to thank Robert Sparrow for suggesting me as a commentator in this issue of the Journal of Law, Information and Science and Brendan Gogarty for inviting me to comment on the précis article.

[1] Brendan Gogarty and Meredith Hagger, ‘The Laws of Man over Vehicles Unmanned: The Legal Response to Robotic Revolution on Sea, Land and Air’ (2008) 19 Journal of Law, Information and Science 73, 144.

[2] One important example is the biological weapons program of the Soviet Union, which had been revealed by the defector Ken Alibek in 1992. He indicated that the Soviet Union had violated the Biological and Toxin Weapons Convention of 1972 on a truly massive scale. See Ken Alibek and Stephen Handelman, Biohazard (Random House, 2000).

[3] William A Owens, ‘The Emerging U.S. System-of-Systems’ (February 1996) Strategic Forum 63.

[4] Clay Wilson, ‘Network Centric Operations: Background and Oversight Issues for Congress’ (Congressional Research Service, 2007) 2-3.

[5] Arthur Cebrowski and John Garstka speak of three grids that make up network centric operational architectures, which they call information grid, sensor grid, and transaction grid. Arthur Cebrowski and John Garstka, ‘Network-centric Warfare: Its Origin and Future’ (1998) 124(1) Proceedings of the Naval Institute. The Department of Defense report on Network Centric Warfare similarly speaks of three domains of warfare (physical, information, and cognitive). See Department of Defense, ‘Network Centric Warfare: Department of Defense Report to Congress’ (July 2001) 3.7-3.9 <> (accessed 25 April 2011).

[6] Barbara Opall-Rome, ‘Sensor to Shooter in 1 Minute: Inside the Israeli Air Campaign Against Hezbollah Targets’ (2006) 21(38) Defense News, 1.

[7] Gogarty and Hagger, above n 1, 81.

[8] The US was developing an autonomous missile system called Non-Line-Of-Sight Launch System or NETFIRES as part of the cancelled Future Combat Systems program. It was designed to be air dropped and then to be remote controlled by a network from which it would have received its targeting data. Although this particular program seems to be dead the concept appears to be still viable.

[9] These could include electromagnetic ‘railguns’, high-powered lasers, and even space-based weapons such as the ‘rods from God’ high-precision kinetic energy projectiles, which could hit a target hundreds of kilometres away within seconds. Such weapons are still on the drawing board, but they could become a reality within ten years.

[10] John Ferris, ‘Networkcentric Warfare, C4ISR and Information Operations: Towards a Revolution in Military Intelligence’ (2004) 19(2) Intelligence and National Security 204; Elizabeth Stanley-Mitchell, ‘Technology’s Double-Edged Sword: The Case of U.S. Army Battlefield Digitization’ (2001) 17(3) Defense & Security Analysis 271.

[11] Ronald C Arkin, Governing Lethal Behavior in Autonomous Robots (CRC Press, 2009) 24-25.

[12] US Department of Defense, Office of the Secretary of Defense, Unmanned Aircraft Systems Roadmap 2005-2030 (2005) B-9.

[13] Ibid A-4.

[14] Martin Libicki, ‘The Mesh and the Net: Speculations on Armed Conflict in an Age of Free Silicon’ (McNair Paper No 28, National Defense University, Washington DC, 1994).

[15] Ibid 28.

[16] Steven Metz, ‘Armed Conflict in the 21st Century: The Information Revolution and Post-Modern Warfare’ (US Army War College Institute of Strategic Studies, 2001) 67-71; Jürgen Altmann, ‘Military Uses of Nanotechnology: Perspectives and Concerns’ (2004) 35(1) Security Dialogue 61, 68.

[17] Ajay Lele, ‘Role of Nanotechnology in Defence’ (2009) 33(2) Strategic Analysis 237.

[18] The US DoD Unmanned Systems Roadmap claims ‘Microelectromechanical systems (MEMS) offer the prospect of radically reducing the size of all modalities of unmanned systems. Fingernail-size turbines and pinhead-size actuators on future, miniature aircraft could make today’s MAV prototypes appear unnecessarily large and bulky. MEMS-enabled UGVs could be deposited like unnoticed insects.’ US Department of Defense, Office of the Secretary of Defense, Unmanned Systems Roadmap 2007-2032 (2007) 45.

[19] Compare Miniature Autonomous Robotic Vehicle (2003) Sandia National Labs <> (accessed 20 April 2011).

[20] David Hambling, Military Builds Robotic Insects (2007) Wired Danger Room Blog <> (accessed 20 April 2011).

[21] The US Special Operations Command has an acquisition program for a Lethal Miniature Aerial Munition System, which according to Defense Update offers ‘the warfighter portable, non-line-of-sight precision strike capability against individual targets, ensuring high precision effect with a very low risk of collateral damage.’ US Air Force to Develop Micro-UAV Killer Drones for the Special Operations Command (2011) Defense Update <> (accessed 5 May 2011).

[22] J Storrs Hall calls the concept of nanobots that can self-assemble into any object ‘utility fog’: objects could appear or disappear like a hologram. J Storrs Hall, Nanofuture: What Is Next for Nanotechnology? (Prometheus Books, 2005) 188-193.

[23] John Markoff, ‘War Machines: Recruiting Robots for Combat’, New York Times (online), 27 November 2010 <> (accessed 28 April 2011).

[24] James Abatti, ‘Small Power: The Role of Small and Micro-UAVs in the Future’ (2005) Air War College 172.

[25] UK Ministry of Defence, The UK Approach to Unmanned Aircraft Systems (2011), 3-9 to 3-11.

[26] Philip Alston, Interim Report of the Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions, 65th sess, UN Doc A/65/321 (23 August 2010).

[27] There are currently about ten ongoing armed conflicts, and none of them is a conventional war in which regular forces fight each other.

[28] Michael Gross, Moral Dilemmas of Modern War: Torture, Assassination, and Blackmail in an Age of Asymmetric Conflict (Cambridge University Press, 2010) 112.

[29] Daniel Byman, ‘Do Targeted Killings Work?’ (2006) 85(2) Foreign Affairs 95, 103-105.

[30] George Crawford, Manhunting: Reversing the Polarity of Warfare (Publish America, 2008) 27-29.

[31] Nils Melzer, Targeted Killing in International Law (Oxford University Press, 2008); Avery Plaw, Targeting Terrorists: A License to Kill? (Ashgate, 2008); Kenneth Anderson, ‘Targeted Killing in U.S. Counterterrorism Strategy and Law’ (2009) Working Paper of the Series Counterterrorism and American Statutory Law.

[32] Alston, above n 26, 10-11.

[33] Gogarty and Hagger, above n 1, 102.

[34] Noel Sharkey, ‘Death Strikes From the Sky: The Calculus of Proportionality’ (2009) 28(1) IEEE Technology and Society Magazine 16, 17, doi: 10.1109/MTS.2009.931865.

[35] J Warrick and P Finn, ‘Amid Outrage Over Civilian Deaths in Pakistan, CIA Turns to Smaller Missiles’, Washington Post (online), 26 April 2010 <>.

[36] According to Additional Protocol I (1977) to the 1949 Geneva Conventions, Art 37: ‘It is prohibited to kill, injure or capture an adversary by resort to perfidy. Acts inviting the confidence of an adversary to lead him to believe that he is entitled to, or is obliged to accord, protection under the rules of international law applicable in armed conflict, with intent to betray that confidence, shall constitute perfidy.’

[37] Leslie Green, The Contemporary Law of Armed Conflict (Manchester University Press, 2000) 144.

[38] Articles 22 and 23 of the Regulations annexed to the Hague Convention IV (1907) prohibit the use of poisons. According to Larry May, this prohibition has little to do with concern about unnecessary suffering; rather, it exists because the use of poisons is dishonourable. See Larry May, War Crimes and Just War (Cambridge University Press, 2007) 137.

[39] Jürgen Altmann, ‘Preventive Arms Control for Uninhabited Military Vehicles’ in R Capurro and M Nagenborg (eds), Ethics and Robotics (IOS Press, 2009) 69.

[40] Robert Sparrow, ‘Predators or Plowshares? Arms Control for Robotic Weapons’ (2009) 28(1) IEEE Technology and Society Magazine 25, 27, doi: 10.1109/MTS.2009.931862.

[41] Gogarty and Hagger, above n 1, 139.

[42] David Crane, ‘Israel Developing “Bionic Hornet” to Target and Kill Enemy Combatants’ (2006) Defense Review <> (accessed 20 April 2011).

[43] Altmann, above n 16, 74.

[44] Storrs Hall, above n 22, 238.