
Journal of Law, Information and Science (JLIS)


Pagallo, Ugo --- "Guns, Ships, and Chauffeurs: The Civilian Use of UV Technology and its Impact on Legal Systems" [2012] JlLawInfoSci 14; (2012) 21(2) Journal of Law, Information and Science 224

Guns, Ships, and Chauffeurs: The Civilian Use of UV Technology and its Impact on Legal Systems



This article focuses on the civilian use of unmanned vehicle (UV) technology and its impact on legal systems. By distinguishing a panoply of UV applications, ranging from unmanned aerial vehicles for policing and border security, through unmanned water-surface and underwater vehicles for remote exploration work and repair, to unmanned ground vehicles for urban transport, the aim is to pinpoint new legal cases affecting international humanitarian law, contractual obligations, strict-liability rules, etc. Special attention is paid to new models of limited responsibility and insurance for distributing risk in extra-contractual relations and in connection with tortious claims. All in all, we should strike a fair balance between people’s claim not to be ruined by the civilian use (rather than, say, the production and design) of UV technology, and the interest of UV counterparties to safely interact with a new generation of autonomous ‘guns,’ ‘smart ships,’ and AI ‘chauffeurs.’

1 Introduction

In their work The Laws of Man over Vehicles Unmanned,[1] Brendan Gogarty and Meredith Hagger examine how the civilian use of Unmanned Vehicle (UV) technology may affect legal frameworks.[2] Although ‘the technology is currently less prominent in the civilian’ than the military sector, two exceptions already exist: agricultural UVs and unmanned vehicles employed in undersea operations (UUVs). Moreover, a number of factors, such as inter-agency transfers, increasing international demand, public R&D support and growing access to powerful software and hardware, explain why ‘drone technology’ is rapidly and progressively coming ‘within the reach of public bodies, private companies and even individuals’.[3] This is the case with several UV applications for border security, policing, patrolling and inspection, emergency and hazard management, remote exploration work and repair, urban transport and more. The ‘relative cost savings’ promised by UV technology have indeed ‘excited many commercial operators’,[4] so it is crucial that lawyers assess the regulatory constraints on the ever-growing production and use of this new generation of UVs.

Gogarty and Hagger properly pay attention to current principles and rules of international civil aviation and maritime law, as well as norms on civilian motor traffic: on the one hand, civil authorities around the world have been reluctant to allow UVs legally to share the air, water or ground with commercial traffic. On the other hand, it is likely that advances in UV technology will increasingly compel law-makers to amend relevant provisions of the civilian traffic safety-regime. Even if ‘one would expect that the right body to make such value judgements would be a sovereign legislative body, not a software engineer’,[5] lawyers should be ready to tackle the pressure of an ‘avalanche of demand’ for regulatory review. Moreover, the intersection of UVs with legal matters of tort (in particular negligence) and the question of fault, issues of privacy, and whether UVs may use force against humans will require careful consideration. Specifically, Gogarty and Hagger argue that ‘drone technology is still likely to create some real challenges to those charged with determining liability in tortious claims’.[6] In order to cast further light on the current legal loopholes brought on by the civilian use of UV technology, this paper focuses on three cases.

In Part 2, I consider the use of unmanned aerial vehicles (UAVs) for border security, policing, patrolling and inspection with a focus on UAV ‘guns.’ In Part 3, I examine cases of UUVs and unmanned water-surface vehicles for emergency and hazard management, remote exploration works and repair at sea, ie ‘ships’. Finally, in Part 4, I take into account legal problems concerning the use of unmanned ground vehicles (UGVs) such as ‘chauffeurs.’ The unifying theme relevant to all UVs under consideration in this comment is the way they are affecting (or will affect) aspects of the civilian traffic safety-regime at national and international levels. Furthermore, this new generation of UVs will also impact on key legal notions of international humanitarian law, contractual obligations, and matters of strict-liability. At the end of the day, I think Gogarty and Hagger are right when claiming that the civilian use of UV technology ‘challenges the boundaries and efficacy of existing legal frameworks and raises a range of social and ethical concerns’.[7] Let me examine such challenges in the light of the aforementioned metaphors: those of guns, ships, and chauffeurs.

2 Guns

The first example of the use of UV technology under examination in this comment is provided by UAV applications for border security, policing, patrolling and inspection. These applications raise a number of concerns about whether UVs may use force against humans, eg, fighting against illegal trans-border activities via ‘drone technology.’ Significantly, in their 2010 reports to the UN General Assembly, Philip Alston and Christof Heyns have stressed that current legal provisions are silent on two critical issues: (i) whether certain types of UAVs such as ‘autonomous weapons’ should be considered unlawful, and (ii) the set of parameters and conditions that should regulate the use of these machines. Whilst the analysis of the UN special rapporteurs is centered on how military robotics-technology affects current rules of international humanitarian law (IHL), the civilian use of UAV ‘guns’ concerns rights and safeguards granted by both constitutional and human rights law.

In the case of the European human rights legal framework, which is based on the 1950 Convention for the Protection of Human Rights[8] and the decisions of the European Court of Human Rights (ECHR), it is more than likely that the civilian use of UAV ‘guns’ would be severely restricted by clauses such as Article 2 of the ECHR Convention on the legitimate ‘use of force’. As Philip Alston affirms in his 2010 Report to the UN General Assembly on extrajudicial, summary or arbitrary executions, ‘a missile fired from a drone is no different from any other commonly used weapon, including a gun fired by a soldier or a helicopter or gunship that fires missiles. The critical legal question is the same for each weapon: whether its specific use complies with IHL’[9] and, moreover, whether the civilian use of UV technology abides by the principles and provisions of the ECHR legal framework.[10]

Interestingly, some claim that it is feasible to program UAVs to respect these principles and provisions. According to Ronald Arkin, the aim of R&D in UAVs is to create ‘guns’ that comply with principles of conduct such as strict necessity and humanity, while avoiding psychological problems such as ‘scenario fulfillment’.[11] However, crucial problems persist when embedding legal rules in ‘intelligent machines.’ The formalisation of such rules does not only involve core normative concepts such as validity, obligation, prohibition, or permission. These rules also present highly context-dependent normative concepts such as proportionality and discrimination in the use of force, which are ‘much more complex than Asimov’s laws,’ because provisions in human rights law and constitutional safeguards ‘leave much room for contradictory or vague imperatives, which may result in undesired and unexpected behavior in robots’.[12] In the field of military autonomous machines, it is acknowledged that ‘whether or not robotic weaponry will soon be able to surmount the technical challenge of this moral imperative (at least as well as human soldiers)’, ie, not to harm civilians, ‘remains unknown’.[13] In research sponsored by the US Navy, Lin, Bekey and Abney even admit that ‘we may paradoxically need to use the first deaths to determine the level of risk’.[14] On this basis, today’s state-of-the-art technology and current legal frameworks suggest we should discipline the civilian use of autonomous UAV ‘guns’ according to the principle of precaution.[15]

In the foreseeable future, UV technology will not be sufficiently developed to program machines to grasp what ‘is necessary in a democratic society in the interests of national security’.[16] A more urgent threat is presented by the use of such machines regardless of previous testing: in this context, focus should be on the civil (rather than political and military) restraints on the legal use of UAV ‘guns.’ Although the deployment of drone technology would be severely restricted by constitutional and human rights law (eg, the aforementioned case law of the ECHR), it does not follow that a number of drone applications for border security, policing, patrolling, etc, are illegitimate. For example, consider the non-lethal engagement of suspects, arrests by drones, monitoring operations and UAVs specifically designed for non-military surveillance, eg, a tiny aerial device for patrolling atomic plants, banks, or even private villas. In most of these cases, the civilian use of UV technology raises problems of human responsibility concerning, say, contractual obligations, rather than the protection of human rights, constitutional principles or matters of military and criminal accountability. As a result, legal issues concerning UV technology will remain focused on the technical accuracy of the means, rather than the legitimacy of the goal to be achieved through such devices. Let me clarify the difference with the second example of UVs as ‘ships,’ that is, unmanned water-surface vehicles.

3 Ships

The second case related to the (legal) use of UV technology is offered by water-surface and UUV applications such as in remote exploration work and repairs of pipelines, oil rigs, and so on.[17] Among UV devices, this is one of the most developed fields: some have even spoken of the ‘golden age’ of UUV technology that ‘occurred more than a decade before the UAV revolution’.[18] Whilst development in UUVs and the increase of their use in the civil sector is likely to force us to rethink many aspects of today’s legal framework in maritime law (eg, the 1972 IMO COLREGs Convention),[19] a new generation of cases involving contractual obligations will emerge as well. Focus should be on whether, from a legal viewpoint, UUVs work within a given set of instructions, so as to achieve specific goals, eg, to undertake repairs to oil rigs in the Caribbean Sea. There are UUVs that ‘autonomously’ undertake such work by preventing damage, alerting controllers, and so on.

In cases concerning the production and use of such ‘ships’, legal issues will mostly be entwined with conditions, terms, and clauses that depend both on the voluntary agreement between private individuals that a court will enforce, and the commercial nature of the agreement. Hence, lawyers will have to reflect on whether software and hardware developers, manufacturers or system engineers should be held responsible for the design and production of UUVs: the first step is to ascertain the quality of the vehicle, that is, if it has been properly programmed and constructed in order to attain a given class of results.

As shown by the field of military robotics technology, problems of contractual responsibility are at stake any time producers deny liability.[20] In the case of UUVs, it is thus crucial to discern matters of contractual responsibility by distinguishing the goal a specific UUV is ‘autonomously’ pursuing. Although ‘determining fault in complex software and hardware is already difficult’,[21] complex software and hardware applications for some artificial agents, eg, the da Vinci surgical robot, may raise engineering problems that scholars routinely address as part of their everyday research and development work. By restricting the range of possible uses of a given artifact (eg, controlling the settings in operating rooms, as in the case of da Vinci surgical robots), we can determine the reliability of a new generation of autonomous machines such as today’s UV ‘ships’. On the basis of how likely it is for an event to occur, what consequences it entails and its costs, work on da Vinci robots[22] shows that only 9 out of 350 interventions (2.6%) could not be completed due to device malfunctions. Likewise, Andonian et al[23] claim that only 4.8% of the malfunctions that occurred in a New York urology institute from 2000 to 2007 were related to patient injury.

These statistics suggest a spectrum of possible civilian uses of UV technology. Contrary to contractual obligations for the design and production of UAV ‘guns’, it seems that UUV ‘ships’ that autonomously repair oil rigs in the ocean do not really affect today’s conceptual frameworks and legal systems. While the use of autonomous lethal weapons is currently challenging international humanitarian law,[24] the legitimacy of other automated devices such as UV ‘ships’ can be grasped by lawyers using the same concepts developed for previous technological innovations, that is, in terms of the probability of events, their consequences and costs.

At the other end of the spectrum, however, all UV applications involve ‘third parties.’ Along with the civilian use of UAV ‘guns’, matters of extra-contractual responsibility and issues of faultless or strict-liability concern the use of autonomous UUVs as well, eg, fatalities on oil rigs due to malfunctions of ‘intelligent ships.’ From this perspective, even reliable applications of UV technology such as some types of UUVs may trigger a number of novel and tricky questions, when defining obligations between private persons imposed by the government so as to compensate damage done by wrongdoing. Besides the panoply of strict contractual obligations, there are questions of tortious liability: ‘how will fault be determined when a human and computer are sharing the reigns of a vehicle under traffic legislation? Indeed, who will be at fault if the vehicle has an accident when it is clear only the computer AI was in control?’[25]

Following the analysis of The Laws of Man over Vehicles Unmanned, let me examine the third case of UV technology, that is, UGVs. A new generation of artificial ‘chauffeurs’ sheds further light on a novel aspect of legal responsibility for the production and use of UV technology.

4 Chauffeurs

From a legal point of view, unmanned ground vehicles offer some of the most challenging applications of UV technology. Whether or not future UGVs will need driving licenses, or special licenses, etc, UV ‘cars’ and ‘chauffeurs’ allow us to further understand the new legal issues raised by the civilian use of UV ‘guns’ and ‘ships.’ To start with, the complexity of the environment that designers and producers have to address increases the uncertainty and unpredictability of UGVs automatically driving on the freeways. As a matter of risk, these UGVs are more similar to UV ‘guns’ than ‘ships’ and may even require that we apply the principle of precaution.[26] However, contrary to the use of UV ‘guns’, the risks of employing UV ‘cars’ mostly concern problems of extra-contractual responsibility and strict liability, rather than human rights law and constitutional safeguards. Whilst proponents of UV technology ask for ‘a major review and clarification of existing civilian traffic safety regimes, and even the creation of a specific regulatory system for UVs’,[27] it is important to distinguish three different kinds of liability when designing, producing and employing UGV ‘chauffeurs’. Extra-contractual responsibility may in fact depend on intentional torts, negligence-related tortious liability or faultless liability (strict liability).

There is liability for an intentional tort when a person has voluntarily performed a wrongful action. Negligence-related liability comes from a lack of due care, when the ‘reasonable’ person fails to guard against ‘foreseeable’ harm. Strict liability is established (eg, liability for defective products) when there is no illicit or culpable behavior but, say, a lack of information about certain features of the artifact. In the case of UV ‘cars’, the difficult part of the legal framework consists in deciding where to cut the chain of responsibility and how to apportion liability in the case of contributory negligence. Significantly, some speak of a ‘failure of causation’ due to the impossibility of attributing responsibility on the grounds of ‘reasonable foreseeability’, since it would be hard to predict what types of harm may supervene.[28] A traditional option is to make insurance compulsory for UGVs, as has been done in most legal systems with the former generation of ‘cars’.

Whether the insurance model illustrated by Curtis Karnow[29] or the authentication model of Andrew Katz[30] is adopted, the aim should be to avert the risk that people think twice before producing and using UGVs because they would be liable regardless of their intent or fault. Thus, in order to prevent unwise strict-liability policies in the field, eg, for UGV ‘chauffeurs’ as a new kind of legal employee, we should aim to strike a fair balance between people’s desire not to be ruined by producing and using UV technology, and the interest of others to be protected when interacting or transacting with such UGV ‘cars.’ Work on multi-agent systems and distributed ethics, involving both human and artificial agents,[31] suggests law-makers should endorse forms of proto-limited responsibility such as the ‘peculium’ in ancient Roman law.[32]

Consider a form of distributed responsibility (both legal and moral) such as in UGV ‘car sharing’. By adopting insurance mechanisms and determining that the liability of designers, producers, and even users of this technology should be limited to the very value of the artifact’s ‘peculium’ — that is, in the terms of the Digest of Justinian, ‘the sum of money or property granted by the head of the household to a slave or son-in-power’ — the aim is to distribute risk and avert regulatory bases for strict liability, as occurs when employers are held responsible for any illegal action that their employees engage in whilst at work. An effective way to tackle tricky notions of ‘due care’, ‘reasonable person’, and ‘foreseeable harm’ for a ‘human and computer sharing the reigns of a vehicle under traffic legislation’,[33] is offered by the forms of limited responsibility and insurance set up by lawyers in ancient Roman law. As an alternative to strict-liability policies, granting a sum of money as the peculium for autonomous machines, eg, accountable AI car sharing ‘chauffeurs’, would solve a number of practical issues when acting and transacting with them, since the peculium guarantees that obligations would be met. However, even such solutions have their limits.[34]

Imagine a smart UGV car as an expert system that will gain knowledge and skills from its own ‘decisions’, while learning from the features of the environment and from the living beings who inhabit it. This ‘chauffeur’ will respond to stimuli by changing the values of its own properties and, what is more, it will modify these states without external stimuli, while improving the rules through which those very properties change. The result is that the same model of ‘car’ will behave quite differently after a few days or weeks, depending on the ways we treat our UGV machine, that is, my ‘chauffeur’. In the event the ‘car’ causes harm to someone, who is responsible?

Remarkably, in the field of robotics, some claim we should frame our (legal) relationship with a novel generation of autonomous machines such as UGV ‘chauffeurs’ as we do with animals, rather than tin machines or smart fridges.[35] Regardless of whether they are natural, eg, animals, or artificial, eg, robots, we would be dealing with agents that are interactive, autonomous, and adaptable,[36] thereby forcing us to co-evolve. Although, from a legal viewpoint, parallelisms between animals and artificial agents may be misleading, I concede that some metaphorical convergences between animals, robots and — why not? — UGV ‘chauffeurs’ are fruitful. On the one hand, the parallelism between animals and artificial agents has its limits because, most of the time, legal systems hold people strictly liable for the behavior of their animals. As noted, this policy seems unwise for the development of, and further research in, UGV technology. On the other hand, if we consider how legal systems provide for limits to such faultless liability, as typically happens when owners of animals prove that a fortuitous event occurred, the parallelism casts further light on the event of UGV ‘chauffeurs’ causing harm to someone on the highway. Should owners be excused from responsibility when a fortuitous event happens? Or should liability be denied when the harm was inevitable?

5 Conclusion

The civilian use of UV technology is rapidly impacting on today’s legal framework. Three ideal-typical cases, namely UAV applications for national security and police functions (ie, ‘guns’), UUVs and surface vehicles for rescue operations (ie, ‘ships’), and finally UGVs for smart ground transport (ie, ‘chauffeurs’), raise a new generation of problems: the conditions for employing autonomous weapons under international humanitarian law, amendments to current provisions of civilian traffic safety-regimes, different strict-liability policies and burdens of proof in order to establish responsibility before the law, etc. In dealing with the civilian use of UV technology, we should be mindful of the spectrum of its daily applications.

At one end of the spectrum, precaution should be required in accordance with today’s state-of-the-art technology and legal frameworks in both constitutional and human rights law.[37] The principle especially applies to a number of UAVs used for national security and police functions, besides those UGVs used in the field of public transport. However, at the other end of the spectrum, there is a class of UV applications where the ‘principle of openness’ should prevail over the principle of precaution, eg, UUV ‘ships’ for security and rescue operations. The burden of proof should, in other words, fall on those who want to prevent scientists and producers from taking action, because the risks and unpredictability of these UV machines are already under reasonable control. A 2007 report claimed that out of 135 Predator missions, 50 were lost and 34 had serious accidents.[38] However, figures for several UV applications such as rescue operations, ground transportation and so forth reveal that the performance of UVs for these applications is closer to the performance statistics of the da Vinci surgical robot than to their Predator ‘cousin’ (otherwise, we would not have had the ‘golden age’ of UUVs in the 1990s).[39] Hence, I think Gogarty and Hagger are right when they claim that UV technology will create an ‘avalanche of demand’ for regulatory review.[40]

More particularly, among the most challenging issues triggered by the civilian use of UV artifacts, I include the ‘anthropological principles’ of civil aviation and maritime law, as well as matters of extra-contractual responsibility and policies on strict-liability rules for ground regulation, eg, whether humans might evade responsibility when they prove that a fortuitous event occurred on the highway, or that they could not prevent the plan executed by the autonomous machine, etc. As stated above, we should strike a fair balance between the desire of producers and owners of UV artifacts not to be ruined by the ‘decisions’ of these machines, and the expectations of UV counterparties (including other UVs) to safely interact with UAV police, UUV rescuers, and UGV smart cars.

Although a number of relevant questions are still open, the time is ripe for ‘value judgments’.[41] While strict liability rules seem necessary in the field of UAV ‘guns’, models of limited responsibility and insurance policies demonstrate that there is a wiser way to approach the civilian use of today’s UUV ‘ships’ and the new generation of extremely sophisticated UGV ‘cars’. The more they become ‘intelligent machines’ that interact with humans on a daily basis, the more new ways of distributing risk are needed through models of legal accountability that seem, as such, irreducible to an aggregation of human beings as the only relevant source of UV action, eg, the ‘peculium’ of AI drivers in the car sharing sector. By averting any legislation that might prevent the use of UVs due to their unpredictability and the consequent excessive burden on the owner of UVs (rather than, say, on the producers and designers), this twofold regime of liability shows a way to strike a fair balance between openness and precaution.

[*] Law School, University of Torino, via s. Ottavio 54, 10124 Turin, Italy

[1] Brendan Gogarty and Meredith Hagger, ‘The Laws of Man over Vehicles Unmanned: The Legal Response to Robotic Revolution on Sea, Land and Air’ (2008) 19 Journal of Law, Information and Science 73.

[2] Ibid 103-124.

[3] Ibid 105.

[4] Ibid 110.

[5] Ibid 121.

[6] Ibid 124.

[7] Ibid 73.

[8] Convention for the Protection of Human Rights and Fundamental Freedoms, opened for signature 4 November 1950, 213 UNTS 221 (entered into force 3 September 1953) (‘ECHR Convention’).

[9] Philip Alston, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, United Nations General Assembly, Human Rights Council, A/HRC/14/24/Add.6 (28 May 2010).

[10] For the restrictions set up by the ECHR jurisprudence see Gillow v UK (1986) 11 EHRR 335; Leander v Sweden [1987] ECHR 4; (1987) 9 EHRR 433; Klass et al v Germany (1978) 28 Eur Court HR (ser A) 17, etc.

[11] R C Arkin, ‘Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture’ (Report No GIT-GVU-07-11, Georgia Institute of Technology’s GVU Center, 2007).

[12] P Lin, G Bekey, and K Abney, ‘Autonomous Military Robotics: Risk, Ethics, and Design’ (Report, Ethics and Emerging Sciences Group, California Polytechnic State University, 20 December 2008).

[13] Ibid.

[14] Ibid.

[15] G Veruggio, ‘Euron Roboethics Roadmap’ (Proceedings of the Euron Roboethics Atelier, Genoa, Italy, 27 February – 3 March 2006).

[16] ECHR Convention, opened for signature 4 November 1950, 213 UNTS 221 (entered into force 3 September 1953) art. 8 (on people’s privacy).

[17] Gogarty and Hagger, above n 1, 108-109.

[18] Ibid 104.

[19] Ibid 114.

[20] For instance, it was alleged (but ultimately shown to be unsubstantiated) that Foster-Miller Special Weapons Observation Reconnaissance Detection System (SWORDS) units employed by the US Army experienced unintended movements: ‘The Inside Story of the SWORDS Armed Robot “Pullout” in Iraq: Update’ (1 October 2009) Popular Mechanics.

[21] Gogarty and Hagger, above n 1, 123.

[22] L S Borden, P K Kozlowski, C R Porter, and J M Corman, ‘Mechanical failure rate of da Vinci robot system’ (2007) 14(2) Canadian Journal of Urology 3499.

[23] S Andonian, Z Okeke, A Rastinehad, B A Vanderkrink and L Richstone, ‘Device failures associated with patient injuries during robot-assisted laparoscopic surgeries: a comprehensive review of FDA MAUDE database’ (2008) 15(1) Canadian Journal of Urology 3912.

[24] Alston, above n 9.

[25] Gogarty and Hagger, above n 1, 120-121.

[26] Veruggio, above n 15.

[27] Gogarty and Hagger, above n 1, 121.

[28] C E A Karnow, ‘Liability for distributed artificial intelligence’ (1996) 11 Berkeley Technology Law Journal 147.

[29] Ibid.

[30] A Katz, ‘Intelligent Agents and Internet Commerce in Ancient Rome’ (2008) Society for Computers and Law, accessed 15 August 2010.

[31] L Floridi, The ethics of distributed responsibility (forthcoming).

[32] Katz, above n 30; U Pagallo, ‘Robotrust and legal responsibility’ (2010) 23 Knowledge, Technology & Policy 367.

[33] Gogarty and Hagger, above n 1, 120.

[34] U Pagallo, ‘Killers, Fridges, and Slaves: A Legal Journey in Robotics’ (2011) AI & Society, DOI: 10.1007/s00146-010-0316-0.

[35] D McFarland, Guilty Robots, Happy Dogs: The Question of Alien Minds (Oxford University Press, 2008).

[36] C Allen, G Varner, and J Zinser, ‘Prolegomena to any future artificial moral agent’ (2000) 12 Journal of Experimental and Theoretical Artificial Intelligence 251.

[37] C E Foster, Science and the Precautionary Principle in International Courts and Tribunals (Cambridge University Press, 2011).

[38] N Stafford, ‘Spy in the sky’ (2007) 445(7130) Nature 808.

[39] Gogarty and Hagger, above n 1.

[40] Ibid.

[41] Ibid.
