Journal of Law, Information and Science
Unmanned Vehicles: Subordination to Criminal Law under the Modern Concept of Criminal Liability
COMMENT BY GABRIEL HALLEVY[*]
This commentary is restricted to the question of the criminal liability of the autonomous unmanned vehicle. Autonomous unmanned vehicles are based on artificial intelligence technology, and they function as artificial intelligence entities. If the use of these entities can result in tortious conduct, then it can also result in the commission of criminal offenses. The ultimate legal question is, therefore, who is to be held responsible for offenses committed by unmanned vehicles based on artificial intelligence technology (hereinafter ‘AIUV’). Modern concepts of criminal liability suggest that criminal liability models are relevant to artificial intelligence entities. This commentary argues that these criminal liability models are relevant and available to the AIUV.
In order to impose criminal liability upon a person, two main elements must exist. The first is the factual element (actus reus), while the other is the mental element (mens rea). The actus reus requirement is expressed mainly by acts or omissions. Sometimes, other external elements are required in addition to conduct, such as the specific results of that conduct and the specific circumstances underlying the conduct.
The mens rea requirement has various levels of mental elements. The highest level is expressed by knowledge, and sometimes that knowledge is accompanied by a requirement of intent or specific intention. Lower levels are expressed by negligence (a reasonable person should have known) or by strict liability offenses.
These are the only criteria or capabilities which are required in order to impose criminal liability, not only on humans, but on any other kind of entity, including corporations and AI entities. Any entity might possess further capabilities, such as creativity, for example. However, in order to impose criminal liability, the existence of actus reus and mens rea of the specific offense is adequate. No further capabilities are required for the imposition of criminal liability. These requirements may be fulfilled by AIUV through three possible models of liability: the ‘Perpetration-by-Another’ liability model; the ‘Natural-Probable-Consequence’ liability model; and the ‘Direct’ liability model. These models are discussed below.
The Perpetration-by-Another liability model does not consider the AIUV as possessing any human attributes. The AIUV is considered an innocent agent. Accordingly, under this legal viewpoint, a machine is a machine, and it is never human. However, one cannot ignore an AIUV’s capabilities. Pursuant to this model, these capabilities are insufficient to deem the AIUV a perpetrator of an offense. These capabilities resemble the parallel capabilities of a mentally limited person, such as a child, or of a person who is mentally incompetent or who lacks a criminal state of mind.
When an innocent agent physically commits an offense, the person who sent or activated the innocent agent is criminally responsible as a perpetrator-by-another. In such cases, the innocent agent is regarded as a mere instrument, albeit a sophisticated instrument, while the party orchestrating the offense (the perpetrator-by-another) is the real perpetrator as a principal in the first degree and is held accountable for the conduct of the innocent agent. The perpetrator-by-another’s liability is determined on the basis of the innocent agent’s conduct and the perpetrator-by-another’s own mental state.
In such situations, the derivative question relative to AIUV is: who is the perpetrator-by-another? There are two candidates: the first is the programmer of the AI software and the second is the user, or the end-user. A programmer of AI software might design a program in order to commit offenses via the AIUV. For example, a programmer designs the operating software of an AIUV. The AIUV is intended to be placed on the road, and its software is designed to kill innocent people by running over them. The AIUV may commit the homicide, but the programmer is deemed the perpetrator.
The other person who might be considered the perpetrator-by-another is the user of the AIUV. The user did not program the software, but he or she uses the AIUV, including its software, for his or her own benefit. For example, a user purchases an AIUV, which is designed to execute any order given by its master. The specific user is identified by the AIUV as that master, and the master orders the AIUV to run over any trespasser on his or her farm. The AIUV executes the order exactly as ordered. This is no different than a person who orders their dog to attack a trespasser. The AIUV committed the assault, but the user is deemed the perpetrator.
In both scenarios, the actual offense was committed by the AIUV. The programmer or the user did not perform any action conforming to the definition of a specific offense; therefore, neither the programmer nor the user meets the actus reus requirement of the specific offense. The Perpetration-by-Another liability model considers the action committed by the AIUV as if it had been the programmer’s or the user’s action. The legal basis for that is the instrumental usage of the AIUV as an innocent agent. No mental attribute of the AIUV is required for the imposition of criminal liability under this model.
When programmers or users use an AIUV instrumentally, the commission of an offense by the AIUV is attributed to them. The mental element required by the specific offense already exists in their minds. Both the programmer and the user had criminal intent when they ordered the commission of the offense, even though the offense was actually committed through an AIUV. When an end-user makes instrumental usage of an innocent agent to commit a crime, the end-user is deemed the perpetrator.
This liability model does not attribute any mental capability, or any human mental capability, to the AIUV. According to this model, there is no legal difference between an AIUV and a screwdriver or an animal. When a burglar uses a screwdriver in order to open up a window, he or she uses the screwdriver instrumentally, and the screwdriver is not criminally responsible. The screwdriver’s ‘action’ is, in fact, the burglar’s. This is the same legal situation when using an animal instrumentally. An assault committed by a dog by order of its master is, in fact, an assault committed by the master.
This kind of legal model might be suitable for two types of scenarios. The first scenario is using an AIUV to commit an offense without using its advanced capabilities, which enable it to ‘think’, or indeed to think without quotation marks (ie, to decide to commit an offense based on its own accumulated experience or knowledge). The second scenario is using a very old version of an AIUV, which lacks the advanced capabilities of the modern AIUV. In both scenarios, the use of the AIUV is instrumental. Still, it is usage of an AIUV, due to its ability to execute an order to commit an offense. A screwdriver cannot execute such an order; a dog can. A dog cannot execute complicated orders; an AIUV can.
The Perpetration-by-Another liability model is not suitable when an AIUV decides to commit an offense based on its own accumulated experience or knowledge. This model is not suitable when the software of the AIUV was not designed to commit the specific offense, but the offense was committed by the AIUV nonetheless. This model is also not suitable when the specific AIUV functions not as an innocent agent, but as a semi-innocent agent.
However, the Perpetration-by-Another liability model might be suitable when a programmer or user makes instrumental usage of an AIUV, but without using the AIUV’s advanced capabilities. The legal result of applying this model is that the programmer and the user are fully criminally responsible for the specific offense committed, while the AIUV has no criminal liability whatsoever.
The Natural-Probable-Consequence liability model assumes deep involvement of the programmers or users in the AIUV’s daily activities, but without any intention of committing any offense via the AIUV. For example, during the execution of its daily tasks, an AIUV commits an offense. The programmers or users had no knowledge of the offense until it had already been committed; they did not plan to commit any offense, and they did not participate in any part of the commission of that specific offense.
One example of such a scenario is an AIUV designed to function as an automatic driver together with a human driver. The AIUV is programmed to protect the mission as part of its duty to drive the vehicle. During the drive, the human driver activates the automatic driver (the AIUV), and the program is initialised. At some point after activation of the automatic driver, the human driver sees an approaching traffic jam and tries to abort the mission and turn back. The AIUV deems the human driver’s action a threat to the mission and takes action in order to eliminate that threat. As a result, the human driver is killed by the AIUV’s actions. Obviously, the programmer had not intended to kill anyone, least of all the human driver, but nonetheless, the human driver was killed as a result of the AIUV’s actions, and those actions were carried out according to the program.
In this example, the Perpetration-by-Another model is not legally suitable. The Perpetration-by-Another model assumes mens rea, the criminal intent of the programmers or users to commit an offense via the instrumental use of some of the AIUV’s capabilities. However, this is not the legal situation in this case. Rather, in this situation the programmers or users had no knowledge of the committed offense; they had not planned it, and had not intended to commit the offense using the AIUV. For such cases, the second model might create a suitable legal response. This model is based upon the ability of the programmers or users to foresee the potential commission of offenses.
According to the second model, a person might be held accountable for an offense if that offense is a natural and probable consequence of that person’s conduct. Originally, the Natural-Probable-Consequence liability model was used to impose criminal liability upon accomplices when one of them committed an offense that had not been planned by all of them and was not part of the conspiracy. The established rule prescribed by courts and commentators is that accomplice liability extends to acts of a perpetrator that were a ‘natural and probable consequence’ of a criminal scheme that the accomplice encouraged or aided. The Natural-Probable-Consequence liability model has been widely accepted in accomplice liability statutes and recodifications.
The Natural-Probable-Consequence liability model seems to be legally suitable for situations in which an AIUV committed an offense, while the programmer or user had no knowledge of it, had not intended it, and had not participated in it. The Natural-Probable-Consequence liability model requires the programmer or user to be in nothing more than the mental state required for negligence. Programmers or users are not required to know about any forthcoming commission of an offense as a result of their activity, but are required to know that such an offense is a natural, probable consequence of their actions.
In a criminal context, a negligent person has no knowledge of the offense; however, a reasonable person in the same situation would have known about the offense, because it is a natural and probable consequence of the situation. The programmers or users of an AIUV, who should have known about the probability of the forthcoming commission of the specific offense, are criminally responsible for the specific offense, even though they did not actually know about it. This is the fundamental legal basis for criminal liability in negligence cases. Negligence is, in fact, an omission of awareness or knowledge. The negligent person omitted knowledge, not acts.
The Natural-Probable-Consequence liability model would permit liability to be predicated upon negligence, even when the specific offense requires a different state of mind. This approach has been accepted in modern criminal law, and it significantly reduces the mental element requirements in these situations, since the relevant accomplice did not actually know about the offense, although a reasonable person could have predicted it. Negligence is suitable for this kind of situation. This is not valid in relation to the person who personally committed the offense; rather, it is considered valid in relation to a person who was not the actual perpetrator of the offense, but one of its intellectual perpetrators. Reasonable programmers or users should have foreseen the offense and prevented it from being committed by the AIUV.
However, the legal results of applying the Natural-Probable-Consequence liability model to the programmer or user differ in two different types of factual scenarios. The first type of scenario is when the programmers or users were negligent while programming or using the AIUV and had no criminal intent to commit any offense. The second type of scenario is when the programmers or users programmed or used the AIUV knowingly and willfully in order to commit one offense via the AIUV, but the AIUV deviated from the plan and committed some other offense, in addition to or instead of the planned offense.
The first scenario is a pure case of negligence. The programmers or users acted or omitted negligently; therefore, they should be held accountable for the offense of negligence, if there is such an offense in the relevant legal system. Thus, as in the above example, where a programmer of an automatic driver negligently programmed it to defend its mission with no restrictions on the taking of human life, the programmer is negligent and responsible for the homicide of the human driver. Consequently, if negligent homicide exists as a specific offense in the relevant legal system, negligent homicide is the most severe offense for which the programmer may be held accountable, as opposed to manslaughter or murder, which require at least knowledge or intent.
The second scenario resembles the basic idea of the Natural-Probable-Consequence liability model in accomplice liability cases. The dangerousness of the very association or conspiracy, the aim of which is to commit an offense, is the legal reason for more severe accountability to be imposed upon the co-conspirators. In such cases, criminal negligence liability alone is insufficient. The social danger posed by such a situation far exceeds that of the situations in which negligence has been accepted as a sufficient basis for liability, in light of the purposes of punishment (retribution, deterrence, rehabilitation and incapacitation).
As a result, according to the Natural-Probable-Consequence liability model, if the programmers or users knowingly and willfully used the AIUV to commit an offense and if the AIUV deviated from the plan by committing another offense, in addition to or instead of the planned offense, the programmers or users should be held accountable for the additional offense, as if it had been committed knowingly and willfully.
However, the question still remains: what is the criminal liability of the AIUV itself when the Natural-Probable-Consequence liability model is applied? In fact, there are two possible outcomes. If the AIUV acted as an innocent agent, without knowing anything about the criminal prohibition, it is not held criminally accountable for the offense it committed. Under such circumstances, the actions of the AIUV were no different from its actions under the first model (the Perpetration-by-Another liability model). However, if the AIUV did not act merely as an innocent agent, then, in addition to the criminal liability of the programmer or user pursuant to the Natural-Probable-Consequence liability model, the AIUV itself should be held criminally responsible for the specific offense directly. The direct liability model of AIUV is the third model, as described below.
The third model, the direct liability model, does not assume any dependence of the AIUV on a specific programmer or user; rather, it focuses on the AIUV itself. As discussed above, criminal liability for a specific offense is mainly comprised of the factual element (actus reus) and the mental element (mens rea) of that offense. Any person attributed with both elements of the specific offense is held criminally accountable for that specific offense. No other criteria are required in order to impose criminal liability. A person might possess further capabilities, but, in order to impose criminal liability, the existence of the factual element and the mental element required to impose liability for the specific offense is quite enough.
In order to impose criminal liability on any kind of entity, the existence of these elements in the specific entity must be proven. Generally, when it has been proven that a person committed the offense in question with knowledge or intent, that person is held criminally responsible for that offense. The criminal liability of AIUV depends upon the following questions: How can these entities fulfill the requirements of criminal liability? Do AIUVs differ from humans in this context?
An AI algorithm might have numerous features and qualifications far exceeding those of an average human, such as higher velocity of data processing (thinking), ability to take into consideration many more factors, etc. Nevertheless, such features or qualifications are not required in order to impose criminal liability. They do not negate criminal liability, but they are not required for the imposition of the criminal liability. When a human or corporation fulfills the requirements of both the factual element and the mental element, criminal liability is imposed. If an AIUV is capable of fulfilling the requirements of both the factual element and the mental element, and, in fact, it does fulfil them, then there is nothing to prevent criminal liability from being imposed on that AIUV.
Generally, the fulfillment of the factual element requirement of an offense is easily attributed to an AIUV. As long as an AIUV controls a mechanical or other mechanism that moves its parts, any act might be considered as performed by the AIUV. Thus, when an AIUV activates its electric or other mechanical system and moves it, this might be considered an act, if the specific offense involves such an act. For example, in the specific offense of assault, an electric or mechanical movement of an AIUV that hits a person standing nearby fulfills the actus reus requirement of the offense of assault.
The attribution of the mental element of offenses to an AIUV is the real legal challenge in most cases. The attribution of the mental element differs from one AI technology to another. Most cognitive capabilities developed in modern AI technology, such as creativity, are immaterial to the question of the imposition of criminal liability. The only cognitive capability required for the imposition of criminal liability is embodied within the mental element requirement (mens rea). Creativity is a human feature that some animals also have, but it is not a requirement for imposing criminal liability; even the most uncreative persons may be held criminally responsible. The sole mental requirements needed in order to impose criminal liability are knowledge, intent, negligence, etc, as required by the specific offense and under the general theory of criminal law. As a result, an AIUV does not have to originate the idea of committing the specific offense; in order to be criminally responsible, it has only to commit the specific offense with knowledge of the factual elements of that offense.
Knowledge is defined as sensory reception of factual data and the understanding of that data. Most AI systems are well equipped for such reception. Sensory receptors of sights, voices, physical contact, touch, etc, are common in most AI systems. These receptors transfer the factual data received to central processing units that analyze the data. The process of analysis in AI systems parallels that of human understanding. The human brain understands the data received by eyes, ears, hands, etc, by analysing that data. Advanced AI algorithms are trying to imitate human cognitive processes. These processes are not so different.
Specific intent is the strongest of the mental element requirements. Specific intent is the existence of a purpose or an aim that a factual event will occur. The specific intent required to establish liability for murder is a purpose or an aim that a certain person will die. As a result of the existence of such intent, the perpetrator of the offense commits the offense; ie, he or she performs the factual element of the specific offense. This situation is not unique to humans. Some AIUVs might be programmed to figure out a purpose or an aim by themselves and to take actions in order to achieve it, and some advanced AIUVs may figure out such a purpose entirely on their own and take the relevant actions to achieve it. In either case, this might be considered specific intent, since the AIUV figured out the purpose by itself and figured out by itself the relevant actions to achieve that purpose.
One might assert that many crimes are committed as a result of strong emotions or feelings that cannot be imitated by AI software, not even by the most advanced software. Such feelings are love, affection, hatred, jealousy, etc. This might be correct in relation to AI technology of the beginning of the twenty-first century. Even so, such feelings are rarely required in specific offenses. Most specific offenses are satisfied by knowledge of the existence of the external element. Few offenses require specific intent in addition to knowledge. Almost all other offenses are satisfied by much less than that (negligence, recklessness, strict liability). Perhaps in the very few specific offenses that do require certain feelings (eg, crimes of racism or hate), criminal liability cannot be imposed upon an AIUV, which has no such feelings; but in any other specific offense, a lack of certain feelings is not a barrier to imposing criminal liability.
If a person fulfills the requirements of both the factual element and the mental element of a specific offense, then that person is held criminally responsible. Likewise, when an AIUV fulfills all elements of a specific offense, both factual and mental, there is no reason to exempt it from criminal liability for that offense. The criminal liability of an AIUV does not replace the criminal liability of the programmers or the users, if criminal liability is imposed on the programmers and/or users by any other legal path. Criminal liability is not to be divided, but rather, added. The criminal liability of the AIUV is imposed in addition to the criminal liability of the human programmer or user.
The criminal liability of an AIUV is not dependent upon the criminal liability of the programmer or user of that AIUV. As a result, if the specific AIUV was programmed or used by another AIUV, the criminal liability of the programmed or used AIUV is not influenced by that fact. The programmed or used AIUV shall be held criminally accountable for the specific offense pursuant to the direct liability model, unless it was an innocent agent. In addition, the programmer or user of the AIUV shall be held criminally accountable for that very offense pursuant to one of the three liability models, according to its specific role in the offense. The chain of criminal liability might continue, if more parties are involved, whether they are human or an AIUV.
Not only may positive factual and mental elements be attributed to an AIUV; all relevant negative fault elements are attributable to an AIUV as well. Most of these elements are expressed by the general defenses in criminal law; eg, self-defense, necessity, duress, intoxication, etc. For some of these defenses (justifications), there is no material difference between humans and AIUV, since they relate to a specific situation (in rem), regardless of the identity of the offender. For example, an AIUV serving under the local police force is given an order to block a suspect vehicle, and unbeknownst to the AIUV, this order is illegal. If the executor of an order is unaware, and could not reasonably have become aware, that an otherwise legal action is illegal in the specific instance, the executor of the order is not criminally responsible. In that case, there is no difference whether the executor is human or an AIUV.
For other defenses (excuses and exemptions) some applications should be adjusted. For example, the intoxication defense is applied when the offender is under the physical influence of an intoxicating substance, eg, alcohol, drugs, etc. The influence of alcohol on an AIUV is minor, at most, but the influence of an electronic virus that is infecting the operating system of the AIUV might be considered parallel to the influence of intoxicating substances on humans. Some other factors might be considered as being parallel to insanity or loss of control.
In sum, the criminal liability of an AIUV under the direct liability model is no different from the corresponding criminal liability of a human. In some cases, some adjustments are necessary, but substantively it is the very same criminal liability, based upon the same elements and examined in the same ways.
[*] Ph.D., Associate Professor, Faculty of Law, Ono Academic College. I thank Dr Brendan Gogarty for inviting me to comment on the précis article, and Bruce Newey, the managing editor of the Journal of Law, Information and Science, for the excellent editing.
 Brendan Gogarty and Meredith Hagger, ‘The Laws of Man over Vehicles Unmanned: The Legal Response to Robotic Revolution on Sea, Land and Air’ (2008) 19 Journal of Law, Information and Science 73, 122-124.
 AIUV – Artificial Intelligence Unmanned Vehicles.
 See eg Gabriel Hallevy, ‘Criminal Liability of Artificial Intelligence Entities – From Science Fiction to Legal Social Control’ (2010) 4 Akron Intellectual Property Journal 171; Gabriel Hallevy, ‘“I, Robot - I, Criminal” – When Science Fiction Becomes Reality: Legal Liability of AI Robots Committing Criminal Offenses’ (2010) 22 Syracuse Science and Technology Law Reporter 1.
 W H Hitchler, ‘The Physical Element of Crime’ (1934) 39 Dickinson Law Review 95; Michael S Moore, Act and Crime: The Philosophy of Action and Its Implications for Criminal Law (Oxford University Press, 1993).
 John William Salmond, Salmond on Jurisprudence (Glanville Williams ed, 11th ed, 1957) 505; Glanville L Williams, Criminal Law: The General Part (2nd ed, 1961) § 11; Oliver W Holmes, Jr, The Common Law (1923) 54; Walter Wheeler Cook, ‘Act, Intention, and Motive in the Criminal Law’ (1917) 26 Yale Law Journal 645.
 J Ll J Edwards, ‘The Criminal Degrees of Knowledge’ (1954) 17 Modern Law Review 294; Rollin M Perkins, ‘“Knowledge” as a Mens Rea Requirement’ (1978) 29 Hastings Law Journal 953; United States v Youts, 229 F.3d 1312 (10th Cir, 2000); United States v Spinney, 65 F.3d 231 (1st Cir, 1995); State v Sargent, 594 A.2d 401 (Vt, 1991); State v Wyatt, 482 S.E.2d 147 (W Va, 1996); People v Steinberg, 595 N.E.2d 845 (NY, 1992).
 Jerome Hall, ‘Negligent Behavior Should Be Excluded from Penal Liability’ (1963) 63 Columbia Law Review 632; Robert P Fine and Gary M Cohen, Comment, ‘Is Criminal Negligence a Defensible Basis for Penal Liability?’ (1966) 16 Buffalo Law Review 749.
 Jeremy Horder, ‘Strict Liability, Statutory Construction and the Spirit of Liberty’ (2002) 118 Law Quarterly Review 458; Francis Bowes Sayre, ‘Public Welfare Offenses’ (1933) 33 Columbia Law Review 55; Stuart P Green, ‘Six Senses of Strict Liability: A Plea for Formalism’ in A P Simester (ed) Appraising Strict Liability (Oxford University Press, 2005) 1; A P Simester, ‘Is Strict Liability Always Wrong?’ in A P Simester (ed) Appraising Strict Liability (Oxford University Press, 2005).
 Morrisey v State, 620 A.2d 207 (Del, 1993); Conyers v State, 790 A.2d 15 (Md, 2002); State v Fuller, 552 S.E.2d 282 (S C, 2001); Gallimore v Commonwealth, 436 S.E.2d 421 (Va, 1993).
 Dusenbery v Commonwealth, 263 S.E.2d 392 (Va, 1980).
 United States v Tobon-Builes, 706 F.2d 1092 (11th Cir, 1983); United States v Ruffin, 613 F.2d 408 (2d Cir, 1979); See more in Gabriel Hallevy, ‘Victim’s Complicity in Criminal Law’ (2006) 2 International Journal of Punishment and Sentencing 72.
 See, eg, Regina v Manley (1844) 1 Cox’s Criminal Cases 104; Regina v Cogan [1976] QB 217; Gabriel Hallevy, Theory of Criminal Law (vol 2, 2009) 700-06.
 The AI is used as an instrument and not as a participant, although it uses its features of processing information. See, eg, Cary G Debessonet and George R Cross, ‘An Artificial Intelligence Application in the Law: CCLIPS, A Computer Program that Processes Legal Information’ (1986) 1 High Technology Law Journal 329.
 For some of the modern advanced capabilities of the modern AI entities see generally Donald Michie, ‘The Superarticulacy phenomenon in the context of software manufacture’ in Derek Partridge and Yorick Wilks (eds) The Foundations of Artificial Intelligence (Cambridge University Press, 2006) 411-439.
 Cf Andrew J Wu, ‘From Video Games to Artificial Intelligence: Assigning Copyright Ownership to Works Generated by Increasingly Sophisticated Computer Programs’ (1997) 25 American Intellectual Property Law Association Quarterly Journal 131; with Timothy L Butler, ‘Can a Computer be an Author? Copyright Aspects of Artificial Intelligence’ (1982) 4 Comm/Ent Law Journal 707.
 The programmer or user should not be held criminally responsible for the autonomous actions of the AI if he or she could not have predicted these actions.
 Nicola Lacey and Celia Wells, Reconstructing Criminal Law – Critical Perspectives on Crime and the Criminal Process (Butterworths, 2d ed, 1998) 53.
 United States v Powell, 929 F.2d 724 (DC Cir, 1991).
 William L Clark and William L Marshall, A Treatise on the Law of Crimes (7th ed, 1967) 529; Francis Bowes Sayre, ‘Criminal Liability for the Acts of Another’ (1930) 43 Harvard Law Review 689; People v Prettyman, 926 P.2d 1013 (Cal, 1996); Chance v State, 685 A.2d 351 (Del, 1996); Ingram v United States, 592 A.2d 992 (DC, 1991); Richardson v State, 697 N.E.2d 462 (Ind, 1998); Mitchell v State, 971 P.2d 813 (Nev, 1998); State v Carrasco, 928 P.2d 939 (NM Ct App, 1996); State v Jackson, 976 P.2d 1229 (Wash, 1999).
 State v Kaiser, 918 P.2d 629 (Kan, 1996); United States v Andrews, 75 F.3d 552 (9th Cir, 1996).
 Robert P Fine and Gary M Cohen, Comment, ‘Is Criminal Negligence a Defensible Basis for Penal Liability?’ (1966) 16 Buffalo Law Review 749; Herbert L A Hart, ‘Negligence, Mens Rea and Criminal Liability’ in Anthony Gordon Guest (ed) Oxford Essays in Jurisprudence (Clarendon Press, 1961) 29; Donald Stuart, ‘Mens Rea, Negligence and Attempts’ (1968) Criminal Law Review 647.
 Model Penal Code – Official Draft and Explanatory Notes § 2.06 (1985) 31-32 (hereinafter ‘Model Penal Code’); State v Linscott, 520 A.2d 1067 (Me, 1987).
 Regina v Cunningham [1957] 2 QB 398; Regina v Faulkner (1876) 13 Cox’s Criminal Cases 550; United States v Greer, 467 F.2d 1064 (7th Cir, 1972); People v Cooper, 743 N.E.2d 32 (Ill, 2000); People v Weiss, 9 N.Y.S.2d 1 (NY App Div, 1939); People v Little, 107 P.2d 634 (Cal Dist Ct App, 1941); People v Cabaltero, 87 P.2d 364 (Cal Dist Ct App, 1939); People v Michalow, 128 N.E. 228 (NY, 1920).
Cf, eg, Steven J Frank, ‘Tort Adjudication and the Emergence of Artificial Intelligence Software’ (1987) 21 Suffolk University Law Review 623; Sam N Lehman-Wilzig, ‘Frankenstein Unbound: Towards a Legal Definition of Artificial Intelligence’ (1981) 13 Futures 442; Marguerite E Gerstner, ‘Liability Issues with Artificial Intelligence Software’ (1993) 33 Santa Clara Law Review 239; Richard E Susskind, ‘Expert Systems in Law: A Jurisprudential Approach to Artificial Intelligence and Legal Reasoning’ (1986) 49 Modern Law Review 168.
William James, The Principles of Psychology (1890); Hermann von Helmholtz, The Facts of Perception (1878). In this context knowledge and awareness are identical. See, eg, United States v Youts, 229 F.3d 1312 (10th Cir, 2000); State v Sargent, 594 A.2d 401 (Vt, 1991); United States v Spinney, 65 F.3d 231 (1st Cir, 1995); State v Wyatt, 482 S.E.2d 147 (W Va, 1996); United States v Wert-Ruiz, 228 F.3d 250 (3d Cir, 2000); United States v Jewell, 532 F.2d 697 (9th Cir, 1976); United States v Ladish Malting Co, 135 F.3d 484 (7th Cir, 1998); Model Penal Code, above n 22, § 2.02(2)(b).
 Margaret A Boden, ‘Has AI Helped Psychology?’ in Derek Partridge and Yorick Wilks (eds) The Foundations of Artificial Intelligence (Cambridge University Press, 2006) 108; Partridge, above n 14, 112; David Marr, ‘AI: A Personal View’ in Derek Partridge and Yorick Wilks (eds) The Foundations of Artificial Intelligence (Cambridge University Press, 2006) 97.
Daniel C Dennett, ‘Evolution, Error, and Intentionality’ in Derek Partridge and Yorick Wilks (eds) The Foundations of Artificial Intelligence (Cambridge University Press, 2006) 190; B Chandrasekaran, ‘What Kind of Information Processing is Intelligence?’ in Derek Partridge and Yorick Wilks (eds) The Foundations of Artificial Intelligence (Cambridge University Press, 2006) 14.
Robert Batey, ‘Judicial Exploration of Mens Rea Confusion, at Common Law and Under the Model Penal Code’ (2001) 18 Georgia State University Law Review 341; State v Daniels, 109 So. 2d 896 (La, 1958); Carter v United States, 530 U.S. 255 (2000); United States v Randolph, 93 F.3d 656 (9th Cir, 1996); United States v Torres, 977 F.2d 321 (7th Cir, 1992); Frey v State, 708 So. 2d 918 (Fla, 1998); State v Neuzil, 589 N.W.2d 708 (Iowa, 1999); People v Disimone, 650 N.W.2d 436 (Mich Ct App, 2002); People v Henry, 607 N.W.2d 767 (Mich Ct App, 1999).
 Wayne R LaFave, Criminal Law (Thomson/West, 4th ed, 2003) 244-249.
For intent-to-kill murder, see LaFave, ibid, 733-34.
See, eg, Elizabeth A Boyd et al, ‘“Motivated by Hatred or Prejudice”: Categorization of Hate-Motivated Crimes in Two Police Divisions’ (1996) 30 Law and Society Review 819; ‘Crimes Motivated by Hatred: The Constitutionality and Impact of Hate Crimes Legislation in the United States’ (1995) 1 Syracuse Journal of Legislation and Policy 29.
 John C Smith, Justification and Excuse in the Criminal Law (Stevens, 1989); Anthony M Dillof, ‘Unraveling Unknowing Justification’ (2002) 77 Notre Dame Law Review 1547; Kent Greenawalt, ‘Distinguishing Justifications from Excuses’ (1986) 49 Law and Contemporary Problems 89; Kent Greenawalt, ‘The Perplexing Borders of Justification and Excuse’ (1984) 84 Columbia Law Review 1897; Thomas Morawetz, ‘Reconstructing the Criminal Defenses: The Significance of Justification’ (1986) 77 Journal of Criminal Law & Criminology 277; Paul H Robinson, ‘A Theory of Justification: Societal Harm as a Prerequisite for Criminal Liability’ (1975) 23 UCLA Law Review 266; Paul H Robinson and John M Darley, ‘Testing Competing Theories of Justification’ (1998) 76 North Carolina Law Review 1095.
 Michael A Musmanno, ‘Are Subordinate Officials Penally Responsible for Obeying Superior Orders which Direct Commission of Crime?’ (1963) 67 Dickinson Law Review 221.
 Peter Arenella, ‘Convicting the Morally Blameless: Reassessing the Relationship Between Legal and Moral Accountability’ (1992) 39 UCLA Law Review 1511; Sanford H Kadish, ‘Excusing Crime’ (1987) 75 California Law Review 257; Andrew E Lelling, ‘A Psychological Critique of Character-Based Theories of Criminal Excuse’ (1998) 49 Syracuse Law Review 35.