
University of New South Wales Law Journal Student Series



Tong, Stephanie, '"You Won't Believe What She Does!": An Examination into the Use of Pornographic Deepfakes as a Method of Sexual Abuse and the Legal Protections Available to its Victims' [2022] UNSWLawJlStuS 25; (2022) UNSWLJ Student Series No 22-25


“YOU WON’T BELIEVE WHAT SHE DOES!”: AN EXAMINATION INTO THE USE OF PORNOGRAPHIC DEEPFAKES AS A METHOD OF SEXUAL ABUSE AND THE LEGAL PROTECTIONS AVAILABLE TO ITS VICTIMS

STEPHANIE TONG

I INTRODUCTION

Women have ‘historically been [the] primary targets for malicious adaptations of new technologies,’[1] and this has never been truer than with deepfake technology. Deepfakes are fabricated videos created by superimposing an individual’s face onto the body of another person via Artificial Intelligence (‘AI’) technology. While there are innocent, creative uses of deepfakes, pornographic deepfakes have exploded in popularity, with abusers creating and distributing videos in which victims’ faces have been non-consensually inserted into pornographic material.[2] With 96% of deepfakes online being pornographic in nature, and with over 100,000 unique women impacted,[3] it is essential that appropriate legal remedies are available to their victims. Recently, harsher civil and criminal penalties for image based sexual abuse have been introduced in Australia. However, these responses are largely insufficient to provide adequate protections to victims of deepfake abuse.[4]

This paper will examine how deepfakes are used as a method of sexual abuse, and the legal protections that are available to their victims. However, as deepfakes are an emerging area of research, only a limited number of studies examine the harms victims face and their place in the legal environment. Instead, most of the relevant literature concerns other existing forms of image based sexual abuse—‘the non-consensual creation, distribution or threat to distribute nude or sexual imagery’[5]—as deepfakes fall within the broad scope of this definition. This paper seeks to fill this gap in the literature by comparing and contrasting pornographic deepfakes with other forms of image based sexual abuse, to demonstrate the unique effects of deepfake abuse and highlight the gaps in legal protection for its victims.

This paper will argue that current civil and criminal legislation is insufficient to adequately protect victims of deepfake abuse. Instead, both legislative and extralegal changes are necessary to combat these significant harms. It will proceed in five parts. Part II will begin with an overview of how and why pornographic deepfakes are created, and the harms that arise from these acts, to establish that they are fundamentally a gendered form of abuse. Part III will then outline the existing civil and criminal legislation in Australia which penalises deepfake creation and distribution, before examining the reasons why it is difficult to police deepfake abuse. Part IV will present legislative and extralegal solutions to combat these difficulties. Part V concludes.

II BACKGROUND

A How are pornographic deepfakes created?

Pornographic deepfakes first emerged in 2017, when the user u/deepfakes posted videos to the social media site Reddit which superimposed the faces of female celebrities onto the bodies of pornography actors. These deepfakes were, and still are, generated using open-source AI technology. This means that the technology is available for the public to ‘run, copy, distribute, study, change, share and improve for any purpose’[6] on a wide scale. Accordingly, the methodology for creating deepfakes has drastically improved, making deepfake creation much more widely accessible.[7] While the creation of a deepfake initially took weeks or months, it now takes only minutes or hours,[8] and is a five-step process requiring no more than a computer and two high-quality videos of the subject whose face is being transposed.[9] The software is often free and available on computers or smartphones.[10] This ease of creation has led to an exponential increase in the number of pornographic deepfakes, with the number online doubling every six months since their first appearance in 2017.[11]

B Pornographic deepfakes as a gendered form of abuse

In order to establish whether pornographic deepfakes are a gendered form of abuse, it is necessary to examine their victims and perpetrators. Deepfake pornography typically has two victims: the individual whose face was used, and the adult performer whose pornographic video was used.[12] The demographics of these victims follow the general trend of image based sexual abuse in that they are disproportionately women. However, pornographic deepfakes appear to target women to an even greater extent than other forms of image based abuse—in 2019, 100% of victims of pornographic deepfakes were female,[13] whilst 66% of victims were reported as female in traditional forms of image based abuse.[14] Perpetrator demographics similarly align with other forms of image based sexual abuse—pornographic deepfakes are ‘predominantly produced by and for a male audience’.[15] It is important to note that perpetrators are not limited to the individual who created the deepfake and first uploaded it onto the internet, as that initial act of distribution enables the rapid dissemination of such imagery online through the ‘continual distribution by third parties’.[16] Accordingly, by acknowledging that women are disproportionately the victims of this abuse, and that the perpetrators who create and distribute deepfakes are predominantly male, it is clear that pornographic deepfakes are fundamentally a gendered form of abuse.

C Why are deepfakes created and what harm arises?

In understanding the gendered nature of this abuse, we can then better examine why abusers create and distribute pornographic deepfakes, and the harms that arise. Generally, these reasons and harms mirror those of traditional forms of image based sexual abuse. However, several key differences arise.

1 Harms as a subset of image based sexual abuse

(a) Revenge Porn

As a subset of image based sexual abuse, pornographic deepfakes are often used as ‘revenge porn’. This includes deepfakes created and distributed by a malicious ex-partner, without the permission of the individual whose face is used, with the goal of ‘getting back’ at the victim for some perceived harm. In this case, perpetrators are motivated to ‘terrorize and inflict pain’[17] on the victim.

At the time of writing, only one study examines the harms that arise when deepfakes are distributed specifically due to this motive. However, in this study, Flynn et al noted that the harms identified were similar to those caused by other forms of image based sexual abuse. For example, victims of deepfake abuse note that ‘it impacts you emotionally, physiologically, professionally, in dating and relationships, in every single factor of your life’.[18] Emotionally, victims not only felt an ‘all-encompassing devastation or disruption of everyday life and relationships’,[19] but also stated that they continue to experience ‘“visceral fear” linked with the constant uncertainty over who has or will see the images and whether they may reappear’.[20] Some victims also face significant amounts of harassment, both online and in person, as ‘cyber-mobs’ will often ‘compete to be the most offensive [and] the most abusive’.[21] Physiologically, some victims noted that they experienced a physical response to their abuse—for example, one victim reported that they ‘gained weight ... [and] took up smoking’.[22] Additionally, this abuse can manifest in physical threats. In severe cases, victims are doxed—their personal information, such as their mobile phone number and address, is released to the public—resulting in ‘the very real threat of physical violence’[23] being made against the victim. Professionally, victims often suffer collateral consequences, as ‘employers often decline to interview or hire people because their search results featured “inappropriate photos”’,[24] even when it is well established that the imagery is fake. Thus, it is clear that deepfake abuse can cause significant long-term harm to victims.

(b) Sexual gratification

While the imagery involved in other forms of image based sexual abuse is typically created consensually and only later distributed without consent, deepfakes are non-consensual from the moment of creation. As technology can now ‘easily generate ... deepfake images of celebrities, people they know, or anyone whose imagery they can access’,[25] a large subset of abusers simply seek personal sexual gratification, with no desire to distribute the imagery.[26] This is an important distinction—while the first victims of deepfakes were celebrities, this demographic has shifted, with 63% of respondents to an anonymous online poll in 2019 noting that they wished to create deepfakes of ‘familiar girls, whom I know in real life’.[27] Öhman argues that there is nothing ethically wrong with this, as such deepfakes are ‘no more than a virtual image generated by informational input that is publicly available’,[28] akin to a sexual fantasy. However, this is not the case. The mere act of creating deepfakes violates the sexual privacy of victims via a ‘thievery of autonomy’,[29] as they are essentially ‘a form of virtual sexual coercion and abuse that allows people to virtually undress and take advantage of women they know’.[30] Not only does this ‘violate individuals’ expectations that sexual activity be founded on consent,’[31] but as Chesney and Citron state, ‘being able to reveal one’s naked body ... at the pace and in the way of one’s choosing is crucial to identity formation. When the revelation of people’s sexuality or gender is out of their hands at pivotal moments, it can shatter their sense of self.’[32] This can cause immeasurable harm. For example, ‘it can sometimes wreak lasting effects on one’s ability to trust loved ones or friends, impacting the ability to develop intimate relations’.[33]

(c) To purposely cause harm

There is also a subset of perpetrators who create and distribute images with the purpose of causing harm and distress to the victim, but who are not motivated by revenge.[34] One key example is deepfakes created with the intention of ‘silenc[ing] the critical voices of women who speak out against sexual violence’.[35] This scenario has played out previously—in April 2018, Rana Ayyub, an investigative journalist, was subjected to a smear campaign in which pornographic deepfakes of her were created and widely distributed on social media after she wrote about the rape of an eight-year-old child in India, and how one of the two major Indian political parties, the Bharatiya Janata Party, was supporting the accused.[36] The purpose of this campaign was to discourage her from reporting on the matter, and the harm she suffered was significant—she was eventually hospitalised for stress-related injuries after the deepfakes went viral.[37] Nor is this an isolated example: in 2019, pornographic deepfakes of Alexandria Ocasio-Cortez, an American Democratic congresswoman, in which her face was edited onto another woman’s body, were distributed.[38] Hence, it is clear that a number of perpetrators create and distribute pornographic deepfakes to purposely cause significant harm to the victim.

2 Harms unique to deepfakes

While pornographic deepfakes are a subset of image based sexual abuse, there are still a number of harms that are completely unique to deepfakes. For example, deepfake abuse has two victims. In examining the harms so far, this paper has focused on the harm faced by the victims whose faces are used, but not on the adult performers whose bodies are used. This aligns with the research in this area—the harm that adult performers face is often overlooked as they ‘aren’t the direct subjects of abuse’.[39] However, this is not to say that this harm is insignificant. For example, while adult performers will often be unaware that deepfakes are being produced using their work, this ‘[does not] reduce the sense of violation that many industry members feel’[40] when such abuse occurs. In addition, adult performers report feeling ‘violat[ed]’[41] upon finding out that their imagery was used in a deepfake, as ‘[they] don’t want to feel like [they’re] a part of [the harassment of other women]’.[42] Simply put, they are essentially turned into ‘digital puppets, manipulated without any concern for their humanity and dignity’.[43]

Additionally, the harms that arise from deepfake abuse are fundamentally different because the capacity for abuse with pornographic deepfakes is far greater than with other forms of image based sexual abuse. Abusers can create deepfakes of anyone who has images of themselves online, meaning that they are not limited by geographic, relational, or other boundaries as they are with traditional forms of image based sexual abuse. This essentially means that ‘victims can be sexually abused at [the] whim’[44] of perpetrators whom they do not know, thereby ‘transform[ing] rape threats into a terrifying virtual reality’.[45] Additionally, the harm that arises is exacerbated in comparison to traditional forms of image based sexual abuse, as a victim ‘may not ever be aware that her privacy was invaded, may have never participated in any intimate actions with her abuser, and may have never even met or interacted with her abuser’,[46] preventing harm mitigation as ‘no measures can be taken ... when the harm is unknown to the victim’.[47]

Hence, it is clear that deepfakes are primarily created and distributed with the goal of ‘demeaning, objectifying, and humiliating women’,[48] and can cause immeasurable harm to victims. Thus, it is essential that appropriate legal penalties for the creation and distribution of pornographic deepfakes exist.

III THE CURRENT STATE OF THE LAW AND ITS LIMITATIONS

Recently, there has been increased criminalisation of image based sexual abuse in multiple jurisdictions, but this has ‘not automatically translated into increased prosecutions.’[49] This suggests that there are significant difficulties in policing and legislating against image based sexual abuse generally. While Henry et al have identified several of these challenges,[50] little research considers deepfakes specifically. However, an examination of how these difficulties apply to deepfakes can provide insights into the legislative and cultural changes that are required to adequately police deepfake abuse. These challenges can be split into four components: gaps in legislation, difficulties in enforcing legislation, barriers that exist in reporting abuse, and a lack of understanding about pornographic deepfakes.

A Gaps in legislation

1 Existing legislation

It is essential to begin with an overview of the civil and criminal penalties available for deepfake abuse in Australia.

(a) Civil liability

Victims of deepfake abuse can find recourse via civil penalties. At the federal level, the Online Safety Act 2021 (Cth) came into effect on 23 January 2022.[51] Having been legislated with deepfakes in mind,[52] Part 6 of the Online Safety Act provides civil penalties for the non-consensual sharing of intimate images, including images which have been altered.[53] The maximum civil penalty for the distribution, or threat of distribution, of such images is $111,000 (500 penalty units).[54]
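For context, the dollar figure can be reconciled with the penalty unit count (a minimal worked conversion, assuming the $222 per unit value fixed under s 4AA of the Crimes Act 1914 (Cth), which applied at the time of writing):

\[
500 \ \text{penalty units} \times \$222 \ \text{per unit} = \$111{,}000
\]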

Promisingly, the Online Safety Act gives the eSafety Commissioner the capacity to issue a removal notice to online providers (e.g., Facebook) which host intimate imagery, requiring them to remove the image within as little as 24 hours,[55] with penalties for those who fail to do so.[56] This is an essential step in preventing harm, as ‘detection without removal offers little solace to those exploited by deepfake pornography’.[57]

(b) Criminal liability

Federally, there are no specific criminal offences for deepfake abuse in Australia.[58] However, Australia has criminal penalties for the capture and distribution of image based sexual abuse material at the state level. The distribution of deepfakes is likely captured by legislation in the Australian Capital Territory, New South Wales, South Australia, Western Australia, the Northern Territory and Queensland, each of which specifies that ‘altered’ image based sexual abuse material falls within scope.[59] However, Victorian legislation does not make this distinction, meaning abusers are unlikely to be prosecuted there.[60]

2 Deficiencies in Australian legislation

While Australia has been ‘heralded as the leading authority in [image based sexual abuse] law,’[61] there are three major deficiencies in Australian legislation. First, neither civil nor criminal legislation penalises the creation and possession of pornographic deepfakes. Second, while the Online Safety Act provides updated civil liabilities, it falls short of preventing harm to all victims of deepfake abuse. Finally, even with updated criminal legislation, it is still difficult to prosecute the distribution of deepfakes.

(a) Lack of penalties for the creation and possession of pornographic deepfakes

In Australia, neither civil nor criminal legislation penalises the creation and possession of pornographic deepfakes. This is because of the wording of the legislation—in essence, penalties apply to the ‘capture’ of intimate imagery, but as deepfakes are created rather than captured, it is unlikely that they will fall within the scope of these provisions.[62]

There are many reasons for this. First, ‘the law lags, struggling to keep pace with rapidly evolving technology’.[63] Deepfakes first emerged in 2017, meaning that, in many cases, the law has simply not been updated to penalise them. For example, the criminalisation of the capture of intimate images in Victoria was introduced in 2014, prior to the first recorded instance of deepfake imagery. However, this is not always the explanation—the Online Safety Act was enacted in 2021, and New South Wales amended its Crimes Act to specifically include deepfake abuse in 2017. In such cases, it is likely that creation and possession are left unpenalised in order to ensure that innocent activities are not criminalised. The Hon Mark Speakman, Attorney-General, said in the second reading speech for the Crimes Amendment (Intimate Images) Bill 2017 (NSW) that ‘these offences have been carefully drafted ... to strike the right balance between criminalising unacceptable behaviour and ensuring that innocent activities are not captured.’[64] This is an important consideration when it comes to the creation of deepfakes, as it is well established that ‘if done correctly and consensually, deepfakes can be used for benevolent, or at least recreational, purposes: education, art, film [and] games’.[65]

However, this lack of penalties is unsatisfactory, as the mere act of creating and possessing deepfakes causes significant harm. Additionally, given how simple it is to create such imagery, it is crucial that abusers are deterred from creating and possessing deepfakes in the same way they are deterred from capturing and possessing genuine intimate imagery. As it has long been established that ‘criminali[s]ing bad acts is the most effective deterrent against bad actors,’[66] and that ‘civil liability ... incentivize[s] actors to conform to societal norms of acceptable behaviour’,[67] both criminal and civil penalties for the creation and possession of pornographic deepfakes must be enacted.

(b) Inadequacies of civil penalties

While the Online Safety Act imposes civil penalties on perpetrators of deepfake abuse, this approach has several limitations. First, it is costly to litigate civil cases, limiting this avenue to those who have the financial and temporal resources to do so.[68] Adding to this, there is ‘an important and realistic’[69] possibility that the abuser is judgment proof—that is, unable to pay the amount which they have been ordered to pay.[70] Second, undergoing civil litigation may exacerbate the harm caused to the victim, as it forces the victim to relive the abuse and can at times result in the wider dissemination of the deepfake.[71] While a pseudonym order granted in such proceedings could mitigate this issue, it remains an important factor limiting the adequacy of civil penalties. However, this is not to suggest that civil penalties are wholly ineffective remedies. As Kirchengast and Crofts note, ‘given the lack of available data on ‘revenge porn’ offences and offending’,[72] civil penalties can still provide a valuable avenue for victims to seek redress where criminal penalties are unavailable.

(c) Difficult to prosecute the distribution of deepfakes under existing criminal legislation

At the time of writing, there have been no prosecutions for deepfake abuse. Even for other, more common forms of image based sexual abuse, such as revenge porn, the number of prosecutions remains low. Despite the high prevalence of image based sexual abuse in Australia, with 1 in 4 Australians impacted, there were only 415 cases, with 62 arrests, for the distribution of intimate images in Victoria between 1 January 2015 and 18 July 2017.[73] While several factors impact prosecution rates, two major problems arise with the existing criminal legislation when considering the prosecution of deepfake abuse specifically: the lack of specific deepfake legislation, and the fact that in certain states, image based sexual abuse is criminalised only as a summary offence.

(i) Lack of specific deepfake legislation

At the state level, Victoria does not criminalise the distribution of deepfakes. In such cases, ‘law enforcement and prosecution agencies are often left unable to pursue complaints and secure convictions’[74] for deepfake abuse. Instead, such behaviour can only be prosecuted if it falls under broader criminal offences such as stalking or indecency. This is an inadequate solution, as such offences are too broad to appropriately ‘capture the different forms of conduct, motives and harm’[75] related to deepfake abuse. For example, under stalking offences, it must be demonstrated that the defendant engaged in repeated acts directed towards the victim.[76] While this is already a difficult bar to reach when it comes to image based sexual abuse,[77] it is even more so with regard to deepfakes, as in many cases the victim is entirely unaware of who the abuser is. In turn, it is essential that states such as Victoria fill these gaps in their legislation.

Australia also lacks a federal offence criminalising image based sexual abuse. Instead, offenders can be charged under s 474.17(1) of the Criminal Code 1995 (Cth), which makes it an offence to use a carriage service in a way that is menacing, harassing or offensive.[78] However, this is insufficient. First, this legislation is ‘overly broad’,[79] as none of ‘menacing’, ‘harassing’ or ‘offensive’ is defined in the legislation, meaning that their ‘ordinary’ meanings are to be used.[80] Additionally, the offence has a fault element requiring recklessness, which can be a high bar to reach.[81] Accordingly, this offence ‘is not commonly being used for image based sexual abuse prosecutions’.[82]

(ii) Summary offences

Victorian and South Australian image based sexual abuse offences have been criticised because they are implemented as summary offences,[83] which ‘limi[t] law enforcement responses, including removing powers of arrest and the ability to apply for a search warrant and seize devices.’[84] This is a significant barrier to prosecuting deepfake abuse—if law enforcement cannot seize the devices on which abusers create, possess and distribute pornographic deepfakes, prosecution becomes extremely difficult.

B Challenges in Enforcing Legislation

1 Attribution

Law enforcement and prosecutors face significant obstacles in procuring evidence of deepfake abuse. In particular, both civil and criminal liability require the attribution of the imagery in question to the abuser—simply put, neither civil nor criminal liability can ‘ameliorate harms caused by deep fakes if [victims] cannot tie them to their creators’.[85] Problematically, while all deepfakes have metadata linking them to a specific IP address—an identifier for the device on which the deepfake was created or distributed—it is simple for perpetrators to obfuscate this information.[86] For example, they can use free-to-access Virtual Private Networks (‘VPNs’), making it almost ‘impossible to find and trace ... the responsible parties’.[87] Pornographic deepfakes present two unique challenges here. First, perpetrators of traditional forms of image based sexual abuse are typically known to their victims, while this is not always the case with deepfakes. Second, in 2019, Henry and Flynn identified that the distribution of pornographic deepfakes often occurred on ‘private sharing sites’, where the objective was to distribute imagery with no identifying information about either the victim or the distributor.[88] In these cases, it is almost impossible to identify the perpetrator.

Unfortunately, little can be done in this situation without legislative change that provides police with increased investigative powers. For example, the Telecommunications (Interception and Access) Act 1979 (Cth) requires telecommunication service providers to retain data relating to the services they offer for a minimum of two years, including information such as names, addresses, IP addresses, and phone numbers.[89] Section 110 of the Act allows several law enforcement agencies (including the Australian Federal Police)[90] to access this data. Accordingly, if a federal offence criminalising deepfake abuse were introduced, the Australian Federal Police would likely be able to make use of this data to achieve higher attribution rates than police at the state level.

However, as matters stand, the onus is often placed on the victim to produce evidence of attribution.[91] Put bluntly, ‘[i]f you’re someone who doesn’t know the perpetrator, there is no justice afforded to you, and there’s no laws that can actually be handed down.’[92]

2 Jurisdictional Challenges

Similar to other forms of image based sexual abuse, the prosecution of deepfakes faces significant jurisdictional challenges. This is because image based sexual abuse material is easily disseminated via the internet across interstate and international jurisdictions, and is in turn ‘beyond the jurisdictional limits of any single state’.[93] This is a particular challenge with pornographic deepfakes, as the ‘global nature of online platforms’[94] means that any individual in the world can create deepfakes of anyone who has images of themselves available on the internet. Essentially, if an individual distributes pornographic deepfakes from overseas, ‘the perpetrator will not be subject to Australian laws unless a territorial nexus exists’.[95] Additionally, if the victim and abuser reside in different states, ‘the punishment imposed and remedies provided ... depend on the state in which the victims or perpetrators reside’.[96] Given the inconsistent legislation across Australia, this means that some perpetrators may escape criminal prosecution—for example, an abuser residing in Victoria will likely not face any criminal charges for such abuse. Similarly, the Online Safety Act only provides civil penalties where the victim and abuser are ordinarily resident in Australia.[97] While this is a well-known limitation of civil liability, it is nonetheless problematic as it limits the recourse available to victims.

C Barriers to reporting

1 Gendered nature of deepfakes

As previously established, deepfakes are fundamentally gendered. Problematically, a general lack of understanding of gendered violence contributes to an ‘incredibly low’[98] reporting rate for such offences.[99] The barriers to reporting include victim-blaming and harm minimisation attitudes.

(a) Victim-blaming

First, victims of traditional image based sexual abuse face significant victim-blaming—where individuals ‘hold the victim at least partially responsible for the incident’[100] because of their behaviour, generally with the idea that the victim ‘shouldn’t have taken those photos in the first place’.[101] While this is not directly applicable to pornographic deepfakes, which are completely fabricated imagery, victim-blaming persists. Victims report being told to ‘be careful what you put on Facebook, anyone can take anything and if it’s public they can do whatever they want’,[102] resulting in a framing of this abuse as a problem of ‘naiveté rather than gender based violence’.[103] This reflects the inherent biases which exist when it comes to deepfake abuse, where blame focuses on ‘attributing another’s [behaviour] more to internal than to situational causes’.[104] In turn, victims are less likely to report such abuse.

(b) Harm minimisation attitudes

Additionally, like victims of other image based sexual abuse, victims of deepfake abuse face harm minimisation attitudes. However, pornographic deepfakes bring a new dimension to this discussion, as they have been ‘considered less “real” than other forms of image based sexual abuse’[105] because they are not genuine images of the victim. For example, victims report being reluctant to report deepfake abuse because they ‘didn’t feel the crime was serious enough, because no actual violence had been committed [and] there weren’t any real pictures of [them]’.[106] When victims did report the abuse, police and lawmakers have been shown to hold attitudes that deepfake abuse is not as harmful as other forms of image based sexual abuse.[107] Accordingly, these harm minimisation attitudes are a strong barrier which reduces the reporting rate of such offences.

2 Underenforcement of image based sexual abuse offences

It is a well-established phenomenon that image based sexual abuse offenders receive lighter punishments because ‘their crimes are not seen as serious’[108] as other offences. While there is limited information about whether any sentences have been handed down for the distribution of pornographic deepfakes, an examination of image based sexual abuse sentencing generally remains instructive. In 2020, the Sentencing Advisory Council released a report concerning the sentencing of image based sexual abuse offences in Victoria. It found that in 49% of cases between 2014 and 2019 where image based sexual abuse was the primary charge, the most serious penalty imposed was a fine, and fines ‘alone have limited rehabilitative effect’.[109] Additionally, the longest sentence of imprisonment imposed where image based sexual abuse was the primary charge was 14 months, and this involved a high level of offending—the perpetrator had captured intimate imagery of multiple victims, including children, for over a year.[110] This level of sentencing is clearly disproportionate to the harm that arises from such abuse, and can be attributed to the low maximum penalties available for image based sexual abuse offences in Australia—‘you could expect longer sentences if there were higher maximum penalties. It’s Parliament’s indication of the seriousness they’re attaching to that level of offending.’[111]

D Lack of understanding about deepfake abuse

As Delfino puts it, ‘for a criminal statute ... to have its full effect, ... it requires that potential perpetrators [and] law enforcement ... gain a greater awareness and understanding of pornographic deepfakes and their harms.’[112] Problematically, a demonstrated lack of understanding about deepfake abuse persists amongst both the public and law enforcement.

1 The Public

Importantly, a 2020 study established that less than half of respondents across Australia, New Zealand and the United Kingdom were aware that it is a crime to non-consensually capture and distribute intimate images (45.7% and 48.7% respectively).[113] Additionally, despite pornography being the primary use of deepfakes, media coverage typically focuses on political deepfakes,[114] suggesting that ‘the threat presented by deepfakes pornography is less serious, less genuine, and less significant than the potential political consequence’.[115] This effectively renders the harm arising from pornographic deepfakes, and its victims, ‘invisible in the public view’.[116] These issues are clearly concerning because, if the public has no knowledge of the criminality of these offences and the harms they cause, they ‘have little incentive to refrain from such behaviour’.[117]

2 Law Enforcement

It has been well established that law enforcement lacks an understanding of image based sexual abuse generally, and this extends to deepfake abuse.[118] This lack of understanding covers both the relevant legislation surrounding image based sexual abuse and the technological landscape that prompted the rise of pornographic deepfakes.[119] Accordingly, it has been recorded that ‘law-enforcement agencies refuse to pursue complaints on the grounds that the conduct is legally insignificant’,[120] or advise ‘victims that there was nothing they could do and provided unhelpful advice’.[121] This has resulted in significant underreporting of such offences: as one victim explained, ‘there’s no point in going to the police, because I know someone else who this happened to and they went to the police and absolutely nothing was done’.[122]

IV SOLUTIONS

A A National Framework for Deepfake Legislation

An effective way to resolve the issues with existing Australian legislation would be the introduction of a federal criminal offence for the creation and distribution of image based sexual abuse material which explicitly includes pornographic deepfakes. A ‘national, uniform and consistent’[123] offence would ensure that victims around Australia are protected no matter where they are located, combating the jurisdictional issues which currently exist. It would also ensure that victims who cannot afford to undertake civil litigation are able to seek justice, given that criminal law ‘does not face the same challenges as civil remedies’,[124] as law enforcement agencies tend to have more resources than the individual.[125] In addition, it would allow the Australian Federal Police to take advantage of data stored under the Telecommunications (Interception and Access) Act to increase their investigative powers and reduce some of the challenges that exist in policing deepfake abuse.

In addition, Victorian legislators should ensure that pornographic deepfakes fall within the definition of ‘intimate images’ by including images which have been altered. Victoria and South Australia should also introduce indictable offences for such abuse, as doing so would ‘help to bypass the restrictions faced by police in jurisdictions where summary offences apply’.[126]

Importantly, to ensure that strong legislation is developed, it is essential that the perspectives of victims and researchers are considered during its creation, as was done in New South Wales in 2017. There, legislators called for and considered over thirty submissions on the criminalisation of the distribution of intimate imagery, and the New South Wales amendment has accordingly been praised for ‘avoid[ing] some of the pitfalls of similar legislation in other jurisdictions’.[127] This consultative approach is key to avoiding legislation that ‘represent[s] ad hoc, piecemeal or misplaced interventions that largely fail to understand the problem and, as such, are unable to tackle its root causes’.[128]

B Recommendations for extralegal solutions

Considering both the cultural issues discussed previously, and the fact that the legal system is reactive and ultimately cannot prevent the harm that victims suffer from this abuse, it is essential that extralegal solutions are implemented alongside legislative changes. Simply put, ‘any reform will be ineffective unless accompanied by widespread education’.[129]

This includes, but is not limited to, educating the public, law enforcement, and the judiciary about pornographic deepfakes and the harms they cause.[130] Education has been demonstrated to be an effective way of combating gendered abuse—the widespread availability of information concerning domestic abuse has decreased its occurrence.[131] It should include educating the public on the penalties available for deepfake abuse in order to deter malicious conduct. Promisingly, the framework for this education in Australia has already been established, with the eSafety Commissioner outlining a ‘holistic approach’ to educating the public on the dangers of deepfakes.[132] However, specific training should also be targeted towards law enforcement, covering the offences available against image based sexual abuse and deepfake abuse, as well as sensitivity training to reduce the rates of underreporting.[133] This training should occur at regular intervals as the legislation develops. Consistent and widespread education would result in ‘attitude shift[s]’ which focus on ‘understanding the problem and working toward[s] a solution’, allowing ‘the law [to] have its intended effect’.[134]

V CONCLUSION

The technology used to create deepfakes is rapidly evolving, and the number of pornographic deepfakes online is only set to increase. While deepfakes are a subset of image based sexual abuse, this paper has demonstrated that several key differences set them apart. Accordingly, given their capacity to cause significant harm, there is a clear need for strong civil and criminal legislation to adequately mitigate these harms. However, as it currently stands, the penalties available in Australia are insufficient. Given the gaps in the existing legislation and the unique difficulties that arise when policing deepfake abuse, legislative reform is necessary in the form of a strong federal framework which specifically criminalises and targets deepfake abuse. Additionally, given the existing barriers to reporting and a general lack of understanding about deepfake abuse, extralegal solutions through education initiatives are also required. Essentially, pornographic deepfakes ‘follow in the footsteps of other invasions of sexual privacy’,[135] the newest in a long line of technologies quickly turned against women. Accordingly, only by acting rapidly can the creation and distribution of deepfake pornography be effectively policed and the harms that arise be mitigated.


[1] Sophie Maddocks, ‘‘A Deepfake Porn Plot Intended to Silence Me’: exploring continuities between pornographic and ‘political’ deep fakes’ (2020) 7(4) Porn Studies 415, 417.

[2] Asher Flynn et al, ‘Deepfakes and Digitally Altered Imagery Abuse: A Cross-Country Exploration of an Emerging form of Image-Based Sexual Abuse’ (2021) The British Journal of Criminology (advance), 1.

[3] Asher Flynn, Jonathan Clough and Talani Cooke, ‘Disrupting and Preventing Deepfake Abuse: Exploring Criminal Law Responses to AI-Facilitated Abuse’ in Anastasia Powell, Asher Flynn and Lisa Sugiura (ed), The Palgrave Handbook of Gendered Violence and Technology (Palgrave Macmillan, 2021) 583, 584.

[4] Ibid 586.

[5] Flynn et al (n 2) 1.

[6] Sukhwinder Randhawa, Open Source Software and Libraries (Conference Paper, 2008) 369, 369.

[7] Samantha Cole, ‘We Are Truly Fucked: Everyone Is Making AI-Generated Fake Porn Now’, Vice (online, 25 January 2018) <https://www.vice.com/en/article/bjye8a/reddit-fake-porn-app-daisy-ridley>.

[8] Amelia O’Halloran, The Technical, Legal, and Ethical Landscape of Deepfake Pornography (BSc Thesis, Brown University, 2021) 15.

[9] Cole (n 7).

[10] Ibid.

[11] Sensity, The State of Deepfakes 2020: Updates on Statistics and Trends (Report, November 2020).

[12] Lux Alptraum, ‘Deepfake Porn Harms Adult Performers, Too’, Wired (online, 15 January 2020) <https://www.wired.com/story/deepfake-porn-harms-adult-performers-too>.

[13] Sensity, The State of Deepfakes 2019: Landscape, Threats, and Impact (Report, September 2019) 2.

[14] Office of the eSafety Commissioner, Image-Based Abuse. National Survey: Summary Report (Report, October 2017) 4.

[15] Carl Öhman, ‘Introducing the Pervert’s Dilemma: A Contribution to the Critique of Deepfake Pornography’ (2020) 22 Ethics and Information Technology 133, 134.

[16] Nicolas Suzor, Bryony Seignior and Jennifer Singleton, ‘Non-consensual Porn and the Responsibilities of Online Intermediaries’ [2017] MelbULawRw 16; (2016) 40(3) Melbourne University Law Review 1057, 1066.

[17] Bobby Chesney and Danielle Citron, ‘Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security’ (2019) 107 California Law Review 1753, 1773.

[18] Flynn et al (n 2) 11.

[19] Ibid.

[20] Ibid.

[21] Clare McGlynn, Erika Rackley and Ruth Houghton, ‘Beyond ‘Revenge Porn’: The Continuum of Image Based Sexual Abuse’ (2017) 25 Feminist Legal Studies 25, 30.

[22] Flynn et al (n 2) 11.

[23] McGlynn, Rackley and Houghton (n 21) 30.

[24] Chesney and Citron (n 17) 1775.

[25] Flynn et al (n 2) 2.

[26] Chesney and Citron (n 17) 1773.

[27] James Vincent, ‘Deepfake Bots on Telegram Make the Work of Creating Fake Nudes Dangerously Easy’, The Verge (online, 20 October 2020) <https://www.theverge.com/2020/10/20/21519322/deepfake-fake-nudes-telegram-botdeepnude-sensity-report>.

[28] Öhman (n 15) 134.

[29] O’Halloran (n 8) 9–10.

[30] Ibid 7.

[31] Rebecca A Delfino, ‘Pornographic Deepfakes: The Case for Federal Criminalization of Revenge Porn’s Next Tragic Act’ (2018) 88(3) Fordham Law Review 887, 896.

[32] Chesney and Citron (n 17) 1776.

[33] O’Halloran (n 8) 10.

[34] Tegan S Starr and Tiffany Lavis, ‘Perceptions of Revenge Pornography and Victim Blame’ (2018) 12(2) International Journal of Cyber Criminology 427, 427.

[35] Maddocks (n 1) 415.

[36] Rana Ayyub, ‘I Was The Victim Of A Deepfake Porn Plot Intended To Silence Me,’ Huffington Post (online, 21 November 2018) <https://www.huffingtonpost.co.uk/entry/deepfake-porn_uk_5bf2c126e4b0f32bd58ba316>.

[37] Ibid.

[38] Ed Pilkington, ‘Alexandria Ocasio-Cortez hits out at 'disgusting' media publishing fake nude image’, The Guardian (online, 10 January 2019) <https://www.theguardian.com/us-news/2019/jan/10/alexandria-ocasio-cortez-hits-out-at-disgusting-media-publishing-fake-nude-image>.

[39] Alptraum (n 12).

[40] Ibid.

[41] Ibid.

[42] Ibid.

[43] Ibid.

[44] Chesney and Citron (n 17) 1773.

[45] Ibid.

[46] O’Halloran (n 8) 11.

[47] Ibid.

[48] Chandell Gosse and Jacquelyn Burkell, ‘Politics and porn: how news media characterizes problems presented by deepfakes’ (2020) 37(5) Critical Studies in Media Communication 497, 499.

[49] Nicola Henry, Asher Flynn and Anastasia Powell, ‘Policing image-based sexual abuse: stakeholder perspectives’ (2018) 19(6) Police Practice and Research 565, 565 (‘Policing image-based sexual abuse’).

[50] Ibid.

[51] Online Safety Act 2021 (Cth).

[52] Explanatory Memorandum, Online Safety Bill 2021, 28.

[53] Online Safety Act 2021 (n 51) s 15(5).

[54] Ibid s 75(a).

[55] Ibid s 77(1)(f)(i).

[56] Ibid s 80.

[57] O’Halloran (n 8) 27.

[58] Henry, Flynn and Powell, ‘Policing image-based sexual abuse’ (n 49) 569.

[59] Crimes Act 1900 (ACT) s 72A; Crimes Act 1900 (NSW) s 91N; Summary Offences Act 1953 (SA) s 26A; Criminal Code Act Compilation Act 1913 (WA) s 221BA; Criminal Code Act 1983 (NT) s 208AA; Criminal Code Act 1899 (Qld) s 207A.

[60] Summary Offences Act 1966 (Vic) s 40.

[61] Flynn et al (n 2) 13.

[62] Flynn, Clough and Cooke (n 3) 596.

[63] Delfino (n 31) 903.

[64] Kcasey McLoughlin and Alex O’Brien, ‘Feminist Interventions in Law Reform: Criminalising Image-Based Sexual Abuse in New South Wales’ (2019) 8(4) Laws 35, 43.

[65] O’Halloran (n 8) 6.

[66] Delfino (n 31) 887.

[67] Ibid 902.

[68] Ibid.

[69] Steven Shavell, ‘The Judgment Proof Problem’ (1986) 6 International Review of Law and Economics 45, 45.

[70] Ibid.

[71] Delfino (n 31) 902.

[72] Tyrone Kirchengast and Thomas Crofts, Submission to NSW Department of Justice, Submission from Dr Tyrone Kirchengast and Professor Thomas Crofts regarding the Discussion Paper. The sharing of intimate images without consent - ‘revenge porn’ (21 October 2016) 1.

[73] Henry, Flynn and Powell, ‘Policing image-based sexual abuse’ (n 49) 570.

[74] Alyse Dickson, ‘‘Revenge Porn’: A Victim Focused Response’ (2016) 2 University of South Australia Student Law Review 42, 50.

[75] Ibid.

[76] Ibid 51.

[77] Henry, Flynn and Powell, ‘Policing image-based sexual abuse’ (n 49) 570.

[78] Criminal Code 1995 (Cth) s 474.17(1).

[79] Henry, Flynn and Powell, ‘Policing image-based sexual abuse’ (n 49) 570.

[80] Flynn, Clough and Cooke (n 3) 590.

[81] Ibid 591.

[82] Henry, Flynn and Powell, ‘Policing image-based sexual abuse’ (n 49) 570.

[83] Ibid.

[84] Ibid.

[85] Chesney and Citron (n 17) 1792.

[86] Ibid.

[87] Ibid.

[88] Flynn et al (n 2) 12.

[89] Telecommunications (Interception and Access) Act 1979 (Cth) s 187A.

[90] Ibid s 110.

[91] Henry, Flynn and Powell, ‘Policing image-based sexual abuse’ (n 49) 572.

[92] Flynn et al (n 2) 12.

[93] Delfino (n 31) 927.

[94] Chesney and Citron (n 17) 1792.

[95] Dickson (n 74) 51.

[96] Delfino (n 31) 927.

[97] Online Safety Act 2021 (n 51) s 75(1).

[98] Noelle Martin, ‘Image-Based Sexual Abuse and Deepfakes: A Survivor Turned Activist’s Perspective’ in Anastasia Powell, Asher Flynn and Lisa Sugiura (ed), The Palgrave Handbook of Gendered Violence and Technology (Palgrave Macmillan, 2021) 55, 68.

[99] Henry, Flynn and Powell, ‘Policing image-based sexual abuse’ (n 49) 574.

[100] Rebecca Hayes and Katherine Lorenz, ‘Victim Blaming Others’ (2013) 8(3) Feminist Criminology 202, 203.

[101] Henry, Flynn and Powell, ‘Policing image-based sexual abuse’ (n 49) 574.

[102] Flynn et al (n 2) 14.

[103] Nicola Henry and Anastasia Powell, ‘Beyond the ‘sext’: Technology-facilitated sexual violence and harassment against adult women’ (2015) 48(1) Australian and New Zealand Journal of Criminology 104, 104.

[104] Graham M Vaughan and Michael A Hogg, Introduction to Social Psychology (Pearson Education, 3rd ed, 2002) 650.

[105] Flynn et al (n 2) 13.

[106] Ibid.

[107] Ibid.

[108] Danielle Keats Citron, ‘Law's Expressive Value in Combating Cyber Gender Harassment’ [2009] MichLawRw 8; (2009) 108 Michigan Law Review 373, 403.

[109] Anna Chalton and Paul Schollum, Sentencing Image-Based Sexual Abuse Offences in Victoria (Report, October 2020) 36–7.

[110] Ibid 36.

[111] Ibid.

[112] Delfino (n 31) 933.

[113] Nicola Henry, Asher Flynn and Anastasia Powell, ‘Image-Based Sexual Abuse: Victims and Perpetrators’ (2019) 572 Trends and Issues in Crime and Criminal Justice 1, 9.

[114] O’Halloran (n 8) 5.

[115] Gosse and Burkell (n 48) 507.

[116] Ibid 508.

[117] Danielle Keats Citron, ‘Criminalizing Revenge Porn’ (2014) 49 Wake Forest Law Review 345, 361.

[118] Henry, Flynn and Powell, ‘Policing image-based sexual abuse’ (n 49) 572–3.

[119] Ibid.

[120] Citron, ‘Law's Expressive Value in Combating Cyber Gender Harassment’ (n 108) 402.

[121] Henry, Flynn and Powell, ‘Policing image-based sexual abuse’ (n 49) 573.

[122] Ibid 574.

[123] Delfino (n 31) 927.

[124] Ibid 902.

[125] Ibid.

[126] Henry, Flynn and Powell, ‘Policing image-based sexual abuse’ (n 49) 570.

[127] McLoughlin and O’Brien (n 64) 2.

[128] Ibid.

[129] Ibid 6.

[130] Delfino (n 31) 933.

[131] Ibid 934.

[132] ‘Deepfake trends and challenges – position statement’, eSafety Commissioner, (Web Page, 23 January 2022) <https://www.esafety.gov.au/industry/tech-trends-and-challenges/deepfakes>.

[133] Henry, Flynn and Powell, ‘Policing image-based sexual abuse’ (n 49) 576.

[134] Delfino (n 31) 934.

[135] O’Halloran (n 8) 11.

