
University of New South Wales Law Journal Student Series



Bittar, Kate --- "To What Extent Would The Distribution Of Non-Consensual Pornography In Australia Be Effectively Addressed By The Australian Law Reform Commission's Proposed Tort For Serious Invasions Of Privacy?" [2019] UNSWLawJlStuS 6; (2019) UNSWLJ Student Series No 19-06


TO WHAT EXTENT WOULD THE DISTRIBUTION OF NON-CONSENSUAL PORNOGRAPHY IN AUSTRALIA BE EFFECTIVELY ADDRESSED BY THE AUSTRALIAN LAW REFORM COMMISSION’S PROPOSED TORT FOR SERIOUS INVASIONS OF PRIVACY?

KATE BITTAR

This essay will consider the role of online intermediaries in regulating the distribution of non-consensual pornography in Australia and their liability for the resulting harm under the Australian Law Reform Commission’s (‘ALRC’) proposed tort for serious invasions of privacy. The non-consensual distribution of intimate images is a form of online shaming. The resulting harm therefore increases as the content disseminates further across online networks. The ALRC has claimed that its proposed statutory cause of action for serious invasions of privacy is adaptable to emerging technology and unforeseen examples of privacy invasion. However, it has failed to tailor the remedy to the wrongdoing by restricting online intermediary liability and centring punitive focus on the original perpetrator.

I INTRODUCTION

This essay considers the role of online intermediaries in regulating the distribution of non-consensual pornography in Australia and their liability for the resulting harm under the Australian Law Reform Commission’s (‘ALRC’) proposed tort for serious invasions of privacy.[1] Intimate image abuse is an instance of online sexual violence which is rooted in abjection and intends to incite public shame. The resulting harm therefore increases as the images and information continue to circulate through information networks. Australian common law has not developed a cohesive theory for determining when an intermediary whose platform has enabled or hosted wrongful behaviour, such as intimate image abuse, will be liable. As the attempt to manipulate intimate image abuse claims into distinct bodies of civil law has provided few remedies for victims, the ALRC’s successive recommendations for the introduction of laws specifically targeting serious invasions of privacy are a positive step towards consistency and certainty in the law.

The ALRC proposes that a court may find an online intermediary to have ‘intended an invasion of privacy, or been reckless, if they know that their service has been used to invade someone’s privacy, and they are reasonably able to stop the invasion of privacy, but they choose not to do so.’[2] This represents a shift in causal responsibility from the individual perpetrator to the online intermediary, implying that an online intermediary who is not morally responsible for a wrong it has knowledge of should nevertheless be expected to do something to prevent it.

However, while the ALRC has claimed that the tort is adaptable to emerging technology and unforeseen examples of privacy invasion, the likelihood of an online service provider or general-purpose intermediary having the requisite mental element to satisfy its operative test is extremely low. This essay will argue that the ALRC’s proposed tort of serious invasions of privacy will fail to provide victims of non-consensual pornography with an effective remedy as it does not confront the scope of the harm resulting from the digitisation, proliferation and storage of private information.

II NON-CONSENSUAL PORNOGRAPHY: THE NATURE OF THE HARM

The distribution of non-consensual pornography typically involves, first, the dissemination of sexually explicit media intended for exclusive viewing to a public online platform, and, second, information about how to identify and contact the targeted person. ‘Revenge porn’ is a popularised albeit misleading term for the wrongdoing. Indeed, the abusive act of publicly releasing intimate images is often described by perpetrators as one motivated by revenge and in these instances, the attacker is usually an ex-lover or otherwise known to the victim. However, intimate image abuse occurs in many and varied ways outside the context of an intimate relationship, such as in hacking exploits or ‘cloud-based leaks’,[3] whereby private images stored through Apple iCloud accounts[4] or the Snapchat app[5] are accessed and distributed to the public at large. In such cases, the attackers are not known to the victim and the act is motivated less by vengeance than by, for example, a desire to incite amusement or derision, or by financial gain in cases of blackmail. The term ‘revenge’ presupposes wrongdoing on the victim’s part – that the abusive act occurred reflexively in response to a perceived wrong. This is a victim-centric framework through which to describe and understand an abusive act. Taking a sexually explicit selfie, or having such a photograph taken of you by someone else, should not be understood as causally relevant to its non-consensual distribution. With these considerations in mind, the wrongdoing is more appropriately described as ‘the non-consensual sharing of intimate images’[6] or ‘intimate image abuse’.

The non-consensual sharing of intimate images occurs within the broader context of domestic violence, where both race and gender are added vulnerabilities. In May 2017, RMIT University published research entitled Australians’ Experiences of Image-Based Abuse, which found that while women and men are equally likely to report being a victim, women are more likely than men to fear for their safety due to image-based abuse and perpetrators are most likely to be males who are known to the victim.[7] It was also found that one in two Indigenous Australians report image-based abuse victimisation.[8] Digital technology is used as a weapon in domestic violence cases to control, intimidate and harass victims for two primary reasons. First, physical distance is not an inhibiting factor: the harm can occur even when the relationship is over or where the victim and perpetrator live in different states. Second, technology can facilitate harassment with ease and speed: unwanted contact can be made via phone, text, email and social media, and cyberstalking can occur through tracking technologies and spyware.

A third motivation arises in the case of intimate image abuse. Langlois and Slane argue that the material is uploaded online and accompanied by information about how to identify and contact the victim, in order to ‘ensure that the victim is interpellated as a target, not only of comments by other users of the site, but also by way of ongoing surveillance and often direct contact by people online and off.’[9] Online and offline public shaming are both modes of social control aimed at ostracism. However, as Langlois and Slane continue, it is the veracity and verifiability of the information that characterises online public shaming in contrast to offline shaming (e.g. defamation in the form of hearsay, gossip or spreading rumours). Where victims of intimate image abuse are easily identifiable, they have reported exposure to stalking[10] and unsolicited messages from strangers that taunted and sometimes threatened them with sexual violence.[11] Loss of professional and educational opportunities has also been reported,[12] as naming the targeted person increases the likelihood that the images will be exposed and accessible to people in the victim’s offline life (e.g. employer, current partner or family members).

An individual’s socioeconomic worth (e.g. possibilities for employment and social success) is increasingly controlled by the image of oneself as a subject that circulates online. A critical element of online abuse lies, therefore, in its temporality and scope: when somebody is shamed online, the harmful content becomes indexed and searchable to an increasingly large audience. Shaming circulates through information networks via sharing, up-voting, instant messaging, copying and pasting. The resulting harm is therefore ‘not simply about the effects of an action at a specific time and place, but also about the reverberations of an action throughout information networks.’[13] Enforcing the law against primary defendants to remedy such harm is overly burdensome and restrictive. The global nature of online networks and the rise of user-generated content means that perpetrators may be numerous, outside the victim’s jurisdiction or simply shielded by anonymity. Moreover, even if a cause of action does exist against the person initially responsible for uploading the image, the relevant harm (the continual distribution of the media by third parties) is still not addressed.

To effectively mitigate the harm caused by intimate image abuse, the visibility of the media online must be reduced. It is generally not possible to completely eradicate or suppress content posted online.[14] However, ensuring that intimate images ‘do not prominently feature in search results for a victim’s name... [are] not spread within the most popular or relevant social networks... and attempting to regulate the most influential sites that host and distribute content online’[15] are all powerful means to minimise the harm. It follows that if victims of leaked intimate images are to have an effective remedy, the target for liability must shift to online intermediaries who host and index the content.

III THE AUSTRALIAN LAW REFORM COMMISSION’S PROPOSED TORT FOR SERIOUS INVASIONS OF PRIVACY

To have a sexually explicit image of oneself distributed online without consent is an invasion of privacy – it is an intrusion into and misuse of private information. The ease with which a person’s privacy may be invaded is increasing due to the advent of digital technologies, but Australian civil law has not been reformed to prevent and remedy this harm. There is no general individual right to privacy in Australia,[16] a cause of action for serious invasion of privacy does not presently exist in Australian law,[17] nor is there any cohesive theory to determine when an online intermediary who creates a system or platform enabling a privacy invasion could be held liable for the harm. It has been advanced that bodies of civil law like defamation and copyright may offer relief for victims of intimate image abuse seeking to hold intermediaries liable for the intrusion.[18] However, the common law is reluctant to find liability without fault.[19] A cause of action against an online intermediary will therefore be difficult to establish if that intermediary does not exercise sufficient editorial control over content, or is a ‘mere conduit’ in the publication and/or dissemination process.[20]

In defamation law, an intermediary involved in the chain of publication may accrue liability if they exercise some degree of control over the content they distribute[21] and fail to remove defamatory material posted by users once they have knowledge of it.[22] However, as truth is an unqualified defence in all Australian states,[23] publication of the intimate image alone without accompanying defamatory material will not justify a cause of action for defamation. For a successful claim in copyright law, the victim must prove that an online intermediary ‘authorised’ the doing of an act comprised in the copyright of a work (intimate image).[24] This would require the victim to be the author of the photo (i.e. the leaked image must be a selfie) and the intermediary to have both the power to prevent the infringement and knowledge or a reason to suspect that the infringement has taken place.[25] The distinction between an intermediary engaging in ‘active’ publishing or ‘passive’ facilitation is not always clear. Peer-to-peer sites, content hosts, social media, e-commerce and search engines will have varying involvement with the content they host and will accrue liability differently as a result. The lack of a unifying framework for liability fails to provide courts, victims of online shaming and intermediaries with legal certainty or adequate guidance for acceptable conduct.

The ALRC has claimed that its proposed tort for serious invasions of privacy would introduce privacy legislation to Australia that is adaptable to emerging technology and unforeseen examples of privacy invasion.[26] The Standing Committee on Law and Justice’s Report, Remedies for the Serious Invasion of Privacy, advocated for the introduction of the ALRC’s proposed tort, advancing the view that ‘the lack of a cause of action that is specifically designed to respond to the harm arising from a serious invasion of one’s privacy has resulted in awkward attempts to manipulate privacy claims into other actions that are not intended for that purpose.’[27] It is clear that currently in Australia, attempts to manipulate intimate image abuse claims into distinct bodies of civil law can provide few remedies for victims. The introduction of laws specifically targeting serious invasions of privacy is overdue and the ALRC’s proposed tort would be a positive step towards consistency and certainty in the law.

The operative test of the proposed tort is that a person ‘intentionally or recklessly commits a serious invasion of the privacy of another by, inter alia, disseminating their private information.’[28] The ALRC also suggests that intermediaries may be liable under the tort.[29] Intermediary liability, if successfully established, would be effective against Australian online service providers who ‘actively solicit others to leak images.’[30] If found liable for the harm, an intermediary will be ordered to destroy or remove the material.[31] This is encouraging as the proposed tort could, conceivably, provide a victim within Australia with a legal mechanism requiring online intermediaries to remove inflammatory content and potentially links to content posted by third parties.[32] However, as will be discussed, the inadequacy of the action lies in its failure to attribute liability to individuals (including corporations and online intermediaries) who negligently commit a serious invasion of privacy. Circumstances in which an Australian online service provider or general-purpose intermediary would meet the requisite mental element of intention for liability would be rare. If victims of intimate image abuse are to find redress in a civil action, responsibility must shift to intermediaries with the power to regulate, or at the very least restrict access to, instances of invasions of privacy in networked spaces.

A Guiding Principles

The ALRC identified nine principles to guide their recommendations for statutory reform. The ninth principle states ‘privacy protection is an issue of shared responsibility.’[33] The commission explains, ‘the exercise of personal responsibility should be encouraged, where possible... capable adults should be encouraged to take reasonable steps to use the privacy tools offered by service providers.’[34] Women’s Legal Services NSW (WLS), Domestic Violence Legal Services (DVLS) and the North Australian Aboriginal Justice Agency (NAAJA) expressed concern at the inclusion of such a principle in the context of domestic and/or family violence – contexts in which intimate image abuse often occurs.

In relationships defined by an imbalance of power, coercive tactics and controlling behaviours, ‘personal responsibility’ is often insufficient to ensure that privacy is protected. As described by WLS, a victim of domestic and/or family violence may not be able to change the password on their computer, because, for example, their ex-partner has control of the computer and finances. Personal responsibility should be an irrelevant consideration in situations where victimisation is, in part, symptomatic of a broader lack of control.

A principle of ‘shared responsibility’ should instead emphasise the importance of education and training around existing civil laws. DVLS has noted that clients of their services have initially approached Police for help in cases of intimate image abuse, in possession of hard evidence of either the texts or the uploaded material.[35] All victims were told that because they had consented to the making of the material, there was nothing Police could do, when in fact redress could likely be sought against individual perpetrators under section 474.17 of the Criminal Code Act 1995 (Cth), ‘using a carriage service to menace, harass or cause offence’.[36] Any contemplation of new legislation should be guided by the effectiveness of the existing framework. Evidently, the principle of ‘shared responsibility’ should extend to better training of Police, ‘on the impact on victims of these crimes, how these crimes can constitute a form of domestic violence, and on the available civil law remedies to help ensure victims get a better response.’[37]

B ‘Reasonable Expectation of Privacy’: Factors for Consideration

Recommendation 6–2 of the ALRC’s Report states ‘[t]he Act should provide that, in determining whether a person in the position of the plaintiff would have had a reasonable expectation of privacy in all of the circumstances, the court may consider, amongst other things: (a) the nature of the private information, including whether it relates to intimate or family matters... (h) the conduct of the plaintiff, including whether the plaintiff invited publicity or manifested a desire for privacy.’[38]

The ALRC clarified that ‘intimate’ matters will often be sexual matters and are therefore widely considered to be private.[39] The inherently invasive nature of accessing and viewing sexually explicit imagery is a constructive factor for consideration. However, it is somewhat marred by 6–2(h): whether the plaintiff ‘invited publicity or manifested a desire for privacy’ raises essentially subjective questions of consent. In cases of domestic, sexual and/or family violence, consent may not always be free and informed. It should also be clarified that consent must be obtained for each action. In circumstances where, for example, a woman consented to the creation and/or distribution of a sex tape but later wants it taken down, consent must be taken to be revoked. If a woman freely consents to having sexually explicit photographs taken of her in the context of an intimate relationship, further consent is required before sharing the images with a third party.

The harm resulting from intimate image abuse is the amalgam of each instance of distribution and the dissemination of media online is incredibly difficult to contain. For legislation to effectively adapt to the scope of the harm resulting from the digitisation, proliferation and storage of private information, there should be a high threshold of what constitutes ‘inviting’ or ‘manifesting a desire for’ publicity. Once intimate information is uploaded online, an individual may lose ‘the right to silence on past events in life that are no longer occurring.’[40] This is essentially a right to dignity and self-determination – the right to have information about yourself deleted or removed from public view is a fundamental identity interest. As advanced by Székely, the infallibility of the ‘total memory’ of the Internet contrasts with the limits of human memory.[41] An individual’s capacity to make choices, to take informed decisions and keep control over certain aspects of their life is progressively diminished as the content circulates. It is therefore critical that a statutory cause of action for privacy engages with the intersection between privacy and identity interests in the digital age.

The need to acknowledge context is also evident in 6–2(b), recommending the consideration of ‘the means used to obtain the private information or to intrude upon seclusion, including the use of any device or technology.’[42] This consideration should be irrelevant in contexts where private information was obtained by accessing a partner or ex-partner’s account through a password known from the relationship, auto-filled or saved on a device, pre-logged in, guessed or hacked. It is not uncommon to have shared knowledge of private information such as passwords in relationships, irrespective of whether a problematic power imbalance exists. However, a victim should not be prevented from bringing an action under the proposed tort for further wrongdoing aided by shared knowledge such as passwords or passcodes.

C The Scope of the Cause of Action

The ALRC’s proposed tort applies to two categories of serious invasions of privacy: intrusion upon seclusion or misuse of private information. The former refers to intentional intrusions into a person’s private physical space. The ALRC stated that ‘[w]atching, listening to and recording another person’s private activities are the clearest and most common examples of intrusion upon seclusion.’[43] The latter refers to cases in which there has been an unauthorised disclosure or wrongful obtaining of a person’s private information.

There is a wide scope for actions that could constitute intimate image abuse: the media may have been taken without the knowledge of the victim; the victim may have sent private nude or sexually explicit media to another, intended for exclusive viewing; personal information may be annexed to the media when leaked; the media may be leaked to a single online intermediary and disseminated across others through shares and instant messaging; the media may be leaked to individuals via email or text message. The tort’s utility as a tool to protect victims of intimate image abuse will therefore be severely limited if the definition of ‘private information’ does not provide for nude, partially nude and sexually explicit images of a person, whether or not such images were made by the subject or by another person and whether or not the knowledge or consent of the subject was obtained.

A cause of action for ‘unauthorised disclosure’ would be of critical significance for victims of intimate image abuse. The concept of ‘unauthorised disclosure’ has developed significantly in the United Kingdom, where there is a specific cause of action for the offence. In Campbell v MGN Ltd, Lord Hoffmann noted of the cause of action, ‘instead of the cause of action being based upon the duty of good faith applicable to confidential personal information and trade secrets alike, it focuses upon the protection of human autonomy and dignity – the right to control the dissemination of information about one’s private life and the right to the esteem and respect of people.’[44] The notion of informational autonomy is highly relevant to any consideration of the non-consensual sharing of intimate images. Informational autonomy is derived from the right to privacy, but specifically refers to the individual’s right to determine which information about themselves will be disclosed, to whom and for what purpose. Loss of this control, as is evident in intimate image abuse, has a profoundly negative effect on one’s subjectivity – that is, how one perceives oneself and is perceived in relation to the world.[45]

Advancements in digital public communications have allowed individuals to express themselves and disclose information, pictures and other media, frequently and unhesitatingly. This type of online expression ‘does not vanish but, on the contrary, remains continuously available to the public or to a certain part of the public long after it has been made.’[46] As noted, an individual’s right to self-determination is therefore denied when they lose control over how their selves are constructed and permanently archived online. As Langlois and Slane note, ‘with networked communication processes, shaming tactics become ways of exerting social power through sexual and informational violence.’[47] If the ALRC’s proposed tort effectively acknowledges the way in which emerging technologies have changed the nature and scope of privacy invasion as well as individual subjectivity, it will offer the necessary flexibility for victims of intimate image abuse to seek individualised justice.

D Fault Element

The ALRC recommended that the action should be confined to intentional or reckless invasions of privacy. It did not support an extension to conduct that is negligent. In justification of this proposal, the commission explained ‘it may be quite common to intend an action that will have the consequence of invading someone’s privacy, without intending to invade their privacy.’[48] If the tort does not extend to negligence, liability will arise only where the defendant has acted intentionally or recklessly. This has negative implications for the ALRC’s claim that intermediaries may be liable under the tort. Intermediary liability, if successfully established, would only be effective against Australian online intermediaries who intentionally engage in the non-consensual distribution of sexually explicit media, i.e. those who actively solicit the leak. The likelihood of an online service provider or general-purpose intermediary having the requisite mental element to satisfy this test is extremely low.

In response to the ALRC’s proposal, the Australian Privacy Foundation submitted that ‘there is no sound legal or policy basis for limiting the scope of the action to either intentional or reckless acts rather than incorporating negligent acts.’[49] Similarly, Dr Witzleb, in support of a cause of action capturing negligent invasions, noted ‘... there are cases where privacy invasions have been committed negligently but they still have very serious consequences and people that would be the victim of those privacy invasions would be without redress if the invasion depended on the defendant having acted with intentional recklessness.’[50] The reluctance of the ALRC to extend the fault element of the tort to negligent invasions reflects the common law’s hesitance to impose obligations on institutions or individuals to protect the rights of another against harm caused by third parties.[51] This notion is commonly expressed in the rule that there is no general duty to rescue.[52] In response to the fault element, NSW Young Lawyers suggested that fault standards should apply differently to corporations and governments than to individuals.[53] Two primary arguments were advanced in favour of the widening of the fault element. First, corporations and governments are more likely to have the resources to access sophisticated equipment and technology to obtain, maintain and circulate private material. Consequently, a breach, whether negligent or intentional, could have exponentially broader consequences for a victim. Second, corporations have greater capacity than individuals to implement additional safeguards designed to prevent negligent invasions of privacy.

Dr Witzleb advanced a similar view, noting that the protection offered by distinctions in fault ‘provides a deterrent against corporate carelessness, while dealing with the concern that imposing liability on individuals for simple lack of care may have undesirable consequences.’[54] The overarching proposition here is that if corporations have greater power, resources and, importantly, capacity to avoid negligent breaches of privacy, they have a correspondingly greater responsibility to avoid them. This view is reflective of an increasing push to ensure that the social environments we inhabit online reflect norms of socially acceptable conduct offline. Controversies over the responsibility of intermediaries to help enforce the law and uphold social norms derive from a need for networked spaces that are free from injustice, harassment and discrimination.

E Limitation Periods

The ALRC recommended that a person should not be able to bring an action under the new tort after the earlier of: ‘(a) one year from the date on which the plaintiff became aware of the invasion of privacy; or (b) three years from the date on which the invasion of privacy occurred.’[55] The proposal of a limitation period that runs from the date a plaintiff became aware of the invasion of privacy is in line with current laws allowing an extension by up to one year after an applicant becomes aware of ‘any of the material facts of a decisive character relating to the cause of action which were not within the means of knowledge of the applicant.’[56] This is unproblematic. The absolute expiration of the limitation period three years from the date on which the invasion of privacy occurred, however, could prove detrimental given the various barriers that victims of domestic, family and/or sexual violence can experience in reporting matters such as intimate image abuse.

The following WLS case study highlights this issue: ‘Annabelle[57] discovered a sex video that was non-consensually filmed of her on the internet in 2014. The film was posted in 2007. Annabelle sustained a serious psychological injury as a result and was unable to continue to work.’[58] Annabelle would not be able to claim under the proposed tort due to the time limit. The facts of this case study reinforce the misleading nature of the term ‘revenge porn.’ ‘Revenge’ implies that the perpetrator will always aim to induce psychological distress in the victim by notifying them that the content has been leaked. It is not reasonable to assume that a victim will always be made aware of the transgression. The Internet is a vast landscape of networks and it is conceivable that a victim would not find the relevant content if their personal information (e.g. name, phone number, address) was not included in the post and if they were not actively seeking it. The Victims Rights and Support Act 2013 (NSW) allows for an extension of time or the imposition of longer time limits for matters involving domestic violence and sexual assault.[59] The ALRC should adopt a similar policy, in acknowledgment that the circumstances in which intimate image abuse occurs are many and varied and applicants may not have been notified, or have reason to suspect, that the material has been posted.

F Conditional Liability: Safe-Harbour Schemes and Notice-and-Takedown Procedures

The ALRC recommended against the introduction of a safe harbour scheme, proposing that where an intermediary has knowledge of an invasion of privacy committed by third parties using their services, there is no justification to provide a complete exemption from liability.

However, the recommendation against a safe harbour scheme is perhaps immaterial. There are three primary reasons why intermediaries are unlikely to be liable under the proposed tort. First, the terms ‘invasion’, ‘intrusion’ and ‘misuse’ imply positive conduct on the part of the defendant. An intermediary’s failure to stop an invasion of privacy by a third party is merely an omission to act and will therefore be inadequate. Second, as discussed, the fault element is confined to intentional or reckless invasions. An intermediary does not acquire the intent of the third party using their service to enact an invasion of privacy. If an individual leaks intimate images to a social networking platform, the operators of that platform cannot be said to ‘intend’ to invade someone’s privacy, simply because they provide the medium through which the invasion has occurred. Third, the imposition of liability through a civil cause of action will fall short of covering major foreign intermediaries. Content hosts based in the US, for example, already benefit from safe harbour legislation, providing ‘immunity from almost all potential legal consequences that flow from the actions of their users, even for content that they encourage people to post and refuse to remove.’[60]

An applicant may establish that an Australian intermediary has intended an invasion of privacy, or been reckless, ‘if they know that their service has been used to invade someone’s privacy, and they are reasonably able to stop the invasion of privacy, but they choose not to do so.’[61] This principle is adapted from the UK case Byrne v Deane, in which the proprietors of a golf club were found liable for defamation after someone anonymously posted a defamatory poem on the wall of the club. The club knew the poem had been posted and could have taken it down, but did not.[62] The principle has since been applied to hold online intermediaries liable as publishers in defamation where ‘they have been given notice of defamatory matter present on their website but fail to remove it within a reasonable time.’[63]

This describes a legal notice-and-takedown complaints procedure, a form of conditional liability which has been widely critiqued. Notice-and-takedown procedures encourage self-censorship by placing the intermediary in a quasi-judicial position, responsible for evaluating the legality of content.[64] The model often lacks elements of due process, such as the opportunity to appeal a takedown request. Intermediaries are therefore incentivised to remove content immediately upon receiving a notice, rather than expend resources investigating the validity of the request or risk a lawsuit. For these reasons, a notice-and-takedown scheme is a necessarily reactive policy which is susceptible to abuse. The burden is placed upon the victim to locate the source(s) of distribution, because few intermediaries are willing to expend the resources required to proactively monitor, edit or control the content submitted by their users. It confronts ‘the symptom of the problem, not its cause’[65] and does not actively stop the non-consensual distribution of intimate images from occurring in the first or subsequent instances.

IV CO-REGULATION BETWEEN ONLINE INTERMEDIARIES AND GOVERNMENT AGENCIES

Under the common law, an individual will generally only be responsible for harm where his or her actions caused the harm and where that person might have acted to avoid the harm, but did not. As Denton has noted, ‘the common law of private obligations does not impose affirmative duties simply on the basis of one party’s need and another’s capacity to fulfil that need.’[66] To require an intermediary to help a victim seeking redress for an invasion of privacy, purely because it has the capacity to do so, would deviate from the common law’s emphasis on personal responsibility and reject the principle that ‘the burden to repair is proportional to the defendant’s responsible role in the occurrence of the harm.’[67]

Laws are not the only, or necessarily the most effective, source of online content restriction. In some jurisdictions, systems to set and enforce rules for online expression are designed in collaboration between public and private authorities. ‘Self-regulation’ describes action taken by individual companies, ranging from ‘measures taken by the company to block or remove spam and viruses, to the setting and enforcement of ‘terms of service’, which are rules that users must agree to abide by in order to use the service.’[68] For example, the Terms of Service of Facebook,[69] Twitter[70] and Reddit,[71] and Google’s revenge porn removal policy,[72] now expressly prohibit the leak of non-consensual pornography on those sites, relying on a system of user-based reporting for enforcement. ‘Co-regulation’ describes private regulation that is actively encouraged or supported by the state through legislation, funding or institutional participation. Regulatory approaches present significant benefits over intermediary liability schemes. Intermediary liability provisions codify government expectations for how an intermediary must handle ‘third party’ content, communications or behaviour. Currently in Australia, these expectations are too ill-defined to be an appropriate or proportionate response to the harm which may result from intimate images leaked by a third party. There is a risk that this might encourage Australian service providers or general purpose intermediaries to seek more favourable rules in other jurisdictions, such as those which benefit from safe harbour legislation. There is also a concern that if liability is conditioned upon knowledge, as the ALRC recommended for the proposed tort, there will be a strong incentive for intermediaries to remove content upon receiving an allegation without investigating its validity. Such a system would be susceptible to abuse by applicants and likely to over-block legitimate speech.

It has been argued that a co-regulatory approach, on the other hand, is ‘less likely to expose intermediaries to unmanageable legal risks... more likely to help secure compliance from foreign firms and less likely to inhibit investment in digital services.’[73] A notice-and-takedown scheme, for example, would be significantly improved if an administrative body with specialised knowledge liaised directly with intermediaries to investigate the validity of complaints and provide advice. An administrative body would operate in a similarly quasi-judicial capacity, but would relieve private platforms of the obligation to contemplate public interest concerns when making decisions about content restriction and removal.

It remains to be seen how effective a co-regulatory approach could be for victims of intimate image abuse in Australia. On the whole, however, co-regulatory experiments provide an important incentive for the operators of private networks to proactively self-regulate by implementing systems to review and prohibit abusive material.

V CONCLUSION

The non-consensual distribution of sexually explicit media is a form of both sexual and informational violence. Any recommendation for a legal remedy must acknowledge the diminishing extent to which individuals can control the image of themselves that circulates online. The ability to control how we are perceived by others has always been one of the key promises of the online world, from social media to personal blogs and cyber culture. However, our personal information is routinely collected and collated without our control, whether as ambient information derived from user behaviour, information given by other users (e.g. a tag or picture) or by private entities (e.g. a credit rating company), or private information leaked without our consent. There is consequently a strong need to balance the free dissemination of information with the personal right to self-determination.

The ALRC has claimed that their proposed statutory cause of action for serious invasions of privacy is adaptable to emerging technology and unforeseen examples of privacy invasion. However, it has failed to tailor the remedy to the wrongdoing by restricting online intermediary liability and centring punitive focus on the original perpetrator. The pervasive abuse in online networks, particularly that which is directed at women or minority groups, is a growing problem for internet governance. Co-regulation between administrative bodies with specialist knowledge and online intermediaries is a productive means to address the source of this harm. Providers should be encouraged to respond to harms perpetrated through their networks, and, in turn, work towards incubating a culture of consent.

BIBLIOGRAPHY

A Articles/Books/Reports

Australian Law Reform Commission, Serious Invasions of Privacy in the Digital Era, Report No 123 (2014) 59

Bar, Allon, Elonnai Hickok, Hae-in Lim, Rebecca MacKinnon, Fostering Freedom Online: The Role of Internet Intermediaries (A report prepared for UNESCO’s Division for Freedom of Expression and Media Development, 2014)

Calvert, Clay, ‘Revenge Porn and Freedom of Expression: Legislative Pushback to an Online Weapon of Emotional and Reputational Destruction’ (2014) 24 Fordham Intellectual Property, Media & Entertainment Law Journal 673

De Terwangne, Cécile, ‘The Right to be Forgotten and Informational Autonomy in the Digital Environment’ in Alessia Ghezzi, Angela Guimaraes Pereira and Lucia Vesnic-Alujevic (eds), The Ethics of Memory in a Digital Age: Interrogating the Right to be Forgotten (Palgrave MacMillan, 2014) 89

Denton, Frank E ‘The Case Against a Duty to Rescue’ (1991) 4(1) Canadian Journal of Law and Jurisprudence 101

Domestic Violence Legal Services and the North Australian Aboriginal Justice Agency, Serious Invasions of Privacy in the Digital Era (Submission, May 2014)

Henry, Nicola, Anastasia Powell, ‘Beyond the “Sext”: Technology-Facilitated Sexual Violence and Harassment Against Adult Women’ (2015) 48 Australian and New Zealand Journal of Criminology 104

Langlois, Ganaele, Andrea Slane, ‘Economies of Reputation: The Case of Revenge Porn’ (2017) 14 Communication and Critical/Cultural Studies 120

Larkin, Paul J Jr, ‘Revenge Porn, State Law, and Free Speech’ (2014) 48 Loyola of Los Angeles Law Review 57

Mason, Justice Keith, ‘Fault, Causation and Responsibility: Is Tort Law Just an Instrument of Corrective Justice?’ (2000) 19 Australian Bar Review 201

Milosevic, Tijana, ‘Social Media Companies’ Cyberbullying Policies’ (2016) 10 International Journal of Communication 5164

Pappalardo, Kylie, Nicolas Suzor, ‘The Liability of Australian Online Intermediaries’ [2018] SydLawRw 19; (2018) 40(4) Sydney Law Review 469

Ronson, Jon, So You’ve Been Publicly Shamed (Riverhead Books, 2015)

Rubinstein, Ira, ‘The Future of Self-Regulation is Co-Regulation’ in The Cambridge Handbook of Consumer Privacy (Cambridge University Press, 2018)

Seignior, Bryony, Jennifer Singleton, Nicolas Suzor, ‘Non-Consensual Porn and the Responsibilities of Online Intermediaries’ [2017] MelbULawRw 16; (2017) 40(3) Melbourne University Law Review 1057

Standing Committee on Law and Justice, Remedies for the Serious Invasion of Privacy in New South Wales (Report, March 2016)

Women’s Legal Services NSW, Serious Invasions of Privacy in the Digital Era Discussion Paper (Report, May 2014)

B Cases

Byrne v Deane [1937] 1 KB 818

Campbell v MGN Ltd [2004] UKHL 22; [2004] 2 AC 457

Roadshow Films Pty Ltd v iiNet Ltd [2012] HCA 16

University of New South Wales v Moorhouse [1975] HCA 26

Victoria Park Racing & Recreation Grounds Co Ltd v Taylor [1937] HCA 45; (1937) 58 CLR 479

C Legislation

Copyright Act 1968 (Cth)

Criminal Code Act 1995 (Cth)

Defamation Act 2005 (NSW)

Victims Rights and Support Act 2013 (NSW)

D Other

Facebook, Community Standards (2019) <https://www.facebook.com/communitystandards>

Google, Remove Revenge Porn from Google (2019) <https://support.google.com/websearch/answer/6302812?hl=en>

Reddit, Reddit Content Policy (2019) <https://www.reddit.com/help/contentpolicy/>

Twitter, The Twitter Rules (2019) <https://support.twitter.com/articles/18311>


[1] Australian Law Reform Commission, For Your Information: Australian Privacy Law and Practice (Report No 108, 2008) vol 1, 88–90; Australian Law Reform Commission, Serious Invasions of Privacy in the Digital Era (Report No 123, 2014) 59–62 (‘ALRC Serious Invasions of Privacy Report’).

[2] Ibid 208 [11.103].

[3] N Suzor, B Seignior and J Singleton, ‘Non-consensual Porn and the Responsibilities of Online Intermediaries’ [2017] MelbULawRw 16; (2017) 40(3) Melbourne University Law Review 1057, 3.

[4] Thomas Fox-Brewster, ‘Stealing Nude Pics from iCloud Requires Zero Hacking Skills: Just Some Youtube Guides’, Forbes (online, 16 March 2016) <https://www.forbes.com/sites/thomasbrewster/2016/03/16/icloud-hacking-jennifer-lawrence-fappening-apple-nude-photo-leaks/#20318bab75b3>.

[5] Rashid Razaq, ‘Snapchat Hackers Release 100,000 Videos and Photographs Including Explicit Images of Children’, Evening Standard (online, 13 October 2014) <www.standard.co.uk/news/techandgadgets/snapchat-hackers/release-100000-videos-and-photographs-including-explicit-images-of-children-9790598.html>.

[6] Enhancing Online Safety (Non-consensual Sharing of Intimate Images) Bill 2017.

[7] N Henry, A Powell and A Flynn, ‘Not Just Revenge Pornography: Australians’ Experiences of Image-Based Abuse: A Summary Report’ (RMIT University, May 2017) 3–4.

[8] Ibid.

[9] Ganaele Langlois and Andrea Slane, ‘Economies of Reputation: The Case of Revenge Porn’ (2017) 14 Communication and Critical/Cultural Studies 120, 124.

[10] Paul J Larkin Jr, ‘Revenge Porn, State Law, and Free Speech’ (2014) 48 Loyola of Los Angeles Law Review 57, 65.

[11] Annmarie Chiarini, ‘I Was a Victim of Revenge Porn. I Don’t Want Anyone Else to Face This’, The Guardian (online, 20 March 2019) <https://www.theguardian.com/commentisfree/2013/nov/19/revenge-porn-victim-maryland-law-change>.

[12] Larkin (n 10) 65.

[13] Langlois and Slane (n 9) 121.

[14] Jon Ronson, So You’ve Been Publicly Shamed (Riverhead Books, 2015).

[15] Suzor, Seignior and Singleton (n 3) 10.

[16] Victoria Park Racing & Recreation Grounds Co Ltd v Taylor [1937] HCA 45; (1937) 58 CLR 479.

[17] ALRC Serious Invasions of Privacy Report (n 1) 1.

[18] Suzor, Seignior and Singleton (n 3) 25.

[19] Frank E Denton, ‘The Case Against a Duty to Rescue’ (1991) 4(1) Canadian Journal of Law and Jurisprudence 101, 109.

[20] Roadshow Films Pty Ltd v iiNet Ltd [2012] HCA 16.

[21] Wheeler v Federal Capital Press of Australia Ltd (1984) Aust Torts Reports 80-640.

[22] Byrne v Deane [1937] 1 KB 818, 837.

[23] Civil Law (Wrongs) Act 2002 (ACT) s 136; Defamation Act 2005 (NSW) s 26; Defamation Act 2006 (NT) s 23; Defamation Act 2005 (Qld) s 26; Defamation Act 2005 (SA) s 24; Defamation Act 2005 (Tas) s 26; Defamation Act 2005 (Vic) s 26; Defamation Act 2005 (WA) s 26.

[24] Copyright Act 1968 (Cth) s 13(2).

[25] University of New South Wales v Moorhouse [1975] HCA 26.

[26] ALRC Serious Invasions of Privacy Report (n 1).

[27] Standing Committee on Law and Justice, Remedies for the Serious Invasion of Privacy in New South Wales (Report, March 2016) [4.13].

[28] Suzor, Seignior and Singleton (n 3) 21.

[29] ALRC Serious Invasions of Privacy Report (n 1) 208 [11.103].

[30] Suzor, Seignior and Singleton (n 3) 21.

[31] ALRC Serious Invasions of Privacy Report (n 1) 251 [12.147].

[32] Suzor, Seignior and Singleton (n 3) 22.

[33] ALRC Serious Invasions of Privacy Report (n 1) 39 [2.47].

[34] Ibid.

[35] Domestic Violence Legal Services and the North Australian Aboriginal Justice Agency, Serious Invasions of Privacy in the Digital Era (Submission, May 2014) 6.

[36] Criminal Code Act 1995 (Cth) s 474.17.

[37] Domestic Violence Legal Services and the North Australian Aboriginal Justice Agency, Serious Invasions of Privacy in the Digital Era (Submission, May 2014) 6.

[38] ALRC Serious Invasions of Privacy Report (n 1) 96.

[39] ALRC Serious Invasions of Privacy Report (n 1) 99 [6.39].

[40] G Pino, ‘The Right to Personal Identity in Italian Private Law: Constitutional Interpretation and Judge-Made Rights’ in M. Van Hoecke and F Ost (eds), The Harmonization of Private Law in Europe (Oxford: Hart Publishing, 2000) 237.

[41] I Székely, ‘The Right to Forget, the Right to be Forgotten, Personal Reflections on the Fate of Personal Data in the Information Society’ in S Gutwirth et al (eds), European Data Protection: In Good Health? (Dordrecht: Springer, 2012) 347.

[42] ALRC Serious Invasions of Privacy Report (n 1) 96.

[43] Ibid 76.

[44] Campbell v MGN Ltd [2004] UKHL 22; [2004] 2 AC 457, [51].

[45] Langlois and Slane (n 9) 120.

[46] Cécile de Terwangne, ‘The Right to be Forgotten and Informational Autonomy in the Digital Environment’ in Alessia Ghezzi, Angela Guimaraes Pereira and Lucia Vesnic-Alujevic (eds), The Ethics of Memory in a Digital Age: Interrogating the Right to be Forgotten (Palgrave MacMillan, 2014) 89.

[47] Langlois and Slane (n 9) 121.

[48] ALRC Serious Invasions of Privacy Report (n 1) 115.

[49] Standing Committee on Law and Justice, Remedies for the Serious Invasion of Privacy in New South Wales (Report, March 2016) 61.

[50] Ibid.

[51] Kylie Pappalardo, Nicolas Suzor, ‘The Liability of Australian Online Intermediaries’ [2018] SydLawRw 19; (2018) 40(4) Sydney Law Review 469, 477.

[52] Denton (n 19) 101.

[53] Standing Committee on Law and Justice, Remedies for the Serious Invasion of Privacy in New South Wales (Report, March 2016) 63.

[54] Ibid.

[55] ALRC Serious Invasions of Privacy Report (n 1) 11.

[56] Limitations Act 1969 (NSW) s 58(2).

[57] Based on the experience of clients but not their real names.

[58] Women’s Legal Services NSW, Serious Invasions of Privacy in the Digital Era Discussion Paper (Report, May 2014) 11.

[59] Victims Rights and Support Act 2013 (NSW) ss 40(5), (7).

[60] Suzor, Seignior and Singleton (n 3) 23.

[61] ALRC Serious Invasions of Privacy Report (n 1) 208.

[62] Byrne v Deane [1937] 1 KB 818.

[63] ALRC Serious Invasions of Privacy Report (n 1) 208.

[64] Wendy Seltzer, ‘Free Speech Unmoored in Copyright’s Safe Harbor: Chilling Effects of the DMCA on the First Amendment’ (2010) 24(1) Harvard Journal of Law & Technology 171.

[65] Suzor, Seignior and Singleton (n 3) 38.

[66] Denton (n 19) 101.

[67] Justice Keith Mason, ‘Fault, Causation and Responsibility: Is Tort Law Just an Instrument of Corrective Justice?’ (2000) 19 Australian Bar Review 201, 207–8.

[68] Rebecca MacKinnon et al, Fostering Freedom Online: The Role of Internet Intermediaries (Report, 2014) 55.

[69] Facebook, Community Standards (2019) <https://www.facebook.com/communitystandards>.

[70] Twitter, The Twitter Rules (2019) <https://support.twitter.com/articles/18311>.

[71] Reddit, Reddit Content Policy (2019) <https://www.reddit.com/help/contentpolicy/>.

[72] Google, Remove Revenge Porn from Google (2019) <https://support.google.com/websearch/answer/6302812?hl=en>.

[73] Suzor, Seignior and Singleton (n 3) 31.
