Privacy Law and Policy Reporter (PLPR)

Waters, Nigel; Greenleaf, Graham --- "IPPs examined: The security principle" [2004] PrivLawPRpr 36; (2004) 11(3) Privacy Law and Policy Reporter 67

IPPs examined: The security principle

Nigel Waters and Graham Greenleaf

This is the first in a series which will examine the interpretations of similar information privacy principles (IPPs) by Courts, privacy and information Commissioners and other sources of interpretation, with an emphasis on legislation in Australian jurisdictions and caselaw from Asia-Pacific jurisdictions. (General Editor)

All privacy laws contain a security principle. There is clearly no point in having detailed rules about how personal information can be used and disclosed unless there is also an obligation to prevent unauthorised access. But the security obligation in privacy laws is also designed to protect against two other categories of risk: unauthorised use by authorised personnel, and loss or corruption of data.

For most individuals, damage or inconvenience from loss or corruption of data is probably more likely than unauthorised access or use, although the consequences of the latter could be more severe.


The security principles in Australasian privacy laws are all very similar in effect, though there are superficial differences. The first – IPP 4 in the federal Privacy Act 1988 – has been the model for all the others. It reads:

“A record-keeper ... shall ensure that the record is protected, by such security safeguards as it is reasonable in the circumstances to take, against loss, against unauthorised access, use, modification or disclosure, and against other misuse ...” (PA[1] s.14).

The NSW and NZ laws contain an almost identical principle (NSW PPIPA[2] s.12(c) – IPP 5; NZ PA[3] s.6 – IPP 5(a)).

The principle is simplified in the private sector NPPs, introduced into the federal Privacy Act in 2000:

“An organisation must take reasonable steps to protect the personal information it holds from misuse and loss and from unauthorised access, modification or disclosure.” (PA – NPP 4.1)

This formulation is also used in the Victorian Information Privacy Act 2000 (Vic IPA[4] IPP 4.1), and in the Northern Territory Information Act (NT IA[5] IPP 4.1).

These security principles do not explicitly include as security breaches actions which are authorised by the record controller but still improper (for example, alteration of a person’s record to frustrate an investigation). They only explicitly provide protection against ‘access, use, modification or disclosure’ where it is unauthorised. However, both formulations include protection against ‘misuse’ or ‘other misuse’ without an express qualification that this can only occur through unauthorised acts. It can be argued that these words encompass authorised but improper access, use, modification or disclosure, because it is otherwise difficult to give them any effect. The principles do not say that security breaches must be ‘by someone else’. The alternative view, that the security principle only covers breaches ‘by someone else’, would provide a neater demarcation between the security principle and other IPPs. However, it is difficult to sustain this view because the references to ‘loss’ include destruction of personal information by the record-keeper or organisation itself. Privacy Commissioners have also taken the view that security must protect against those who have authorised access[6].

‘Reasonable steps’ – sources of interpretation

A common feature of all the security principles in privacy laws is the qualification that the obligation is only to take ‘reasonable’ or ‘reasonably practicable’ steps – either expressly or implicitly related to the particular circumstances.

The guidance material issued by regulators offers advice on how to assess the ‘reasonable’ or ‘practicable’ level of security. The Federal and Victorian Privacy Commissioners’ Guidelines[7] emphasise the need for a risk assessment. So too do the NSW government security guidelines, which also suggest a ‘baseline’ level of precautions, with extra measures to deal with particular risks[8]. The federal Privacy Commissioner suggests that relevant factors in assessing risk include:

• the sensitivity of the personal information;

• the likely harm that could result from a breach;

• the medium of storage; and

• the size of the organisation (larger organisations tending to need greater security).

Hong Kong is the only jurisdiction to include some of these factors in the text of the security principle in its law[9].

Organisations are understandably uneasy about such apparently subjective obligations and general advice. They will look ultimately to decisions of tribunals and courts for the standards required in different circumstances.

There are now a handful of decisions available which throw some light on what security measures might be held to be necessary. Examples of specific compliance measures considered by the regulators to be appropriate can also be found in the reports of conciliated cases published by some Privacy Commissioners, and in the reports of special investigations and audits conducted by those Commissioners who have those functions[10]. These are considered in the rest of this article.

Security is multi-faceted

The Australian Federal Privacy Commissioner makes a useful distinction between four different areas of security[11]: physical security; computer and network security; communications security; and personnel security. Organisations need to pay attention to all four of these areas to meet their obligations under security privacy principles. It is self-evident that any security system is only as effective as its weakest component.

Another dimension to be considered is the storage medium – similar personal information is often stored within an organisation on paper, in central computer databases and on individual employees’ workstations (including in email) – all of these need to be secured to an appropriate standard that avoids any ‘weak links’.

Security obligations are not absolute

No precautions can ever guarantee 100% security. There will always be clever individuals who can circumvent even the most elaborate security measures – whether in the physical or computer environments.

Nonetheless, organisations subject to privacy principles will be expected to have taken reasonable steps to secure personal information against ingenious unauthorised entry – whether to premises (breaking and entering) or to computer systems (hacking) – unless it could not have been reasonably anticipated. There are of course many other reasons, aside from privacy protection, why organisations put security precautions in place in relation to information. These include confidentiality of commercial matters and of government decision making processes, the need to ensure integrity of information for operational reasons, and concerns about physical security. The ‘reasonable’ security standard required by IPPs is the security necessary to protect personal information. The protection of commercial secrets or national security may justify higher security standards, but these would not seem to be the correct standards against which to judge whether an IPP has been breached.

Security objectives, whether for privacy protection or other reasons, are in constant tension with demands for accountability as expressed in Freedom of Information laws and corporate disclosure requirements. There are also clear tensions between convenience and security. User demands for ease and speed of access to information (including a person’s rights of access to their own record) are not easily reconciled with security. The standard of what is ‘reasonable’ security must not be so strict as to be inconsistent with these other objectives being achieved.

The role of security standards

The dominance of other objectives has also led to much of the computer software currently in use being, in the view of many experts, fundamentally flawed from a security perspective.[12]

Despite these tensions, the other reasons for taking security measures have led to a major industry, well established long before privacy protection was added to the list of justifications. Because of the existence of this established expertise, privacy regulators have often deferred to general standards and guidelines on security. The Australian Federal Privacy Commissioner’s Information Sheet: Security (2001) includes a list of national and international security standards, as does the Victorian Privacy Commissioner’s Guidelines to the IPPs Part Two, August 2002, and the three-part NSW Information Security Guidelines[13]. For any profession or activity where such well-established security standards exist, Courts are likely to interpret what constitutes ‘reasonable’ steps in IPPs in light of such standards.

While the mass of security guidance available is potentially very valuable if used selectively, there is a risk in deferring entirely to established security industry standards. This is because many of them focus on only two of the three categories of risk – ‘unauthorised access’ and ‘loss and corruption’. Traditional organisational security pays little attention to preventing or deterring ‘unauthorised use by authorised personnel’ – an internal threat. It is often assumed that if someone is entitled to access to information, what they do with the information is not a matter for physical or logical (computer) security. As noted above, all of the security IPPs are potentially broad enough to cover actions by ‘authorised’ persons as security breaches.

The OECD’s security Guidelines[14], to which Australia states it adheres, are also relevant to interpreting security IPPs.

Access control minimum standards

While the appropriate level of security will of course depend on the risk, there are some minimum standards that should be obvious. Reasonable physical access controls will include door locks, with appropriate key management. Reasonable computer security should as a minimum include username and password/PIN controls for access to personal information. While it can be difficult to stop individuals using ‘obvious’ passwords or PINs, organisations could be held liable for making this too easy – many systems now require passwords/PINs to be of a minimum length and to contain prescribed features such as a mixture of alpha and numeric characters. Codes or numbers which are commonly known to third parties should not be used as passwords or PINs.[15] There must also be reasonable controls to stop third parties finding out a customer’s password or PIN. The Privacy Commissioner of Canada found that a telco had breached the security principle by allowing the PIN for a calling card to be retrieved by a last number recall function[16].
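By way of illustration only, the kind of minimum password discipline described above – a minimum length, a mixture of alphabetic and numeric characters, and a bar on commonly known codes – can be expressed as a simple check. The particular length threshold and the list of blocked codes below are assumptions made for this example, not standards drawn from any of the laws or cases cited.

```python
# Illustrative only: a minimal password/PIN policy check of the kind
# suggested above. The length threshold, the character-mix rule and the
# list of commonly known codes are assumptions for this example, not
# standards drawn from any of the laws or cases cited in this article.
COMMONLY_KNOWN_CODES = {"1234", "0000", "password", "123456"}

def is_acceptable_password(candidate: str, min_length: int = 8) -> bool:
    """Apply the sketch policy: minimum length, a mixture of alphabetic
    and numeric characters, and no commonly known code."""
    if len(candidate) < min_length:
        return False
    if candidate.lower() in COMMONLY_KNOWN_CODES:
        return False
    has_alpha = any(c.isalpha() for c in candidate)
    has_digit = any(c.isdigit() for c in candidate)
    return has_alpha and has_digit
```

Any real system would of course enforce such a policy at the point where passwords are set or changed, alongside controls on how they are stored and reset.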

The role of logging and audit trails

Physical security and logical access controls, such as username/password combinations, cannot control what use someone makes of information to which they are entitled. However, systems design features such as a requirement to record reasons for access, together with access logs or audit trails, are an important tool in deterring inappropriate uses. If users know that their access to information is recorded, and that they can be held accountable, then they are less likely to make unauthorised use of personal information.[17] [18]

In E v Financial Institution [2003] PrivComrA 3, the Australian federal Privacy Commissioner found that the audit trail maintained by the respondent only recorded financial transactions, and not access to customers’ account information that did not involve a transaction. The Commissioner concluded that as a result, the respondent “could provide only limited assurance that the information was protected from unauthorised access, misuse or disclosure.” The financial institution in question “agreed to establish an enquiry audit trail on the mainframe computer where customer information is stored so that staff accesses to customers’ personal information would be recorded regardless of whether a transaction is made on the account.”
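In outline, an ‘enquiry audit trail’ of this kind records every staff access to customer information, whether or not a transaction follows. The sketch below is purely illustrative – the field names and record layout are assumptions for the example, not details of the system agreed in that case.

```python
import datetime

# A sketch only: an 'enquiry audit trail' records every staff access to
# customer information, whether or not a transaction follows. The field
# names and record layout are illustrative assumptions, not details of
# any system agreed in the cases discussed in this article.
ACCESS_LOG: list[dict] = []

def read_customer_record(records: dict, customer_id: str,
                         staff_id: str, reason: str) -> dict:
    """Return the requested record, logging who accessed it, why, and when."""
    ACCESS_LOG.append({
        "customer_id": customer_id,
        "staff_id": staff_id,
        "reason": reason,
        "timestamp": datetime.datetime.now().isoformat(),
    })
    return records[customer_id]
```

The deterrent value lies in the fact that every read leaves a trace, so staff can be held accountable for enquiries as well as transactions.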

Organisations will of course want to know if cost considerations will be taken into account. In FH v NSW Department of Corrective Services [2003] NSWADT 72, when considering what were ‘reasonable steps’, the Tribunal was equivocal as to whether the estimated high cost of ‘retro-fitting’ a logging facility on the Department’s computer systems was a defence against an allegation of inadequate security, in breach of PPIPA s.12(c) – IPP 5. Despite finding that “the absence of arrangements to keep a record (a log) of who inside the administration is using the records, when and what for purpose” was a “significant continuing problem”, the Tribunal appears to have accepted the respondent’s submission that installing such a facility would be prohibitively expensive. Observing that the extent to which any shortcomings need to be addressed depends on both the risk of intrusion and the gravity of the consequences of intrusion, the Tribunal found: “There is no basis for concluding that any further action should be taken at present by the Department to meet the applicant’s concerns.”

This is a particularly disappointing decision in that the Tribunal made no effort to test the respondent’s assertions about the difficulty and cost of installing a logging facility, and does not appear to have made any comparison with the practice in other government agencies or private organisations. While it is understandable that there must be a practical limit on the amount an organisation can be expected to pay for security, it cannot be satisfactory to leave the decision entirely to the organisation, without any reference to contemporary standards.

Human security – training and enforcement

As well as logs and audit trails, the other main security measures that are effective against internal misuse fall into the category of personnel security, which encompasses both preventive measures, such as appropriate (but not excessive) pre-employment vetting and training, and enforcement.

Despite considerable education of users about confidentiality requirements and privacy laws, there continue to be abuses of access privileges. Since the early 1990s there has been a steady stream of reported cases (often concerning breaches of ‘computer crime’ laws[19]) where public servants have used information to which they had legitimate access for unauthorised purposes. In government departments such as the Tax Office and Centrelink, where privacy laws are backed up by statutory secrecy provisions with criminal penalties, errant staff have been disciplined and in some cases prosecuted. Less satisfactory has been the response of Australian police services to repeated instances of misuse by police officers and civilian employees – disciplinary action often seems to have been restricted to mild cautions, sending the wrong message about the gravity of the breaches.

The importance of training and internal communication of security measures was well illustrated by a case conciliated in 2003 by the Victorian Privacy Commissioner[20]. The complainant’s new address was disclosed by an agency employee ‘across the counter’ despite corporate knowledge that the individual was at risk and had specifically requested that her new address be kept confidential. Indeed, a separate request for the information on the same day by the same third person, presumably by more formal channels, had been correctly refused in accordance with the organisation’s policy. This case highlights the problem of ‘weak links’ – in this instance an individual employee who was clearly not aware of the correct processes to ensure appropriate security. The outcome – a payment of $25,000 in compensation as well as a commitment to review procedures and communications – demonstrates again the potentially serious consequences of security breaches.

A similar reminder has been given by the NZ Complaints Review Tribunal in two cases involving unauthorised disclosure by a police officer.[21] In the absence of any evidence given by the Police service as to relevant security measures, the Tribunal found in both cases a prima facie breach of the security principle, ordering compensation of $10,000 in one case, while in the other there was an insufficient level of damage to amount to an interference with privacy.[22]

Organisations obviously cannot be expected to guarantee compliance with instructions given to staff – individual employees will occasionally act wilfully and recklessly in contravention of clear instructions. This may result in the organisation being vicariously liable for the breach of another IPP by its staff member, but would not seem to be a breach of the security principle. Where this happens, however, organisations could be expected to reinforce training and, where appropriate, to take disciplinary action in order to maintain a reasonable system of security.[23]

The issue of pre-employment screening or vetting involves a balance between protecting the privacy of ‘customers’ on the one hand, and not unduly intruding into the privacy of prospective employees on the other. In a health information case, the NZ Commissioner considered the normal practice of checking a medical practitioner’s references, annual practising certificate and registration status to be ‘reasonable’ and therefore found no breach of the security rule of the Health Information Privacy Code.[24] Similarly the Privacy Commissioner of Canada found[25] that a nuclear power company was not acting unreasonably in requiring employees to consent to a security check (whether such a requirement would qualify as free and informed consent under the different laws is another issue).

Relationship between security and disclosure

Security breaches are often alleged as incidental to particular disclosures about which an individual complains. It will often be claimed that if a disclosure (or use) is found to be unauthorised or otherwise in breach of a use and/or disclosure principle, then it follows that there must have been a security breach as well. That this does not automatically follow is clear from the ‘reasonable steps’ qualification to the principles. No-one expects security to be absolute – even the best precautions are likely to be vulnerable to both human error and deliberate circumvention. Computer security is known to be a constant battleground between the clever hackers/crackers on the one hand and the security experts (often reformed hackers) on the other.

The prospect of inappropriate disclosures not necessarily involving a security breach is illustrated by AAB Appeal 4/00, in which the Hong Kong Administrative Appeals Board dismissed a complaint that newspaper publication of the complainant’s address, endangering him, was a breach of the security principle in the Hong Kong Ordinance[26]. It considered that only the disclosure principle was at issue.

In contrast, the only formal determination by the Australian federal Privacy Commissioner to deal with the security principle found a breach of IPP4 apparently automatically as a result of an unauthorised disclosure of details of an Army discharge.[27] No other reason is given for the finding, which was not contested[28]. The case did however highlight, relatively early in the operation of the federal Act, the potential for damage to result from inadequate security – the complainant was sacked by his new employer as a direct result of the disclosure. The Commissioner awarded compensation of $5000 – half for lost earnings and half for embarrassment.

We suggest that a disclosure will only involve a breach of the security principle if it could have been prevented had better security procedures been in place. The consequences of the disclosure will then be consequences of the associated security breach, and may result in compensation such as in the above example.

Breaches of the security principle by an organisation may also involve a breach of computer crime laws or similar crimes by the person whose actions have demonstrated the security weaknesses. A hacker may have breached computer crime laws (and be of inadequate means for a claim for compensation), but the organisation that has been hacked may have breached the security principle and will be a much better target for a compensation claim.

Another important aspect of the relationship between the security and disclosure principles is that, while organisations can eliminate security liability by taking reasonable steps, when a breach does occur which results in disclosure it seems at first sight that the disclosure principle (eg NPP 2) imposes an absolute liability despite reasonable security procedures. Usually, this will be the case where unauthorised disclosure occurs, and can be justified on the grounds that the organisation is better able to bear the loss than the individual. In other words, no matter what steps organisations take to improve security, they cannot remove disclosure liability.

However, there is one gloss on this, in that the disclosure principle only applies when it is the organisation that discloses the information. Usually, where this happens there will also be a breach of the security principle, but in rare cases this could occur despite normally adequate security (eg if a completely unknown technical flaw in software causes an organisation to publish customer information on its website). In such cases it is the organisation that has published the information and is liable.

However, in the case of hackers extracting information from a site, it is hard to see that it is the organisation that is ‘disclosing’ the information. If it takes a wilful criminal breach of normally reasonable security then perhaps the customer will have to bear the loss. This will also be a rare event, because hacking will normally exploit an inadequacy in security.

The position will sometimes be different in New Zealand, because s126(4) NZ PA provides that employers will not be liable for breaches by employees where they took ‘such steps as were reasonably practicable to prevent the employee from doing that act’.

Careless disclosure

One of the most common security breaches is carelessness in delivering personal information. Examples of careless practice that have been highlighted in reported cases include:

• Failure to seal envelopes containing sensitive information, so that intermediaries (couriers, neighbours, other family members) are able to access and read the contents[29].

• Putting material about one person in envelopes addressed to another person[30].

• Faxing personal information either to the wrong fax machine[31], or to machines in common areas without taking steps to ensure the intended recipient is on hand to collect the pages[32].

• Printing of sensitive personal information on envelopes[33], or on correspondence visible through envelope windows. (Note however that appropriate use of window envelopes has been recommended by the NZ Commissioner as a security precaution.[34])

It is, however, not unreasonable for organisations to rely on postal services, even though they are not faultless and incorrect delivery can sometimes lead to unauthorised disclosure. The Privacy Commissioner of Canada found that a bank’s reliance on first class mail for despatch of credit cards was not unreasonable – the complainant had felt that the bank should have used registered mail, but the Commissioner disagreed[35]. A New Zealand case suggests that even wrongly addressed mail need not necessarily imply a failure of security.[36]

Another common security breach arises when documents containing personal information are accidentally mislaid or disposed of insecurely. Personal information often resides on computers which are lost or stolen – in 1995 sensitive personal information was contained on the hard drives stolen from the ACT Department of Education and Training. The Privacy Commissioner’s investigation concluded that while there was no evidence of anyone having accessed the information (the thieves were more likely to be interested in the value of the hardware), there had been a number of security failures. His report recommended improved building and computer security, a review of the need to keep sensitive information on local hard drives, and enhanced staff training.[37]

Careless disclosure can also arise from:

• failure to delete the details of third party individuals from documents provided under Freedom of Information legislation (this can and does apply to any release of information);

• use of ‘real’ personal information in training or in publications – such as when illustrating a point with a case study, or in providing ‘test’ databases for training or demonstrations[38];

• failure to ensure security for personal information ‘out of office’ or ‘out of hours’ – the Hong Kong Privacy Commissioner has served an enforcement notice[39] on a bank to implement appropriate policies and practices[40]; and

• failure to provide reasonably confidential facilities for discussion with clients[41].

Obligations when contracting services

Another recommendation of general application in the Australian Federal Commissioner’s report into the theft from the ACT Department, mentioned above, was the need for agencies to ensure that contracts with IT service providers contain appropriate clauses concerning privacy obligations. Given the prevalence of outsourcing of IT functions in particular, agencies need to accept that they cannot escape responsibility for privacy compliance just because the actual privacy breach was committed by a contractor. Some of the security principles contain an express reminder of this – requiring agencies to ensure that reasonable steps are taken to prevent security breaches by contractors.[42] Some of the laws place responsibility more generally on agencies for any actions of contractors[43], although under some laws contractors can also be investigated and held directly liable for breaches.[44]

The New Zealand Privacy Commissioner has found that a failure by a debt collection agency to ensure that sub-contracted process servers were aware of their privacy obligations led to an inappropriate disclosure, and that the failure constituted a breach of the NZ security principle IPP5.[45] In another case, however, the Commissioner rejected a complaint about the use of a courier for delivery, despite the fact that the documents had been lost, finding that the use of a recognised courier service was in fact a reasonable security precaution.[46]

Programming errors and multiple breaches

There have been several well publicised incidents of mass mail-out errors by Australian federal government agencies, some of which have led to major investigations by the Federal Privacy Commissioner. In his reports, the Commissioner found that the agencies had not taken adequate steps to prevent the sort of systems errors that led to the mismatching of personal details such that letters intended for one person were sent by mistake to another client.[47]

Although these instances of bulk/multiple breaches would be fertile ground for representative actions under the federal Privacy Act, no such actions have yet been brought.

Access control must be managed

It is clearly not sufficient to have security measures in place if they are not implemented. In L v Commonwealth Agency [2003] PrivComrA 10, the agency failed to ask for a password that had been issued to a client, and as a result disclosed personal information about him to his ex-wife. The Commissioner found the agency in breach of IPP4 and the agency agreed to update its computer system to prompt for passwords.

Another case handled by the federal Commissioner[48] raised the question of whether an Internet Service Provider (ISP) had taken reasonable steps to implement password security – the complainant alleged that his estranged wife had been able to access his Internet account after several attempts despite his having changed the password. Unfortunately the Commissioner declined to investigate on the grounds that the complainant had apparently not taken the matter up first with the ISP in question. This case could have thrown useful light on what standards an ISP will be required to meet in relation to controlling access to customers’ accounts.

Most systems administrators would be aware of the need for regular password changes, and for revocation or change to access privileges for staff who leave or have changed function, but audits commonly find that these disciplines are not enforced. Similar obligations apply to management of physical access – for example the need to supervise after hours access by contractors, and to change key pad combinations and retrieve keys from departing staff.[49]
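The housekeeping disciplines described above – revoking access when staff leave or change function, and enforcing regular password changes – amount to a periodic audit of user accounts. A minimal sketch of such an audit follows; the 90-day rotation period and the account fields are assumptions for the example, not a standard drawn from the sources cited.

```python
import datetime

# A sketch only: a periodic audit flagging accounts of departed staff
# (whose access should be revoked) and accounts whose passwords are
# overdue for rotation. The 90-day period and the account fields are
# assumptions for this example, not a legal or industry standard.
def accounts_needing_attention(accounts: list[dict],
                               today: datetime.date,
                               max_password_age_days: int = 90) -> list[str]:
    flagged = []
    for acct in accounts:
        if acct["departed"]:
            flagged.append(acct["user"])      # revoke access
        elif (today - acct["password_changed"]).days > max_password_age_days:
            flagged.append(acct["user"])      # password rotation overdue
    return flagged
```

Running such a check on a schedule, rather than relying on ad hoc memory, is precisely the kind of enforcement that audits commonly find missing.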

Loss of data

Even where there is no disclosure of personal information, practices which lead to unintended loss or destruction of such information can still breach a security principle. Individuals can suffer as much, if not more, damage from information they need no longer being available when it should be as they can through misuse or unauthorised release. A good example is provided by a NZ case in which a hospital erased a video tape which was the subject of a disputed access request then under investigation by the Privacy Commissioner – the Commissioner negotiated a $5000 payment in compensation.[50] Another simple example is the filing of facsimiles on thermal paper, which fades over time[51].

Special protection for sensitive information

Some privacy laws contain specific sensitive data principles which require additional measures to be taken in relation to certain types of information – typically health, criminal records, political views etc[52]. These principles generally deal with additional notification and consent requirements and are silent on security. But it remains implicit[53] in all the security principles that the sensitivity of the information is a factor to be taken into account in deciding on appropriate security.

The specific health privacy laws[54] which have been passed in some jurisdictions do not generally add any particular security obligations – the security principles in them simply restate the ‘reasonable steps’ requirement, leaving the standards to the judgement of the organisations holding health information. There do not appear to have been any cases involving these jurisdictions to date that add to our knowledge of the specific security measures that might be considered necessary when handling health information.

One particular type of sensitive information is ‘silent’ or unlisted telephone numbers, which are often obtained because of a particular risk to the subscriber concerned. Two NZ cases have reinforced the need for particular care in securing unlisted numbers against unauthorised disclosure.[55] Canadian cases have highlighted the need for special protection for the Social Insurance Number (SIN)[56] and similar cases can be expected in those Australian jurisdictions that require special protection for government identifiers.


Privacy case law in several jurisdictions is gradually throwing some light on what constitute the ‘reasonable security measures’ required by privacy laws. Research for this article has only looked at a selection of the case law available, and further guidance may be available from other cases. As the body of case law builds up and is summarised, organisations can expect to obtain a clearer view of their obligations, both generally and in a variety of specific circumstances.

[1] Privacy Act 1988 (Cth) – abbreviated as ‘PA’ herein

[2] Privacy and Personal Information Protection Act 1998 (NSW) – PPIPA herein

[3] Privacy Act 1993 (NZ) - NZ PA herein

[4] Information Privacy Act 2000 (Vic) – Vic IPA herein

[5] Information Act (NT) – NT IA herein

[6] For example see E v Financial Institution [2003] PrivComrA 3 (logging required).

[7] Office of the Federal Privacy Commissioner, Guidelines to the National Privacy Principles, September 2001, pp 44-46; Office of the Victorian Privacy Commissioner, Guidelines to the Information Privacy Principles Vol 2, pp 9-10.

[8] See

[9] Hong Kong Personal Data (Privacy) Ordinance 1995.

[10] The Federal Privacy Commissioner has an express audit function in relation to public sector agencies, credit providers and tax file number recipients, although the audit program has been cut back drastically in recent years due to resource constraints. The Victorian Privacy Commissioner also has an audit function which he has started to exercise in accordance with an Audit Manual published in 2004. All Privacy Commissioners are able to conduct special investigations and make special reports, although the parameters vary between jurisdictions.

[11] OFPC Guidelines to the National Privacy Principles, September 2001, Guidelines to NPP4.

[12] The vulnerabilities of, for example, Microsoft Windows are well documented, and security concerns have been one of the foundations of the open-source software movement.

[13] Most recently re-issued in 2003 – see

[14] See

[15] See [2003] PrivCmrCan PIPEDA Case Summary #146 – the Commissioner recommended the employer stop using the last four digits of employees’ Social Insurance Numbers (SINs) as the PIN for access to pay records – although surprisingly the security principle in PIPEDA was not cited.

[16] [2003] PrivCmrCan PIPEDA Case Summary #254

[17] Monitoring of employees’ communications (as opposed to their access to their employer’s data) does of course raise separate privacy issues. The appropriate limits of employee or workplace privacy are the subject of one of the main current privacy debates.

[18] See [2002] PrivCmrCan PIPEDA Case Summary #100 – a bank’s security was found to be adequate despite an unauthorised disclosure by an employee, in contravention of procedures and training.

[19] Such as the Crimes Act 1914 (Cth) Part VIIB

[20] B v Victorian Government organisation – [2003] VicCmr 2

[21] See M v Police Commissioner, [1999] NZ CRT 17/99, and Proceedings Commissioner v Police Commissioner [1999] NZ CRT 23/99

[22] The New Zealand Privacy Act has a two part test for an interference with privacy – there has to be not only a breach of a Principle but also significant detriment.

[23] See [2001] PrivCmrNZ Case Note 16005

[24] [2001] PrivComrNZ Case Note 21451

[25] [2002] PrivCmrCan PIPEDA Case Summary #65

[26] See - Data Protection Principle 4 requires ‘practicable steps’ to guard against the same risks as the similar principles in Australasian laws.

[27] A v Dept of Defence – Privacy Commissioner of Australia Complaint Determination No 1, 1993. See

[28] The agency concerned wished to make an ex gratia payment but considered its legislation did not allow this.

[29] See HKPC ar9798-10; ar0203-6, and ar0203-7. Also [2003] PrivCmrCan PIPEDA Case Summary #154, in which the Commissioner checked to ensure that envelopes containing sensitive personal information were sealed.

[30] See [2002] PrivCmrCan PIPEDA Case Summary 28 – the bank in question agreed to institute a ‘double verification’ process in its mailroom.

[31] See HKPC ar0102-5.

[32] See M v Cth Agency [2003] PrivComrA 1; [1999] PrivCmrNZ Case Note 13518, and [2003] PrivCmrCan PIPEDA Case Summary #226.

[33] See HKPC ar9798-17

[34] See [1998] PrivCmrNZ Case Note 2448 – the use of window envelopes eliminates the need to match contents to envelopes, reducing the type of risk highlighted in the Canadian case cited at footnote 26.

[35] See [2002] PrivCmrCan PIPEDA Case Summary #43

[36] [1998] PrivCmrNZ Case Note 14982

[37] Federal Privacy Commissioner Ninth Annual Report 1996-97 p.124

[38] This has been a common audit finding – see for example Federal Privacy Commissioner Ninth Annual Report 1996-97 p.95

[39] Under s.50 of the HK Personal Data (Privacy) Ordinance.

[40] HK PCO Newsletter August 2004 - Note that the Victorian Privacy Commissioner has a similar ‘compliance notice’ power (IPA s.44), which he has yet to exercise.

[41] See [1995] PrivCmrNZ Case Note 2594 – no breach in the particular case but the agency agreed to instal a private office.

[42] PA s.14, IPP 4(b); PPIPA s.12(d).

[43] PA s.8(1); IPA s.9(1)(j) and s.17 (an agency can expressly transfer the obligations by contract); PPIPA s.4(4)(b).

[44] Eg Privacy Act 1988.

[45] [1998] PrivCmrNZ Case Note 2663

[46] [1998] PrivCmrNZ Case Note 6983

[47] Errors of this nature were made by the Australian Taxation Office, the then Department of Social Security and the Department of Veterans’ Affairs in the mid 1990s, by the Department of Education and Training in 1995-96 (Privacy Commissioner Eighth Annual Report 1995-96 p.114) and by a mailing house acting on behalf of a credit union in September 1996 (Ninth AR p 100).

[48] N v Internet Service Provider [2004] PrivCmrA 10.

[49] Federal Privacy Commissioner Ninth Annual Report 1996-97 p.95 – common audit findings.

[50] [1995] PrivCmrNZ Case Note 3984

[51] Federal Privacy Commissioner Ninth Annual Report 1996-97 p.95 – common audit findings.

[52] See Privacy Act 1988 (Cth) NPP 10; Information Privacy Act (Vic) IPP 10; Privacy & Personal Information Protection Act 1998 (NSW) s.19(1).

[53] Explicit in the Hong Kong Ordinance – DPP 4(a).

[54] Health Records and Information Privacy Act 2002 (NSW); Health Records Act 2001 (Vic); Health Records (Privacy & Access) Act 1997 (ACT); Health Information Privacy Code 1994 (NZ).

[55] See [1997] PrivCmrNZ Case Note 10668 and [1994] PrivCmrNZ Case Note 0189

[56] [2002] PrivCmrCan PIPEDA Case Summary #69; [2003] PrivCmrCan PIPEDA Case Summary #146.
