Privacy Law and Policy Reporter
Lee Bygrave
This article analyses art 15 of the 1995 EC Directive on Data Protection (the Directive).[1] Article 15 grants persons a qualified right not to be subject to certain forms of fully automated decision-making. It is one of the most intriguing and innovative provisions of the Directive, yet also one of the most difficult to construe properly. The central issue taken up here is the extent to which art 15 may have a meaningful impact on automated profiling practices.
Article 15(1) reads as follows:
Member States shall grant the right to every person not to be subject to a decision which produces legal effects concerning him or significantly affects him and which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct, etc.
Up until very recently, provisions along the lines of art 15(1) have been rare among data protection instruments at both national and international levels. While their roots in data protection law go back to the late 1970s — more specifically to ss 2-3 of the French data protection legislation enacted in 1978[2] — less than a handful of countries incorporated such provisions in their data protection laws prior to adoption of the Directive.[3] The inclusion in the Directive of the right provided by art 15(1) partly reflects a desire, as expressed in the Directive’s preamble, to bring about a ‘high’ level of data protection across the EU.[4]
Soon a relatively large number of countries will have enacted provisions along the lines of art 15(1), mainly, though not exclusively,[5] as a result of the Directive. The overwhelming majority of these countries, however, will be European, at least for the near future. The extent to which non-European countries will enact provisions similar to art 15(1) remains unclear — an issue that is taken up in the concluding section of this article.
As a data protection provision, art 15(1) is rather special in that, unlike the bulk of other rules in data protection instruments, its primary formal focus is on a type of decision as opposed to data processing. As such, art 15(1) is akin to traditional administrative law rules on government decision-making. This characteristic, though, does not have large practical significance given that decisions inevitably involve the processing of data. Moreover, the impact of art 15(1) is likely to be considerably greater on the decision-making processes of the private sector than on the equivalent processes of the public sector, at least in jurisdictions with administrative law regimes that already provide broad rights of appeal against the decisions of government bodies (though not against private sector organisations).[6]
Article 15(1) is also rather special in that it is the only provision of the Directive to grapple directly with particular aspects of automated profiling.[7] Generally speaking, profiling is the process of inferring a set of characteristics (typically behavioural) about an individual person or collective entity and then treating that person or entity (or other persons or entities) in the light of these characteristics. As such, the profiling process has two main components: (1) profile generation — the process of inferring a profile; and (2) profile application — the process of treating persons or entities in light of this profile. The first component typically consists of analysing personal data in search of patterns, sequences and relationships in order to arrive at a set of assumptions (the profile) based on probabilistic reasoning. The second component involves using the generated profile to help make a search for, and/or decision about, a person or entity. The line between the two components can blur in practice, and regulation of the one component can affect the other component.[8]
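The two components described above can be illustrated with a minimal sketch. All data fields, thresholds and rules here are hypothetical, chosen only to show the distinction between inferring a profile and acting on it; no real profiling system is this simple.

```python
# Sketch of the two-stage profiling process (hypothetical rules and data).

def generate_profile(transactions):
    """Profile generation: infer characteristics from raw personal data."""
    total = sum(t["amount"] for t in transactions)
    late = sum(1 for t in transactions if t["days_late"] > 0)
    return {
        "average_spend": total / len(transactions),
        # A probabilistic assumption about the person, not a certainty.
        "late_payment_rate": late / len(transactions),
    }

def apply_profile(profile):
    """Profile application: treat the person in light of the inferred profile."""
    if profile["late_payment_rate"] > 0.5:  # hypothetical threshold
        return "refer for manual review"
    return "approve automatically"

transactions = [
    {"amount": 120.0, "days_late": 0},
    {"amount": 80.0, "days_late": 3},
    {"amount": 200.0, "days_late": 0},
]
profile = generate_profile(transactions)
decision = apply_profile(profile)
```

The sketch also shows why regulating one component affects the other: if applying a `late_payment_rate` threshold is restricted, there is less point in generating that figure at all.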
On its face, art 15(1) only lays restrictions on the process of profile application. The same applies with earlier versions of the provision as contained in the first and amended proposals for the Directive.[9] This is in contrast to the original proposal for the Directive on Telecommunications Privacy which specifically restricted the creation of electronic subscriber profiles.[10] Nevertheless, the controls set by art 15(1) on profile application are likely to have an indirect effect on the process of profile creation. At the same time, there are no other provisions in the Directive specifically addressing the creation of profiles. This sort of situation is far from unique; the vast majority of data protection laws lack such provisions.
Article 15(1) does not take the form of a direct prohibition on a particular type of decision-making (profile application). Rather, it directs each EU Member State to confer on persons a right to prevent them being subjected to such decision-making. Hence, a legally adequate implementation of art 15(1) may occur when national legislators simply provide persons with the opportunity to exercise such a right. This would leave the actual exercise of the right to the discretion of each person and allow, in effect, the targeted decision-making to occur in the absence of the right being exercised (provided, of course, that the data processing operation involved in the decision-making meets the other requirements of the Directive and of national laws implementing the Directive).[11] This notwithstanding, national legislators are not prevented from implementing art 15(1) in terms of a prohibition on targeted decision-making.[12] However, such a prohibition cannot be absolute given that certain exceptions to the right in art 15(1) are mandated pursuant to art 15(2) and art 9. The scope and impact of these exceptions are dealt with later in this article.
Article 15 derives from several concerns. The central concern is rooted in the perceived growth of automatisation of organisational decisions about individual persons. The drafters of the Directive appear to have viewed as particularly problematic the potential for such automatisation to diminish the role played by persons in shaping important decision-making processes that affect them. In relation to the forerunner to art 15(1) as contained in the 1990 Directive proposal, the EC Commission stated:
This provision is designed to protect the interest of the data subject in participating in the making of decisions which are of importance to him. The use of extensive data profiles of individuals by powerful public and private institutions deprives the individual of the capacity to influence decision-making processes within those institutions, should decisions be taken on the sole basis of his ‘data shadow’.[13]
A second expressed fear is that the increasing automatisation of decision-making processes engenders automatic acceptance of the validity of the decisions reached and a concomitant reduction in the investigatory and decisional responsibilities of humans. In the words of the Commission:
the result produced by the machine, using more and more sophisticated software, and even expert systems, has an apparently objective and incontrovertible character to which a human decision-maker may attach too much weight, thus abdicating his own responsibilities.[14]
One can also read into these comments a concern that, in the context of organisational decision-making, the registered data images of persons (their ‘data shadows’) threaten to usurp the constitutive authority of the physical self despite their relatively attenuated and often misleading nature. A further concern is that this threat brings with it the threat of alienation and a threat to human dignity.[15]
There can be little doubt that the concerns outlined above should be taken seriously. While fully automated decision-making in the manner described by art 15(1) still seems far from widespread, computers are frequently involved in executing assessments that have previously been the preserve of human discretion — for example, in the context of determining persons’ credit ratings, insurance premiums or social welfare entitlements. Up until recently, such assessments have tended to be based primarily on data collected directly from the data subjects in connection with the assessment at hand. It is likely, though, that these assessments will increasingly be based on pre-collected data found in the databases of third parties. Indeed, with effective communication links between the databases of large numbers of organisations, sophisticated software to trawl these databases, and appropriate adaptation of the relevant legal rules (that is, an ‘automationsgerechte Rechtsetzung’, or law-making adapted to automation),[16] it is easy to envisage computerised decision-making processes that operate independently of any specific input from the affected data subjects.[17]
Additionally, there is ongoing growth in the frequency, intensity and ambit of organisational profiling practices. Not only is profiling an emergent industry in its own right,[18] but the techniques upon which it builds (such as data warehousing and data mining)[19] are ever more sophisticated. Further, the efficacy of such techniques is now being enhanced through the use of artificial neural networks[20] and intelligent agents.[21] Part and parcel of the increasing sophistication of profiling techniques is their increasing automatisation as evidenced, for example, in cybermarketing practices.[22]
The right contained in art 15(1) is closely related to several other provisions in the Directive. To begin with, the right extends — and, to some extent, compensates for — the more general albeit relatively weak right in art 14(a) of data subjects to object to the processing of data relating to them when there are ‘compelling legitimate grounds’.[23] Secondly, art 15(1) helps to strengthen the right in art 14(b) of data subjects to object to data on them being processed for the purposes of direct marketing. Thirdly, art 15(1) contributes to reinforcing the rule in art 6(1)(a) that personal data be processed ‘fairly’. Finally, note should be taken of art 12(a), which provides data subjects with, inter alia, a right to ‘knowledge of the logic involved in any automated processing of data concerning him at least in the case of the automated decisions referred to in Article 15(1)’. This last right is an important complement to the provisions of art 15 and helps to flesh out some of their requirements.[24]
For the right contained in art 15(1) to apply, four cumulative conditions must be satisfied: (1) a decision must be made; (2) the decision must produce legal effects concerning, or otherwise significantly affect, the person concerned; (3) the decision must be based solely on automated data processing; and (4) the data processed must be intended to evaluate certain personal aspects of that person.
A considerable amount of ambiguity inheres in these conditions. This ambiguity is scarcely mitigated by the Directive’s recitals or travaux préparatoires.
Neither the Directive nor its travaux préparatoires specifically address what is required for a decision to be made. Nevertheless, it is fairly obvious that making a decision about an individual person ordinarily involves the adoption of a particular attitude, opinion or stance towards that person. Such an attitude or stance can be of numerous kinds. For example, it can require the person to act or refrain from acting in a certain way, or it can involve acceding to or denying a particular request from the person. Alternatively, it can result in action being taken to influence the person with or without his or her knowledge.
Difficulties may sometimes arise in distinguishing decisions from other processes (for example, plans, suggestions, advice or mapping of options) that can prepare the way for, or head off, formal decision-making.[25] At the same time, it is important to note that art 15(1) does not operate with any prima facie requirement that a decision be of a certain form. Further, the notion of decision in art 15(1) is undoubtedly to be construed broadly and somewhat loosely in light of the provision’s rationale and its otherwise detailed qualification of the type of decision it embraces. Thus, the mere fact that a process is formally labelled or perceived as a plan or as advice would not be sufficient in itself to bring the process outside the ambit of art 15(1). Nevertheless, if a decision is to be caught by art 15(1), it must have some degree of binding effect on its maker (such that the latter is likely to act upon it). This follows partly from the very concept of a decision and partly from the requirement that the decision must have legal or otherwise significant effects on the person whom the decision targets (see ‘Condition 2’ below).
Some uncertainty as to whether a decision has been made could pertain to situations in which a human decision-maker is apparently absent; that is, when the process at hand consists of a response on the part of computer software (such as an intelligent agent) to particular constellations of data and data input. This issue is actualised by certain profiling practices in the context of cybermarketing. For instance, the advertising banners on internet websites are frequently programmed to automatically adjust their content and/or format according to the net browsing data about the site visitor which are stored as ‘cookies’ on the visitor’s computer.[26] Does such adjustment involve a ‘decision’ being made?
In support of a negative answer, it could be argued that the term ‘decision’ ordinarily connotes a mental action (the adoption of a particular opinion or belief). An affirmative answer, though, has stronger foundations. First, it can be plausibly argued that the term ‘decision’ should be construed broadly for the reasons set out above. In light of this, the logical processes of computer software would seem to parallel sufficiently the processes of the human mind to justify treating the former as analogous to the latter for the purposes of art 15(1). Secondly, it can be plausibly argued that a human decision-maker will still exist even if he or she is not directly involved in the process concerned. That decision-maker will be the person who is responsible for programming the software.[27]
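The kind of automated response at issue can be sketched as follows. The cookie fields and selection rules are hypothetical; real ad-serving systems are far more elaborate, but the essential point — that software selects the banner with no human in the loop — is the same.

```python
# Sketch of fully automated banner selection from stored browsing data.
# Cookie fields and selection rules are hypothetical.

def select_banner(cookie):
    """Adjust advertising content automatically, with no human in the loop."""
    interests = cookie.get("visited_categories", [])
    if "travel" in interests:
        return {"content": "discount flights", "format": "animated"}
    if "finance" in interests:
        return {"content": "personal loans", "format": "static"}
    return {"content": "generic brand ad", "format": "static"}

# The 'decision' occurs entirely within software responding to stored data.
visitor_cookie = {"visited_categories": ["travel", "news"]}
banner = select_banner(visitor_cookie)
```

On the affirmative reading discussed above, the ‘decision-maker’ here is the person who wrote and deployed `select_banner`, even though he or she plays no part in any individual selection.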
Regarding condition 2, it is relatively clear what ‘legal effects’ involve. These are effects that are able to alter or determine (in part or in full) a person’s legal rights or duties.
Ambiguity with respect to condition 2 inheres mainly in the notion of ‘significantly’. Does the notion refer only to effects that are significant for the data subject in an objective sense (that is, is it relatively independent of the data subject’s own perceptions)? Does it refer only to effects of a material (economic) nature? Does it require the decision concerned to be adverse to the interests of the data subject?
Given the thrust of recitals 9 and 10,[28] together with the likelihood that the Directive requires recovery for both material and immaterial damage pursuant to art 23,[29] it is doubtful that ‘significantly’ refers exclusively to material effects. Arguably, therefore, a significant effect might lie merely in the insult to a data subject’s integrity and dignity which is occasioned by the simple fact of being judged by a machine, at least in certain circumstances (such as when there is no reasonable expectation of, or reasonable justification for, the sort of decision-making described in art 15(1)). Moreover, if we accept that an important part of the rationale for the right in art 15(1) is the protection of human integrity and dignity in the face of an increasingly automated and inhuman(e) world, some consideration must be given to how the data subject perceives the effect of the decision concerned. Nevertheless, the criterion ‘significantly’ also has objective (inter-subjective) connotations. Thus, a data subject’s perception of what constitutes a significant effect on him or her is very unlikely to be wholly determinative of the issue; the legal weight of the perception will depend on the extent to which it is regarded by a considerable number of other persons as having a reasonable basis.
Safeguarding the interests of the data subject requires that assessment of what is a significant effect is not based solely on the data subject’s own reactions. Consider, for example, a situation in which a person who is considering whether to apply for a bank loan interacts with a fully automated loans assessment service offered by a bank. As a result of this interaction, the person is informed that he or she qualifies for a loan of a certain sum under certain conditions. The terms of this assessment could be viewed by the person as favourable yet fail to give an objectively accurate depiction of how much, and under what conditions, the person would be able to borrow because, for instance, the program steering the assessment does not take into account certain details about the person’s life situation. Indeed, were the latter details taken into account, the person would qualify for a higher loan with more favourable repayment conditions. In such a situation, the data subject might well experience the assessment decision as unproblematic despite its objective faults. Paradoxically, however, this sort of situation could fall outside the scope of art 15(1) on account of the provisions in art 15(2), which are described below.
As for the issue of whether the criterion of ‘significant(ly)’ requires the decision concerned to be adverse to the interests of the data subject, an earlier draft of the Directive expressly limited the right in art 15(1) to such decisions.[30] However, this fact alone does not mean we should read the same limitation into the final version of art 15(1). Indeed, the very fact that the term ‘adversely’ has been dropped from art 15(1) might suggest an intention not to limit the scope of the right in such a way. Still, it is extremely doubtful that art 15(1) will apply when a decision has purely beneficial effects for the data subject. This follows partly from art 15(2), described below. At the same time, there exists a large amount of conceptual (and practical) overlap between the notions of ‘significantly’ and ‘adversely’. This overlap notwithstanding, the criteria cannot be read as fully commensurate with each other. Some adverse effects can be too trivial to be significant. In other words, the fact that a decision has adverse effects is merely a necessary but not sufficient condition for finding that the decision has significant effects. Thus, what is required is a decision that is significantly adverse in its consequences.
On the latter point, the EC Commission seems to have been of the opinion that simply sending a commercial brochure to a list of persons selected by computer does not significantly affect the persons for the purposes of art 15(1).[31] Other commentators view advertising (or at least certain forms of advertising) as too trivial to be significant.[32] Nevertheless, some forms of advertising have at least a potential to significantly affect their targets. For instance, the cybermarketing process outlined above could plausibly be said to have a significant (significantly adverse) effect on the persons concerned if it involves unfair discrimination in one or other form of ‘weblining’ (for example, if a person visiting a website is offered products or services at a higher price than others, or a person is denied an opportunity of purchasing products or services that are made available to others).[33]
Finally, there can be little doubt that a decision may have a significant effect on the data subject even if it does not result in a manifest or positive alteration of his or her situation vis-à-vis other persons. In other words, art 15(1) may apply even if the decision concerned is used to refrain from changing the status quo (for example, psychometric testing of job applicants which results in none of them being offered jobs).
Moving to condition 3 (the decision is based solely on automated data processing), the main problem here is to determine the proper meaning of the criterion ‘solely’. If the criterion is read very strictly, one could argue that few, if any, decisions are or can be wholly the result of automated processes because the programs steering these processes are initially created by human beings.[34] But such an argument deprives art 15(1) of any practical effect. Thus, it is necessary to operate with a relative notion of ‘solely’. What the notion seems intended to denote is a situation in which a person fails to actively exercise any real influence on the outcome of a particular decision-making process. Such a situation would exist if a decision, though formally ascribed to a person, originates from an automated data processing operation, the result of which is not actively assessed by either that person or other persons before being formalised as a decision.[35]
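The distinction drawn here — between a decision that is merely formalised by a person and one that a person actively assesses — might be sketched as follows. The workflow, scoring criteria and function names are hypothetical, intended only to show where the ‘solely’ criterion bites.

```python
# Hypothetical workflow distinguishing 'solely' automated decisions
# from decisions subject to active human assessment.

def automated_assessment(applicant):
    """Fully automated scoring step (hypothetical criteria)."""
    score = applicant["income"] / max(applicant["debts"], 1)
    return "approve" if score > 2.0 else "reject"

def review_by_officer(applicant, proposed):
    """Placeholder for genuine human judgment over the automated result."""
    return proposed

def finalise(applicant, human_review=False):
    outcome = automated_assessment(applicant)
    if human_review:
        # A person actively re-assesses the result before it is formalised,
        # so the decision is no longer 'based solely' on automated processing.
        outcome = review_by_officer(applicant, outcome)
    # Without such review, the decision remains solely automated,
    # however it is formally ascribed.
    return outcome

decision = finalise({"income": 50000, "debts": 30000})  # no human review
```

Note that a `review_by_officer` step which merely rubber-stamps the automated result, as the placeholder above does, would arguably still leave the decision ‘solely’ automated in the relative sense described here, since no real influence is exercised on the outcome.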
At the same time, it is important to note that if a data subject successfully exercises his or her right to object pursuant to art 15(1), the data controller is simply required to review critically the criteria or factors forming the basis for the fully automated decision. The controller is not required to change these criteria or factors, nor to supplement them with other criteria or factors. Nevertheless, the review process might well involve these sorts of amendments being made.
Such a review process will be partly facilitated by the data subject’s right under art 12(a) to knowledge of the logic behind decisions of the kind embraced by art 15(1). The existence of this right means, in effect, that decision-makers themselves must be able to comprehend the logic of the automated steps involved. This further means, in effect, that the logic must be documented and that the documentation be kept readily available for consultation and communication (both inside and outside the decision-maker’s organisation).[36] The documentation must set out, at the very least, the data categories which are applied, together with information about the role these categories play in the decisions concerned.
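One way a controller might keep such documentation readily available is to hold the decision logic in a structured, self-describing form. The following is a minimal sketch under hypothetical data categories, weights and role descriptions; nothing in the Directive prescribes any particular format.

```python
# Hypothetical machine-readable documentation of decision logic,
# recording each data category and the role it plays (cf art 12(a)).

DECISION_LOGIC = {
    "payment_history": {
        "role": "primary indicator of creditworthiness",
        "weight": 0.6,
    },
    "current_income": {
        "role": "capacity to service new credit",
        "weight": 0.3,
    },
    "length_of_customer_relationship": {
        "role": "secondary reliability indicator",
        "weight": 0.1,
    },
}

def describe_logic(logic):
    """Render the documentation for communication to a data subject."""
    return [
        f"{category}: {info['role']} (weight {info['weight']})"
        for category, info in logic.items()
    ]

summary = describe_logic(DECISION_LOGIC)
```

Holding the logic in one structure that both drives the decisions and generates the account given to data subjects helps ensure the documentation cannot silently drift out of step with the system it describes.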
As for condition 4 (the data processed are intended to evaluate ‘certain personal aspects’ of the data subject), this does not necessitate, on its face, the construction of a formalised profile of the data subject.[37] In practice, however, the use of profiling techniques and the creation of some sort of personality profile will be required, though the profile need not be formalised as such.
It would seem that art 15(1) indirectly covers some use of abstract profiles,[38] as the term ‘data’ is not directly qualified by the adjective ‘personal’. The latter fact means that the profile could be derived from ‘clickstream’ data (such as domain names, websites visited or keywords used in search programs) that are somewhat difficult to classify as ‘personal’ data pursuant to data protection legislation because they are linked primarily to an internet protocol address of a computer, not an individual person. Ultimately, though, the decision to which a person may object must be based on a profile of that person. At the same time, there is no requirement that the profile casts the person in a particular (positive or negative) light.
The chief point of uncertainty with condition 4 is the scope of the phrase ‘certain personal aspects’. There is no doubt that the phrase ‘personal aspects’ refers to aspects of the data subject’s person or personality.[39] Moreover, there is little doubt that inclusion of the word ‘certain’ means that not all ‘personal aspects’ are legally relevant for the application of art 15(1). The question arises as to where and how the line is to be drawn between legally relevant ‘personal aspects’ and those aspects that are not legally relevant. Some aid is provided by the non-exhaustive exemplification in art 15(1) itself (‘work performance’, ‘creditworthiness’, ‘reliability’ and ‘conduct’). This exemplification indicates that legally relevant ‘personal aspects’ must concern a person’s abilities, behaviour, preferences or needs; that is, they must concern a person’s character. They must concomitantly have a degree of complexity.[40] Thus, quantitative data on purely physiological traits (such as a person’s physical speed of reaction or blood type) are unlikely in themselves to constitute ‘personal aspects’ unless they are combined with other data that connect them more directly to a person’s character (such as the data being applied to evaluate a person’s degree of diligence or negligence in a particular context).[41]
The exemplification in art 15(1) further indicates that ‘personal aspects’ need not relate primarily to the private (non-public) or domestic (non-professional) sides of a person’s character. There would also appear to be no necessity that these aspects are unique to the person. It is otherwise difficult at this stage to make reasonably determinative statements about the reach of the phrase ‘certain personal aspects’.
Nevertheless, there can be little doubt that art 15(1) will not apply to a fully automated decision by a bank to refuse a person cash simply because the person lacks the necessary credit in his/her bank account.[42] A different result would arise, however, if the decision concerned were grounded on a fully automated analysis of the person’s payment history; this would involve an evaluation of creditworthiness in the more personal sense envisaged by art 15(1). Less certain is whether art 15(1) would apply to a fully automated decision about a person’s eligibility for a retirement pension, when the decision is grounded simply on the level of the person’s income and financial assets. There is no obvious answer to the question.[43] At first glance, these data types appear relatively neutral in terms of what they indicate about a person’s character. Yet they are sufficient to constitute a rudimentary personality profile when linked together and it might well be possible to derive latent aspects of a person’s character from this linkage. Moreover, they are sufficient to give a reasonable indication of a person’s creditworthiness. Thus, solid grounds exist for arguing that art 15(1) embraces the above type of decision on pension eligibility.
The right in art 15(1) may be derogated from pursuant to art 15(2) and art 9.[44]
Article 15(2) allows a person to be subjected to a decision referred to in art 15(1) in the following situations:
(1) where the decision is taken in the course of entering into or executing a contract, and either the data subject’s request for the entering into or execution of the contract has been fulfilled or provision is made for ‘suitable measures’ to safeguard the person’s ‘legitimate interests’ (art 15(2)(a)); or
(2) where the decision ‘is authorised by a law which also lays down measures to safeguard the data subject’s legitimate interests’ (art 15(2)(b)).
Article 15(2) stipulates that both its sets of derogations must be incorporated into the legal regimes of EU member states, though ‘subject to the other Articles of this Directive’.
How problematic these derogations are from a data protection perspective will depend partly on the nature of the ‘suitable measures’ for safeguarding the interests of data subjects.
An example of a ‘suitable measure’ in the first situation delineated by art 15(2) is described as ‘arrangements allowing [the data subject] ... to put his point of view’ (art 15(2)(a)). Given the rationale for art 15, it is to be presumed that these arrangements must not only allow for the data subject to put his or her point of view but also ensure that this point of view is received and taken into account by those who are formally responsible for the decision concerned.[45] It is further to be presumed that the arrangements must allow for the data subject’s viewpoint to be expressed before any final decision is made.[46]
This example of a ‘suitable measure’ is undoubtedly pertinent for art 15(2)(b) as well. At the same time, the example is not intended to delineate the entire range of ‘suitable measures’ in both situations.
Independent of the issue of ‘suitable measures’, there is a significant problem from a data protection perspective in an assumption that apparently underlies art 15(2)(a): namely, that fulfilment of a person’s request to enter into or execute a contract will always be unproblematic for that person. Such an assumption, however, is fallacious — as indicated by the bank loan example set out above. To take another example, art 15(2)(a) would seem to allow a person’s application for employment to be decided solely on the basis of psychometric testing if he or she is given the job (that is, his or her request to enter into a contract is met). Yet such testing can have detrimental consequences for the person concerned (and for the quality of employment application processes generally). For instance, the person could well regard such testing as demeaning to his or her dignity, or the testing could fail to reveal that the person is qualified for another, more favourable position.
Turning to art 9, this requires certain derogations from art 15(1) — and from other provisions in Chs III, IV and VI of the Directive — in the interests of freedom of expression. More specifically, art 9 requires derogation from these provisions insofar as the processing of personal data ‘is carried out solely for journalistic purposes or the purpose of artistic or literary expression’ and the derogation is ‘necessary to reconcile the right to privacy with the rules governing freedom of expression’.
It is difficult to see how art 9 can have any extensive practical relevance for the sort of decision-making dealt with by art 15(1). One can envisage, though, the emergence of a kind of automated journalism that bases its portrayals of the character of particular persons exclusively on the automated searching and combination of data from, say, various internet sources. This sort of activity might have a chance of falling within the scope of art 15(1), though its ability to meet the decisional criteria of the provision is uncertain.
At first glance, art 15 shows much promise in terms of providing a counterweight to fully automated profiling practices. On closer analysis, however, we find that this promise is tarnished by the complexity and numerous ambiguities in the way the provisions of art 15 are formulated. These problems are exacerbated by a paucity of authoritative guidance on the provision’s scope and application. The efficacy of art 15 as a regulatory tool is further reduced by the fact that its application is contingent upon a large number of conditions being satisfied; if one of these conditions is not met, the right in art 15(1) does not apply. As such, the right in art 15(1) resembles a house of cards. In the context of currently common data processing practices, this house of cards is quite easy to topple. Nevertheless, this situation might well change in the future if, as is likely, automated profiling becomes more extensive.
Even now, art 15 is normatively important in terms of the principle it establishes and embodies. This principle is that fully automated assessments of a person’s character should not form the sole basis of decisions that significantly impinge upon the person’s interests. The principle provides a signal to profilers about where the limits of automated profiling should roughly be drawn. We see also that this principle is beginning to be elaborated upon in concrete contexts, such as the assessment of worker conduct.[47]
At the same time, though, the ‘safe harbor’ agreement which has been concluded recently between the US and EU[48] and which stipulates conditions for permitting the flow of personal data from the EU to the US, puts a question mark over the status of the art 15 principle for non-European jurisdictions. The principle is nowhere to be found in the terms of the agreement.[49] Hence, other non-European countries will probably not be required by the EU to implement the principle either. So far, legislators in these countries have shown little willingness to implement the principle of their own accord.[50]
This situation is unfortunate. Despite its special character relative to the bulk of other data protection rules, the principle laid down by art 15 should be regarded as a core data protection principle: that is, a principle that is indispensable for defining the future agenda of data protection law and policy, and one that should therefore be embodied in most, if not all, future data protection instruments around the globe.[51] Otherwise, data protection instruments risk being deprived of a significant (albeit imperfect) counterweight to the ongoing expansion, intensification and refinement of automated profiling practices.
Lee Bygrave, Research Fellow, Norwegian Research Centre for Computers and Law.
URL: http://www.austlii.edu.au/au/journals/PrivLawPRpr/2000/40.html