Journal of Law, Information and Science

Hynes, Paul F --- "Doctors, Devices and Defects: Product Liability for Defective Medical Expert Systems in Australia" [2004] JlLawInfoSci 2; (2004) 15 Journal of Law, Information and Science 7

Doctors, Devices and Defects: Product Liability for Defective Medical Expert Systems in Australia

PAUL F HYNES*

Abstract

Australia is committed to the adoption of electronic decision support (Medical Expert Systems or MES) as a vehicle for the reduction of medical error. The take-up of these technologies, however, will depend in part on the balance the law achieves between the appropriate regulation of such technologies and the extent to which manufacturers and developers may face direct liability for injuries associated with their use. Presently, manufacturers of such systems may face strict liability under Australia's product liability laws for injury caused by ‘defective goods’. As ‘medical devices’, such systems are also regulated under the Australian therapeutic goods regime.

For a number of historical and interpretational reasons, product liability provisions are unlikely to provide an effective cause of action in relation to MES. The difficulties include the following: computer systems, and the information contained within them, are not easy to characterise as ‘goods’; it is unclear whether software defects are manufacturing defects or design defects; it is hard to determine whether a defect arose before or after supply; and it is doubtful whether reliance on information (mediated or otherwise) can itself be said to be a proximate cause of injury. This paper takes the view that were product liability laws to apply, they would be unlikely to impose strict liability on the manufacturers of MES for design defects.

A regulatory balance may, however, be achieved through the legislative regime regulating the use of therapeutic goods. This regime is not affected in its operation by the interpretational difficulties referred to above in connection with product liability, and allows for the assessment of devices on the basis of the purpose for which, and the conditions under which, they were intended to operate.

1. Introduction and survey of the landscape

Australians watching CNBC coverage of the war in Iraq, a year or so ago, would have seen advertisements showing a filing clerk, moving at lightning speed inside a vast warehouse to retrieve a dusty file for a doctor in a white lab coat. The advertisement was for General Electric and their electronic patient record management systems. While not aimed specifically at an Australian market, the advertisement was a harbinger of Australia’s commitment to these technologies.

In its November 2002 report (‘the Taskforce Report’), ‘Electronic Decision Support for Australia’s Health Sector’,[1] the National Electronic Decision Support Taskforce noted that ‘Australian Health Ministers are committed to improving the delivery of health services …through the use of information and communication technologies’,[2] and that electronic decision support was a ‘key mechanism to enable health professionals to provide high quality health services to consumers’.[3] The aim of the report was to develop a governance framework and develop recommendations ‘to advance industry delivery of evidence-based decision-support systems’.[4] Such computerised systems are also known as Medical Expert Systems, or ‘MES’.

The use of such computerised systems overseas, particularly in the USA, is recognised as having enormous potential to improve healthcare outcomes and reduce iatrogenic injury, generally through the standardisation of treatment based on evidence-based medicine and specifically in reducing the incidence of drug-related prescribing error.[5]

There are many factors, however, which might constrain the adoption of this technology.[6] Practitioners may resist such an initiative on the basis that it would interfere in the patient-practitioner relationship, that their professional judgment would be open to examination if they chose to give different treatment options or diagnoses, and that cost management rules within the system may not be transparent.[7]

In Australia, the Taskforce Report only made passing reference to legal liability implications (stating that the legal position was untested in Australia, and that the legal questions arising from a failure to adopt such technology ‘cannot be answered with certainty at this time’[8]). Nevertheless, it is clear that one of the legal issues of critical importance to the likely development and take-up of the new diagnostic technologies[9] is the imposition of liability for patient injury, on those who develop or use such systems.

The aim of this paper is to examine the consistency of two pieces of Australian legislation with the Taskforce's stated goal of advancing industry delivery of MES. Specifically, this paper examines the potential liability of those who manufacture such systems under the product liability provisions in Part VA of the Trade Practices Act 1974 (Cth) (‘Part VA’). The paper also examines an alternative regulatory approach involving the direct imposition of sanctions under the Therapeutic Goods Act 1989 (Cth) (‘TGA’).

The focus of this paper is not the myriad of liabilities which may arise between patient and doctor through the doctor's use of MES, or the specific contractual remedies between a doctor and the producers of MES, although such liabilities and remedies are relevant to the legislative and policy environment being discussed.

This Part of the paper will seek to outline the broader legal context for this enquiry as it appears in the literature. Part 2 of the paper will describe MES, examine their structure, workings and application and set out the different circumstances in which failure may occur.

Part 3 of the paper will briefly describe Australian product liability rules under Part VA and apply them to the classes of potential defect identified. Part 4 of the paper will briefly look at the operation of the TGA, and examine the extent to which the policy imperatives set out in the Taskforce Report are supported by the likely application of Australian law both under the TPA and TGA.

The conclusion of the paper will be stated in Part 5. This paper will take the position that in Australia, Part VA is unlikely to have any clear operation in relation to potential injuries resulting from the use of MES and that were it to operate, liability for defects would likely be assessed on a negligence basis rather than a strict liability basis. This paper also concludes, however, that the TGA has the potential to deter defectively designed medical devices through the direct imposition of sanctions rather than providing for individual recovery.

Where injuries arise through the use of expert systems, a practitioner, or the institution in which the practitioner operates,[10] may be found liable, in the context of a malpractice suit, on the basis of the practitioner’s or institution’s choice of, use of, or reliance on such systems.[11] The developers of the system might in turn be liable to the practitioner or institution through contract or warranty mechanisms,[12] or might be found to be directly liable to an injured patient on the basis of product liability.[13] As noted above, the principal focus of this paper will be the potential under Part VA for manufacturers of MES to be subject to product liability claims directly by an injured party. The importance of product liability stems from its capacity to bypass issues of privity (together with other contractual limitations) and impose strict liability.

The application of product liability provisions to MES gives rise to the threshold question of whether an injury can be said to be caused by the system itself, in the sense that the injuries in question arise not from a direct physical exposure to the system, but through reliance by the injured party, whether or not professionally mediated, on the information produced by or contained within the system.

As the concern is generally not with paper cuts or electrical shocks, the injury is usually one arising from diagnostic error made in reliance on the system. The question here is traditionally whether the information or advice is a ‘product’ governed by product liability rules. While the resolution of this question can depend on the extent to which the information exchange is mediated by a professional, the decided cases tend to use the device of characterising the information as being other than a product.[14]

In situations where there has been no obvious mediation, the discussion usually takes place in the context of publisher or author liability for defective information contained in books (potentially also cybermedicine websites). Such discussions recite US cases in which plaintiffs have been injured as a result of eating inedible fungi,[15] diving onto rocks[16] or self-administering enemas in reliance on guidebooks or medical texts.[17] In these cases, the information was not held to be a product.[18]

In the context of mediated information flow, the cases seem to go both ways: the dental equivalent of a CPG was held to be part of a service,[19] while aeronautical charts which failed to show features of the landscape into which planes subsequently crashed were held to be products, by analogy with aircraft navigation systems.[20]

In theory, the extent to which a publisher of material will be liable is generally seen to reduce where there is competent mediation. In the US context, discussion in relation to the effect of a ‘learned intermediary’ takes place predominantly in cases involving prescription drugs and the ‘failure to warn’,[21] where the physical damage is done by the product supplied, even though a proximate cause was reliance on the defective information accompanying the product.

Three further interrelated layers of discussion have a bearing on the application of product liability to MES. The first of these layers relates to the appropriate product liability tests in dealing with design defects, as opposed to manufacturing defects. As expert systems involve software, an assessment must be made of whether a defect in the software is a product defect to which liability attaches, and of the level of that liability. A second layer relates to the characterisation of ‘defect’. MES are constructed on rules developed from evidence-based medicine (EBM) and clinical practice guidelines (CPGs). The characterisation of a ‘defect’ in relation to such rules, particularly where the rules incorporate limitations relating to cost-effectiveness in a seamless fashion, may be hard to determine.[22] A final layer in the discussion relates to the impact of free speech doctrines, such as US First Amendment jurisprudence.

2. Medical Expert Systems

2.1 The systems

The Taskforce defined electronic decision support as ‘access to knowledge stored electronically to aid patients, carers and service providers in making decisions on health care’.[23] A clinical decision support system may be defined as a computer program designed to help health professionals make clinical decisions.[24]

The focus of this paper is on medical expert systems which provide patient-specific diagnostic support. Nevertheless, it must be noted that there are many applications of software which may be included in decision support, ranging from general information management tools to specific patient-related diagnostic tools.[25]

Healthcare information systems (which may deal with a range of hospital administration matters) and information retrieval systems are tools which may provide information needed by the practitioner but generally do not apply that information to a specific decision.[26] Other applications may be designed to focus the attention or remind the practitioner – such as laboratory systems that flag abnormal values, and pharmacy systems which alert practitioners to possible interactions. Such systems typically use simple logics ‘displaying fixed lists or paragraphs as a standard response’.[27]

There are systems which are used for therapy critiquing and planning (where the system will look for inconsistencies, errors and omissions in treatment plans)[28] and image recognition and interpretation (where the system might flag abnormalities in a range of image types, including X-rays, angiograms, CT and MRI scans).[29]

Finally, there are systems which provide patient-specific assessments based on sets of patient-specific data.[30] An expert system may be described as an ‘interactive computer program that uses knowledge stored in a data base to provide solutions and explanations…in a narrow and specific area’,[31] and may include any computerised system which provides its users with ‘guidance, advice, decisions, suggestions, courses of action…which are intended to influence the user’s behaviour and which are represented …as being derived from the data and logical rules that would be used by a human being highly trained and knowledgeable in the subject matter’.[32]

Expert systems may follow simple logics or algorithms; they may be based on decision theory and cost-benefit analysis, or on symbolic problem solving.[33] Some systems suggest differential diagnoses (eg DXplain[34] or QMR), some suggest a single best explanation for a patient’s symptoms (eg Internist-1), while others interpret and summarise a patient’s record over time in a fashion sensitive to the clinical context.

Most expert systems operate within a ‘domain’,[35] a limited area of expertise such as infectious diseases or histopathology.[36]

When expert systems attempt to deal with problems outside their domain, the results are often unpredictable, and the user of the system has responsibility for ensuring that the system is not being used outside its domain.[37]

An MES is generally made up of a number of components. It will have a comprehensive and current knowledge base, drawn from high quality evidence.[38] It will have an inference engine to implement the decision rules[39] (possibly incorporating experiential learning rules).[40] It will also have a patient record database[41] and user interfaces.[42]

The MES takes the medical knowledge base and the information from the patient record database and operates on them using a complex set of diagnostic and treatment algorithms, together with feedback provided as the result of a structured series of prompts.[43]

Although referred to as algorithms,[44] the techniques used have traditionally been one or more of the following: categorical reasoning[45] (relational algebra or “if-then” rules), probabilistic reasoning[46] (quantitative handling of uncertainty and probability based on Bayes’ theorem) and symbolic reasoning (symbolic reasoning techniques as used in artificial intelligence research).[47] Additionally, higher-level techniques may be employed to deal with situations where the health data supplied are inaccurate, incomplete or inconsistent.[48] Such higher-level techniques may rely on heuristic systems which combine categorical and/or symbolic reasoning with probabilistic reasoning, and on fuzzy set theory.[49]
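To make these reasoning styles concrete, the following minimal sketch (in Python) shows a categorical ‘if-then’ rule layer feeding a simple Bayesian update. The rules, findings and probabilities are invented for illustration only and have no clinical validity.

    # Illustrative sketch only: a toy engine combining categorical
    # ("if-then") and probabilistic (Bayesian) reasoning. All rules,
    # findings and probabilities are invented and clinically meaningless.

    def bayes_update(prior: float, sensitivity: float, false_positive: float) -> float:
        """Posterior probability of a hypothesis given one positive finding."""
        evidence = sensitivity * prior + false_positive * (1 - prior)
        return (sensitivity * prior) / evidence

    # Categorical knowledge base: each rule maps a set of findings to a hypothesis.
    RULES = [
        ({"fever", "productive cough"}, "pneumonia"),
        ({"fever", "stiff neck"}, "meningitis"),
    ]

    def categorical_match(findings):
        """Return every hypothesis whose rule conditions are all present."""
        return [h for conditions, h in RULES if conditions <= findings]

    findings = {"fever", "productive cough"}
    for hypothesis in categorical_match(findings):
        # Probabilistic layer: weight the categorical match by test statistics.
        posterior = bayes_update(prior=0.05, sensitivity=0.9, false_positive=0.1)
        print(f"{hypothesis}: posterior probability {posterior:.2f}")

A real MES layers many such rules, and the heuristic and fuzzy approaches mentioned above relax the requirement that every condition match exactly.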

While acceptance of any of these techniques depends on the user’s beliefs in computer-based modelling techniques, systems ‘which have a broad application domain tend to be rule-based with mostly categorical and probabilistic reasoning’,[50] closely reflecting the approach seen in paper-based clinical practice guidelines which generally underpin the knowledge base for medical expert systems.[51]

Clinical practice guidelines (CPGs – also called ‘practice parameters’ or ‘critical pathways’)[52] have been defined as ‘systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances’ and as ‘standardised specifications for care either for using a procedure or for managing a particular clinical problem’.

CPGs have been developed by a range of bodies, including practitioner organisations, government bodies, hospitals, MCOs and third party payers.[53] The most effective CPGs are those based on ‘evidence based medicine’ (EBM).[54] EBM draws heavily on outcomes research (rather than the traditional subjective and anecdotal medical knowledge) which involves computer analysis of ‘large amounts of encounter and treatment data to determine what works and what does not in terms of yielding the desired clinical outcomes: preservation of life, reduction of symptoms, restoration of normal function’.[55] Outcomes research centres on the two key concepts of ‘outcomes measurement and outcomes management’.[56] Outcomes measurement is the statistical evaluation of the effectiveness of particular types of clinical intervention, through the analysis of morbidity and mortality resulting from the intervention and their comparison to other similar types of intervention. Outcomes management refers to the iterative application of the protocols, outcomes measurement, revision of protocols and the application of the revised protocols.[57]

Additionally, the cost of different treatment regimes can be factored into CPGs in order to produce an index of both clinical effectiveness and cost-effectiveness.[58] The capacity of CPGs to provide an index of cost-effectiveness therefore gives rise to a tension in the construction of CPGs between choices which improve and ensure the quality of care and choices aimed at cost containment.[59]
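By way of a deliberately simplified illustration (the protocol names, patient numbers and costs below are invented), the tension can be seen by computing a clinical effectiveness rate alongside a crude cost-effectiveness index:

    # Illustrative sketch only: a toy outcomes-measurement comparison of two
    # invented treatment protocols, with a crude cost-effectiveness index.

    protocols = {
        # name: (patients treated, successful outcomes, cost per patient in $)
        "protocol A": (1200, 1068, 950.0),
        "protocol B": (1150, 989, 410.0),
    }

    for name, (treated, successes, cost) in protocols.items():
        effectiveness = successes / treated            # outcomes measurement
        cost_per_success = cost * treated / successes  # cost-effectiveness index
        print(f"{name}: effectiveness {effectiveness:.1%}, "
              f"cost per successful outcome ${cost_per_success:,.0f}")

On these invented figures, protocol A is slightly more effective while protocol B costs far less per successful outcome – precisely the trade-off between quality of care and cost containment described above.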

The historic lack of a certification regime[60] for CPGs, coupled with the number of organisations preparing CPGs, means that practitioners may be faced with conflicting CPGs.[61] The problem is exacerbated where practitioners are obliged, as a result of contractual arrangements with MCOs and third party payers, to follow CPGs which may provide for treatment of reduced effectiveness but increased cost-effectiveness.[62]

2.2 The defects

An expert system could fail in a number of ways: the knowledge base could be wrong or out of date; the rules could be inappropriate; the software might contain a bug; data in the patient record database may have been entered incorrectly; or the user interface might be misleading or confusing in some way. This paper will now examine the stages of development of a decision support program, and the possible faults which may arise at each stage of that process.

In the first instance, a decision support program would be designed by a systems analyst, who would create a functional description of the necessary procedures to be used as a guide to the programmer.[63] For this, the systems analyst would draw upon the CPGs.[64] The CPGs might be defective in a number of ways. There could be a failure ‘in the analysis of the outcomes measurement data or negligence in translating such data into clinical recommendations’.[65] There could be a failure in constructing the specifics of outcomes measurement studies, reliance on data that the developer knew or ought to have known was inaccurate or insufficient, or the conscious omission ‘from a study [of] a possible effective treatment’.[66] A defect might also be said to arise where there has been some lack of good faith in the preparation of CPGs, as might be asserted where cost criteria are allowed to overshadow clinical effectiveness.[67]

Another defect may arise in circumstances where the CPGs had not been updated. With rapid advances in medical technology and treatments, the developer would have to ensure that the CPGs reflected this.[68]

As noted above, CPGs are based on ‘evidence based medicine’ or EBM. EBM has been criticised on the basis that it is about averages, and the appropriate diagnosis and treatment in an individual case may fall outside the parameters based on EBM. In other words, while EBM may be highly predictive of correct diagnosis in general, it may be incorrect in individual cases.[69] In the courts, CPGs and the underlying EBM are also of uncertain status as evidence of appropriate practice, ranging in probative value from evidence of a ‘respectable minority practice’ to evidence of ‘customary practice’.[70] It might be argued in a given case that the EBM approach was not specifically suited to a particular individual’s treatment, and was to that extent defective.

At the next stage, based on the functional description generated through the CPGs, the programmer would then create the program. This might involve the arrangement of raw data into tables, and the design of a program which can carry out a range of operations in relation to those or other databases, and carry out a series of logical operations on them.[71] The program would also have to generate appropriate user interfaces.[72]

Generally, a programmer would test the programs using artificially created test results and patient information, to assure the proper functioning of the program under all conditions.[73] Errors in typing and some errors in logic (perhaps arising from incorrect assumptions by the programmer) should be detected at this stage.[74] The final step in the creation of a program would involve its examination by the user, to ensure that there had been no misunderstanding on the part of the original systems analyst.[75] If necessary, the cycle of design, programming and testing would be repeated to overcome inconsistent understandings on the part of the user, systems analyst and programmer, who might come from different backgrounds.[76] In the case of ‘off the shelf’ systems, the user may well be a medical practitioner or consultant within the manufacturer's organisation.
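A minimal sketch of what such testing against artificially created patient data might look like is set out below; the rule, the reference range and the test cases are all invented for the purpose of illustration.

    # Illustrative sketch only: exercising a simple flagging rule with
    # synthetic patient values, including normal and abnormal cases.
    import unittest

    def flag_abnormal_potassium(mmol_per_l: float) -> bool:
        # Categorical rule: flag values outside an assumed 3.5-5.2 mmol/L
        # reference range (figures are illustrative only).
        return not (3.5 <= mmol_per_l <= 5.2)

    class TestPotassiumFlag(unittest.TestCase):
        def test_synthetic_cases(self):
            self.assertFalse(flag_abnormal_potassium(4.0))  # normal value
            self.assertTrue(flag_abnormal_potassium(2.9))   # abnormally low
            self.assertTrue(flag_abnormal_potassium(6.1))   # abnormally high

    if __name__ == "__main__":
        unittest.main()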

It is generally recognised, however, that the removal of all errors from a computer program is often impossible.[77] ‘Original program errors may go undetected for months or even years…for example if the error only becomes apparent under a unique set of circumstances …modifications …may introduce new errors, or cause previously present but unnoticed errors to become apparent’.[78]

Software defects could arise from typographical errors by the programmer who inputs the codes,[79] the programming of the inference engine to select the wrong rules to operate on the knowledge base,[80] or the use by the designer of a rule which is applied to all situations when it should be more limited in its application.[81] Errors may occur ‘if the program designer fails to anticipate an unusual input …and does not program the system to handle the input properly’[82] resulting in the breakdown of the system’s reasoning.
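The last of these defects – a rule applied to all situations when it should be more limited – can be illustrated with a hypothetical sketch; the dosing rule and the figures in it are entirely fictitious.

    # Illustrative sketch only: a fictitious dosing rule showing an
    # over-broad design, and a version confined to its validated domain.

    def recommended_dose_mg(weight_kg: float) -> float:
        # Defective design: the rule was derived from adult data but is
        # silently applied to every patient, including children.
        return weight_kg * 10.0

    def recommended_dose_mg_limited(weight_kg: float, age_years: float) -> float:
        # Safer design: refuse to extrapolate outside the rule's domain
        # rather than produce a plausible-looking but unvalidated output.
        if age_years < 18:
            raise ValueError("rule not validated for paediatric patients")
        return weight_kg * 10.0

Whether such an over-broad rule would be characterised as a design defect or a manufacturing defect is precisely the difficulty taken up in section 2.3 below.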

The patient database may contain factual errors due to human error on the input side. The user interface may also be defective, in that the outputs are misleading to the practitioner, such as where the system uses terms in a way which is counter-intuitive or otherwise misunderstood by the user practitioner,[83] or uses a graphical representation which is misleading.[84]

In the context of cybermedicine, where the practitioner might be completely disintermediated, the system may also be unable to anticipate correctly the patient’s erroneous or incomplete inputs,[85] and may provide outputs which are misleading to that patient.

2.3 Design and manufacturing defects

Because different liability rules may apply,[86] it is necessary to differentiate which defects might be considered manufacturing defects and which design defects. Manufacturing defects are ‘those which occur when the product is not manufactured according to its design’.[87] Design defects arise where there is a ‘defect in the underlying conception of the product’.[88] A finding of design defect ‘impugns not just the product under consideration but all products of that particular design’.[89]

Where manufacturing defects are alleged, a court may make comparisons between the alleged defect and the specifications of concededly non-defective products of the same design.[90] Where design defect is alleged, it may be shown that there was a reasonable non-defective design available.[91]

Design defects may be intentional or unintentional.[92] An unintentional design defect is one which, like a manufacturing defect, defeats the function of the product,[93] such as the use of material which is not strong enough for the purpose for which it is used.[94] A ‘conscious’ design defect[95] is one which gives rise to a risk of which the manufacturer is aware, and may arise where the manufacturer has deliberately sacrificed safety to achieve some other goal.[96] Arguably, a conscious design defect may include situations where the manufacturer is aware of the potential of a product to fail in an ‘acceptable’ number of situations, even though those situations may not be predictable.

In looking at medical expert systems, it would appear that some potential defects could well exist in each of these categories. The selection of EBM as the basis for the CPGs, the construction of the rule sets, the selection of the logic type for the construction of the program and the graphical representations in the user interface could all be classified as design choices. Manufacturing defects would include the more obvious hardware malfunction due to faulty components, and may include typographical errors by programmers and even the inadvertent use of out-of-date research material in the construction of the CPGs.

In relation to the computer program itself, difficulties may arise in differentiating the design from the production.[97] Because significant errors in complex systems do not reveal themselves until the system is up and running, it may not be possible to determine whether a given error occurred during production or whether it is a design flaw.[98] Indeed, Lamkin sees the design/manufacture distinction as being ‘limited’ in its application to software defects.

To summarise: MES are complex, and they may consist of hardware and software or of software alone. The system may be custom built for large institutions such as hospitals,[99] or may be mass-produced ‘off the shelf’ or ‘turnkey’. The system may be intended simply to provide access to a database or a medical library, or it may be intended to play a greater or lesser role in the diagnostic process. Diagnostic systems may produce incorrect diagnoses for a number of reasons. The defects may arise from conscious design choices or may be unintentional design or manufacturing defects. The characterisation of a defect as a design defect will have consequences for the issue of liability under US and Australian product liability law. Nevertheless, because of the nature of computer software, such a characterisation may be difficult in practice to achieve in relation to some aspects of the medical expert system.

The paper will now examine the way in which Australian product liability law under Part VA of the TPA might apply to defects of the sort described above. Part 3 of this paper will show that, except in the case of obvious manufacturing defects in MES consisting of hardware and software, product liability rules under the TPA are unlikely to have any successful application to MES defects.

3. Liability of manufacturers of medical expert systems under the product liability regime of the Trade Practices Act

3.1 Part VA of the Trade Practices Act 1974 – ‘defective goods’

The focus of this part of the paper is the question of whether under Australia’s product liability regime under Part VA of the TPA, the manufacturers of MES are liable to persons injured as a consequence of defects in those systems. At the outset it should be noted that Part VA is relatively recent[100] and that there is a relatively small volume of relevant Australian case law.[101]

Because of its extensive product liability jurisprudence, this paper will refer to US cases and commentaries for comparative purposes. Nevertheless, the history, expression and underlying policy objectives of product liability law in Australia and the USA are quite different,[102] and influenced by different constitutional constraints.

The starting point is Part VA of the Trade Practices Act 1974 (Cth), which imposes liability on manufacturers for loss caused by defective products. Section 75AD provides that:

If:

a) a corporation, in trade or commerce, supplies goods manufactured by it; and

b) they have a defect; and

c) because of the defect, an individual suffers injuries;

then: d) the corporation is liable to compensate the individual for the amount of the individual’s loss….

The meaning of the expression ‘goods having a defect’ is set out in detail in section 75AC of Part VA, in the following terms:

(1) For the purposes of this Part, goods have a defect if their safety is not such as persons generally are entitled to expect.

(2) In determining the extent of the safety of goods, regard is to be given to all the relevant circumstances including:

a) the manner in which, and the purposes for which, they have been marketed; and…

e) what might reasonably be expected to be done with or in relation to them; and

f) the time when they were supplied by their manufacturer….

Manufacturers have four defences under Part VA. Relevantly, a manufacturer will not be liable if it is established that the defect in the goods in question did not exist at the time of supply (s.75AK(1)(a)) or if the state of scientific or technical knowledge at the time when they were supplied by their actual manufacturer was not such as to enable the defect to be discovered (s.75AK(1)(c)).[103] The liability of a manufacturer will also be reduced where the relevant injury has been contributed to by an act or omission of the injured person.[104]

The terms of Part VA, in the context of MES, raise a number of questions. Are expert systems goods? What constitutes a defect? This involves a review of the statutory meaning of ‘goods having a defect’ against the indicia of safety set out in s.75AC(2). In particular, a consideration of s.75AC(2)(e), ‘what might reasonably be expected to be done with, or in relation to them’, is likely to include whether or not there was professional intervention. Could an injury resulting from a wrong diagnosis be said to be caused by a defect in the goods? Again, the issue of professional intervention arises. Does the defect have to be a defect in a physical element comprising the good, or can it be something intangible which causes a good to be unsafe in the way that it is used?

3.2 Are medical expert systems ‘goods’?

The threshold question to be answered is whether a medical expert system is a ‘good’, and more particularly, whether it is a good rather than a service. The definition of goods under the TPA is an inclusive definition and includes animals, minerals, gas and electricity.[105] Being an inclusive definition, it allows room for the operation of the ordinary meaning of the word ‘goods’ as well as setting out a list of things which must be included. In seeking to ascertain what other things might come within the definition, regard would normally be had to consistency with that list, as well as to the objects of the legislation in question.

The TPA, in addition to regulating product liability under Part VA, also regulates both sale and supply of consumer goods and services as well as a range of competitive practices. Any meaning given to the term goods would apply to the use of the term throughout the TPA, and be consistent with the different policy goals underlying the TPA.

There is little Australian judicial authority on whether computer software is a good, or on the distinction between goods and services in this regard, possibly because the term goods is distinguished from services only in a small number of provisions of the TPA which generally regulates goods and services in the same way.

The issue of whether software or coded electrical signals are goods or services has been considered in three Federal Court decisions: Re Pont Data Australia Pty Limited and ASX Operations Pty Limited and Australian Stock Exchange Limited[106] (and later, on appeal)[107] and Re Caslec Industries Pty Ltd and Windhover Data Systems Pty Ltd and David James Anderson.[108]

In Pont, the issue arose as to whether an electronic data stream was a ‘good’ for the purposes of section 49 of the TPA (which deals with price discrimination in relation to goods).[109] Wilcox J noted that the TPA contemplates that everything to which it relates is either a good or a service, and that the definition of service excludes goods.[110] His Honour then observed: ‘It cannot … be doubted that, as Parliament intended the word “goods” to be understood as including electricity, it also intended it to include encoded electrical impulses’.[111]

This analysis was later rejected on appeal by the Full Federal Court.[112] The Full Court did refer to Toby Constructions Products Pty Ltd v Computa Bar (Sales) Pty Ltd,[113] where Rogers J held that a sale of a computer system, comprising both hardware and software, was a sale of ‘goods’ within the meaning both of the Sale of Goods Act 1923 (NSW) and of the warranties implied by Part V of the TPA. The Full Court also referred to his comment, made with reference to US authorities, that he did not wish it to be thought he was of the view that software by itself may not be ‘goods’.[114] The Full Court in ASX Operations stated that the issue raised by Rogers J remained open.

In Caslec, the court accepted that the contract was for an ‘off the shelf’ software package and incidental services, and that there had been a breach of an implied warranty for the supply of services under section 74(2).[115] The Court seemed to accept, without analysis, that ASX Operations had settled the issue in relation to software.[116]

In the UK Court of Appeal decision in St Albans City Council v International Computers Ltd,[117] Sir Iain Glidewell found that software, when contained in a physical medium, was within the definition of a good for the purposes of the Sale of Goods Act 1979 (UK), s 61.[118]

In the US, both in the cases and the literature, the question of whether something is a product is linked directly with the policy imperatives underlying strict liability,[119] as well as constitutional limitations relating to the First Amendment.[120] While a discussion of strict liability for software, or indeed any ‘commercial intellect’[121] under US law[122] is beyond the scope of this paper, a number of observations might usefully be made.

Factors such as tangibility, mass production, ownership, and the possibility of correcting defects are issues which may be seen as affecting the ordinary meaning of the term ‘goods’ as well as going to the underlying policy basis for imposing strict liability.

One of the criteria used in the courts in the US is ‘tangibility’. Strict liability ‘has been applied inconsistently to utilities including gas and electricity’[123] (already within the TPA definition of goods), and also in relation to the information content of certain publications independently of their physical medium.[124] Lanetti notes that software appears to be treated as a good for the purposes of article 2B of the Uniform Commercial Code (UCC) – based on the tangible nature of the media on which it has to date been characteristically stored.[125] Such an ‘associative analysis’ breaks down however, once the software is delivered online rather than in a tangible medium such as a disk or firmware.[126]

With few exceptions, US courts have generally failed to impose strict liability on the authors or publishers of books, where reliance on the information within those books has led to injury.[127] Mintz claims that US courts have made ‘an unprincipled distinction’ between the unreasonable risk of injury from a book itself, and the eminently foreseeable risk of injury from application of the information content of the book.[128] The courts, he claims, in their deference to free speech concerns, act inconsistently with First Amendment analysis. Ultimately the analysis of product definition is conflated with the issues of causation and free speech.

There is a line of cases, however, where errors contained in aeronautical charts led to air crashes. In these cases, the information content of the chart was held to be a product to which strict liability could apply.[129] In Winter v G P Putnam’s Sons,[130] a number of observations were made in relation to products which might be similar to aircraft instrument approach charts, which, in this line of cases, had been held to be products:

Aeronautical charts are highly technical tools. They are graphic depictions of technical, mechanical data. The best analogy to an aeronautical chart is a compass – both may be used to guide an individual who is engaged in an activity requiring certain knowledge of natural features. Computer software that fails to yield the result for which it was designed may be another.[131]

In Aetna Casualty and Surety v Jeppesen,[132] one of the first of the aeronautical chart cases, the defendant's charts were drafted to assist pilots in making instrument approaches to airports. A defect in the chart caused a crash resulting in a number of deaths. The court found that the navigational chart was a defective product, that the defect (a misleading graphical representation of albeit correct information) caused the crash, and that the publisher was strictly liable under US products liability rules.[133] The court was of the view that the chart was intended to be relied upon to give an instantly understandable graphic representation of the approach, notwithstanding that the pilots might still be expected to use due care to make appropriate further enquiry.

Similar reasoning was applied in subsequent cases dealing with aeronautical charts.[134] In Fluor,[135] it was stated that although a sheet of paper ‘might not be dangerous, per se, it would be difficult indeed to conceive of a salable commodity with more inherent lethal potential than an aid to aircraft navigation that…fails to list the highest land mass …surrounding the landing site’.[136]

Some commentators have taken the view that similar reasoning to this could be applied to computer software.[137] ‘A computer program, like a navigational chart, converts data into a more accessible or usable form, the results may be passed to a physician, or to a pilot, for professional evaluation. The professional must be able to rely on the accuracy of this information if he or she is to benefit from the…special compilation of the data’.[138]

The Third Restatement[139] now defines a product as ‘tangible personal property distributed commercially for use or consumption. Other items, such as real property and electricity, are products when the context of their distribution and use is sufficiently analogous to the distribution and use of tangible personal property that it is appropriate to apply the rules stated in this Restatement’. This definition (as well as comment (d) to the definition) was cited with approval in Linda Sanders v Acclaim,[140] holding that ‘thoughts, ideas, and expressive content are not products as contemplated by the strict liability doctrine’, although again both First Amendment and proximate causation issues were inherent in the reasoning.[141]

Nevertheless, the reference within that definition to items which in their distribution and use are analogous to tangible personal property may be seen as keeping the door open to the characterisation of ‘off the shelf’ software as a product.

Another criterion used by US courts in differentiating between products and services is ownership.[142] The property rights inhering in intangible property (such as a licence) have often not been considered to be a product.[143] Nevertheless, since services cannot be owned, ‘a computer program - at the point at which it is created and can be owned – can be considered a product’.[144] This also addresses issues in relation to the mass production of computer programs – mass production being an issue at which strict product liability was aimed.

A further distinction between products and services ‘is based on the possibility of correcting future defects’.[145] Services once performed cannot be the subject of repair to fix defects, whereas products such as machines are capable of repair.[146] Brannigan makes the comparison between repairing a machine and debugging a computer program.[147]

MES may comprise hardware and software, or may consist of a software program which is independent of its media. The system may also ‘plug into’ online databases which are separate from the hardware and software elements. In the context of the TPA, there is little real doubt, in the author's view, that system hardware,[148] and hardware and software together,[149] would be goods. Software on its own, and the inputs from online databases (electrical impulses), however, may not be. Accepting the possibility that some MES (at least where supplied on a hardware medium) may constitute goods for the purposes of the TPA, how might the product liability rules apply?

3.3 Goods having a defect

Under s75AD, for liability to arise, the goods must ‘have a defect’.[150] The meaning of this expression is set out in section 75AC(1) which provides that goods have a defect ‘if their safety is not such as persons generally are entitled to expect’.[151]

This definition has been criticised as failing to set an objective benchmark.[152] Hammond states that the ‘root problem of the defect requirement is that it does not create an objective standard of liability’.[153] While this may not be significant in relation to ‘manufacturing defects’, where a product may be assessed by reference to the specification of other non-defective products, the problem is of critical significance in relation to ‘design defects’.[154] The test in s.75AC(1) recognises that there are degrees of safety and that a good does not have to be absolutely safe.[155] However, ‘like the concept of defect, safety is a relative concept, and requires an objective standard against which it may be measured’.[156]

In the context of MES this is significant because, as outlined in Part 2 of this paper, design choices may be a significant cause of misdiagnosis by a medical expert system.

Hammond concluded that this failure means, in relation to Part VA, that it imposes no stricter liability when applied to design defects than that imposed by negligence.[157] In other words, Part VA can only be meaningfully interpreted, in relation to conscious design defects, as requiring a cost benefit analysis in which alternative designs must be compared to the impugned design.[158]

Stapleton,[159] however, does not see the circularity of the definition of defect in s75AC(1) as being fatal to the coherence of a legal rule, and points to similar indeterminacy in Lord Atkin's highly effective ‘neighbour principle’ in negligence.[160] In both the European Directive[161] (upon which Part VA was based) and US law, product liability rules do not impose a strict liability regime in all circumstances. Stapleton states that for strict liability to operate, two conditions must be satisfied: liability must be judged by reference to the product (rather than the reasonableness of the manufacturer's conduct); and there must be no express defence triggered by the conduct of the manufacturer.[162]

In the USA, until 1998, the product liability rule was set out in §402A of the Restatement (Second) of Torts[163] as ‘one who sells any product in a defective condition unreasonably dangerous to the user or consumer …is subject to liability for physical harm thereby caused …’. The two tests which developed in US case law to deal with design defects were the ‘consumer expectation’ test and the ‘risk-utility’ test.[164] The consumer expectation test, which used the expectations of ordinary consumers as a benchmark, was found to deal inadequately with complex designs.[165] The risk-utility test, which gradually came to replace the consumer expectations test in a number of States,[166] required a balancing of the accident costs associated with a product against its benefits.[167]

In 1998, the Third Restatement imposed liability for harm on one ‘who sells or distributes a defective product’.[168] The Third Restatement now states explicitly that a product is defective if it has manufacturing or design defects or has inadequate instructions or warnings, and states that a product is defective in design:

When the foreseeable risks of harm posed by the product could have been reduced or avoided by the adoption of a reasonable alternative design…and the omission of the alternative design renders the product not reasonably safe.[169]

Similarly, under the European Directive (and, by analogy, Part VA), Stapleton argues that the combined effect of the defect provision and the development risk defence[170] means that manufacturers will always escape liability for conscious design[171] choices if they have exercised reasonable care.[172]

The argument has a number of elements. First, the issue of defectiveness turns on the relative costs and benefits of the product measured against a broad community standard[173] (‘persons generally’).[174] Second, the design is the ‘crystallization of the manufacturer’s behaviour in making and supplying the product’, and ‘judging the costs and benefits of the product supplied by the defendant after this behaviour is equivalent to judging the behaviour in making…the product’.[175] Whether the liability imposed for design defects is strict depends on the time at which the costs and benefits are evaluated.[176] Benefits of a product are to be assessed at the time of supply.[177]

If the costs are also judged at the time of supply, then only reasonably foreseeable risks (reasonably discoverable defects) should be taken into account in judging liability, thus importing no more than a negligence standard.[178] ‘Only if costs not reasonably discoverable can also be brought into the account against a product will a defendant…be exposed by the product rules to a liability…more extensive than negligence’.[179]

The TPA provides a defence where it is established that the ‘state of scientific or technical knowledge at the time when they [the goods] were supplied by their actual manufacturer was not such as to enable that defect to be discovered’.[180] A wide view of this defence requires that the state of such knowledge at that time would not allow such defects to be reasonably discoverable.[181] A narrow view would imply that the state of such knowledge at that time would not allow such defects to be discoverable at all.[182] In a 1997 case,[183] the European Court of Justice handed down a decision against the European Commission, which had sought a declaration, in effect, that the wide interpretation of the parallel European defence provision was inconsistent with the Directive.[184] Subsequently, in 2000 in Australia, in Graham Barclay Oysters Pty Ltd v Ryan,[185] Justice Lindgren observed, obiter dicta, that s75AK(1)(c) could be construed as importing a modified notion of reasonableness or practicability, which might require an assessment of the economic costs of testing and the turnaround time for the results of a test.[186] On this basis, reasonable care in discovering a defect would be a defence to a claim that a design choice is defective, and therefore liability is not strict.[187]

In the context of MES, this line of reasoning would require that the safety of such a system be ascertained by reference to a cost-benefit (or risk-utility) analysis, where the alleged defect arises from a conscious design choice. In relation to manufacturing errors, however, the liability would be strict.[188] Such an analysis may involve any of the factors set out in the inclusive list in section 75AC(2), assessed at the relevant time of supply.

If Australian courts were to adopt the requirement of proof of a ‘reasonable alternative design’, it would necessarily involve the side by side comparison of two complex systems. In doing this, Australian courts might quarantine aspects of the design of the whole system and compare like elements, but it is arguable that the functioning of the system as a whole might not be predictable from the functioning of its parts, and failure of the system could also involve the synergistic operation of both manufacturing and design defects.

Such a process of comparison of individual design elements may nevertheless be what the courts have to do. Clearly the elements to be compared would be those impugned as having been the proximate cause of the injury. As outlined above,[189] the CPGs, the selection of the inference engine, the programming of the software and the graphical representation of the user interface may each be impugned. In some circumstances, alternative software programming could be developed from the systems analyst's design and compared as a reasonable alternative design. Different systems analyses could themselves be compared, an exercise which would again raise the status of the underlying EBM, what it is evidence of, and the existence of competing CPGs.

As noted above,[190] however, medical expert systems are complex, there may be inherent difficulties in categorizing software failures as manufacturing or design defects and the extent to which such software is modified after supply complicates the determination of the time at which a defect might have come into being.

How would a court deal with a situation where, as a matter of fact,[191] it could not determine the origin of the failure, or determined that the failure arose from a combination of design and manufacturing errors, or could not characterise a particular defect as a manufacturing or design defect? Ultimately the court might resort to the application of ‘res ipsa loquitur’ as an evidentiary device, resulting perhaps in a convergence of the cost-benefit and strict approaches. It might lead Australian courts to adopt a consumer expectation test (or a ‘persons generally’ expectation test) as is the case in a minority of US jurisdictions,[192] where failure of a product to meet consumer expectations ‘suffices in and of itself to establish liability in cases predicated on design defect’.[193]

3.4 Entitled to Expect…

While consumer expectations may not be viewed as an appropriate mechanism for the setting of objective standards under the Restatement, the TPA does refer to the safety that persons generally are entitled to expect.[194] It also seems that the factors set out in s75AC(2) should be considered not only in determining the safety of a good but also in determining whether persons generally are entitled to expectations of safety; nothing in the words of that section precludes it also serving that purpose.

It is probably true to say that where a product has been made in accordance with a mandatory safety standard, persons generally would be entitled to expect that it would not cause injury. It would be a curious reading of the plain words of the Act if the expectation that a person was entitled to have about the safety of a medical device was not based, in part, on its compliance with mandatory Commonwealth standards.

While compliance with a mandatory Commonwealth standard is not listed as a relevant circumstance to which regard must be had under s75AC(2) for determining the safety of a good, the application of such standards to goods is clearly contemplated by Part VA.[195] This provides, negatively, that an inference that goods have a defect is not to be made only because of compliance with a standard which could have been safer.

Given requirements of the TGA and the incorporation of a risk-utility test in the Essential Principles under that Act, the registration of a medical expert system under the TGA should be sufficient to entitle an expectation of a relatively high level of safety, at least in respect of the design features of the system.

3.5 Mediation and safety

The mode of operation of medical expert systems, for the most part, contemplates that the medical expert system is an aid to diagnosis, its effect mediated by a trained professional. In addition to the issue of causation, professional mediation is relevant in determining the actual safety of the goods under the TPA.

Of the list of factors set out in s75AC(2), the most immediately relevant to medical expert systems appears to be s75AC(2)(e) – ‘what might reasonably be expected to be done with or in relation to [the goods]’. Presumably this would also require an analysis of ‘the purposes for which [the goods] have been marketed’.[196] It is to be supposed that medical expert systems will have been marketed to practitioners (such as in the GE advertisement during Operation Iraqi Freedom referred to above) to provide diagnostic assistance in the performance of their professional duties. It is also to be assumed that a manufacturer would assert that such systems are not intended to displace the professional judgment of the practitioner.

Nevertheless, as a matter of practice, the medical expert system may disintermediate the practitioner to the point where ‘it might well be reasonably expected’ that the system would be used to provide advice for use by a patient without any real mediation by a practitioner. Such disintermediation may arise through a number of mechanisms: representations to the practitioner about the extent to which the system can reliably replace the diagnostic process; the domain of the system (for example, systems dealing with emergency procedures, where there is no real opportunity for second-guessing the system's outputs); and contractual arrangements under which doctors are required to use, and confine themselves to, the outcomes of particular systems. It may also occur as the result of representations made directly to the patient.[197] In some cases, the practitioner would have no reason or capacity to check the accuracy of particular databases.[198]

Notwithstanding the professional responsibility of a practitioner to mediate the technology, as a matter of practice she may not. It would seem reasonable to expect therefore that in at least some uses of the goods, the diagnosis produced by such systems would be relied on directly by the practitioner. Products must be designed so that they are safe if used in other reasonably expected ways.[199] A manufacturer may need to take account of a person’s unintended use, reuse, misuse or abuse of its products[200] particularly where the plaintiff is injured through another’s reasonably expected use over which the injured party may have no control.

3.6 Defences

It is a defence under section 75AK if it is established that the defect did not exist at the time of supply.[201] It has already been noted that there are a number of circumstances relevant to medical expert systems which complicate the interpretation of this ground of defence. First, in dealing with software, the final software program may still be the subject of rectification even after supply, for fine-tuning and the removal of bugs. It may be very hard to determine whether a defect in a program originated before supply or as a result of modifications made after supply.

Some of the software used in these systems may also modify its own operation on the basis of experience. Such programs may ‘learn’. The question then becomes one of whether the changes caused by this feature of the design were at the heart of any defect and therefore whether this feature itself may be said to be a defect.

Finally, these systems must be updated regularly to take account of changes arising through the outcomes management at the heart of EBM. A manufacturer may be able to establish that the defect is the result of a change to a particular CPG as the result of some flaw in the updated EBM input. This may raise sharply the question of whether such new information could be characterised as a defective product in its own right, or merely as a source of defect in an existing product.

Another defence provided by section 75AK is that the state of scientific or technical knowledge at the time when the goods were supplied was not such as to enable the defect to be discovered. As noted above,[202] the width of this defence is the subject of some difference of opinion. In Australia, it would appear that there is some authority that the wider view may prevail.[203]

Clearly, this defence is only readily understandable in relation to design defects. Again the difficulty in characterising some defects as design or manufacturing defects in connection with software is problematic. Nevertheless, the main context in which this defence is likely to be debated is whether the CPGs or indeed the EBM underpinning them is to be regarded as determinative of best practice – or simply an acceptable minority opinion.

The debate over the probative value or status of CPGs keeps the door open on the meaning of the expression ‘such as to enable that defect to be discovered’. The defence does not refer to a state of knowledge sufficient for a design choice to be preferred, but simply to one sufficient for the existence of a defect to be discovered.

We have seen that a medical expert system may be mass-produced or custom-made. It may be supplied as a product consisting of hardware and software, or it may be supplied simply as software through an online transaction. The system may do no more than provide access to a database or a knowledge base, or it may be designed to replicate the diagnostic decision making of a practitioner. The system in its design may enable a practitioner to exercise professional judgment at each step, or it may operate in a non-transparent fashion requiring a high level of reliance, in which aspects of diagnosis or treatment are dictated by cost-benefit considerations. Its domain may be limited to illnesses of a chronic nature, or it may be intended for use in emergency settings.

These factors may determine in a given case that the system is or is not a good for the purposes of Part VA TPA. These factors may also determine whether the system had a defect at the time of supply, and whether the defect was a design defect or a manufacturing defect. They may also be relevant in determining the extent to which the proximate cause of the injury was the good, or the mediation by a professional.

The fact of professional mediation impacts both on the definition of safety under Part VA and on the issue of causation. This last factor might be overcome where it was clear that the professional was forced into reliance: for example, where the practitioner was contractually obliged to adhere to certain treatment regimes, or where the opportunity for applying professional judgment was restricted because the system was the only source of technical data. The absence of free speech concerns in an Australian context might also assist in showing that the information output of the system was a proximate cause of injury.

On balance, it would appear that, except perhaps in the case of obvious manufacturing defects in the hardware of an MES, there is a range of obstacles to the effective application of Part VA to MES. A manufacturer may still be guilty of an offence, however, if an MES fails to meet the standards of safety set out in the Therapeutic Goods Amendment (Medical Devices) Act 2002 (Cth). The next Part of this paper examines the application of the TGA and compares some of the tests under Part VA and the TGA. This paper takes the view that the TGA provides a more favourable regime for the innovation sought in the Taskforce Report.

4. Deterrence, Loss Spreading, Innovation and the TGA

The stated aim of this paper was to examine whether the goal stated in the Taskforce Report, a commitment to the adoption of electronic decision support in Australia, was consistent with the product liability regime under Part VA TPA. The policy objectives of Part VA are not expressly stated, as might have been done through an objects clause, although extrinsic materials suggest that consistency with the European regime, economic efficiency and justice were ultimately the goals of the legislation. Loss spreading and proper pricing may achieve economic efficiency (consistent with product liability rules imposing strict liability).[204] Justice may require both compensation and the deterrence of defective design.[205]

There is, however, a tension between the goal of loss spreading (allowing manufacturers to apply proper pricing and insurance) and the goal of deterrence.[206] This tension may be addressed by separate instruments to achieve each goal.[207] While it is doubtful that Part VA would provide an effective vehicle for recovery by those injured by the defective design of MES, deterrence of defective design has arguably been achieved, in the context of MES, through the amendments to the TGA dealing with medical devices.

The Therapeutic Goods Amendment (Medical Devices) Act 2002 (Cth) introduced a new regime for the regulation of medical devices. The TGA, as amended,[208] makes it an offence to supply a medical device which does not comply with the ‘essential principles’ set out in the regulations.

The new regime establishes a device classification structure which differentiates between devices according to their level of risk, intended use and degree of invasiveness of the human body.[209] It sets out essential principles for quality, safety and performance which must be met before medical devices may be supplied.[210] The TGA also provides for the use of recognised standards to satisfy the requirements of the essential principles.[211] The classification scheme determines the ‘conformity assessment procedures’ a manufacturer may use to have the device assessed for conformity with the requirements for its class.[212] The role of the TGA is to certify that the conformity assessment procedures selected by the manufacturer are appropriate and have been applied.[213]

4.1 Is an MES a medical device?

While Part VA and, prior to 2002, the TGA both dealt with ‘goods’, the 2002 amendments to the TGA established a stand-alone definition of medical devices which does not characterise them as goods. The definition also clearly contemplates that a medical device may include software and may form part of a system in conjunction with other devices. The definition of medical device includes:

…any instrument, apparatus, appliance, material or other article (whether used alone or in combination, and including the software necessary for its proper application) intended, …for the purpose of one or more of the following:

1) diagnosis, prevention, monitoring, treatment or alleviation of disease;

and that does not achieve its principal intended action in or on the human body by pharmacological, immunological or metabolic means.

As an MES, including its software, is an appliance intended for the purpose of diagnosis (and possibly monitoring) that does not achieve its principal intended action through the means listed, the definition would seem to apply to MES. It thus overcomes the interpretational difficulties discussed above as to whether an MES is a good.

4.2 The essential principles

The ‘essential principles’ are set out at Schedule 1 of the Therapeutic Goods (Medical Devices) Regulations 2002 (Cth). They require that a device be designed and produced in a way which does not compromise the safety of a patient under the conditions, and for the purposes, for which the device was intended and, if applicable, when used ‘by a user with appropriate technical knowledge, experience, education or training’.[214]

The essential principles also relevantly require: that any risks associated with the use of the device be acceptable when weighed against the intended benefit to the patient;[215] that ‘the solutions adopted by the manufacturer for the design and construction of a medical device must conform with safety principles, having regard to the generally acknowledged state of the art’; and that the ‘benefits to be gained from the use of the medical device for the performance intended by the manufacturer must outweigh any undesirable side effects arising from its use’.[216]

The wording of these principles seems to be directed at many of the issues discussed above in relation to Part VA.

Firstly, Essential Principle 1 makes it quite clear that the patient’s wellbeing is contemplated in addition to that of any user of the device (eliminating privity considerations). It also makes it clear that the device’s safety relates to its use ‘under the conditions’ and ‘for the purposes’ intended by the manufacturer, and it specifically refers to the circumstance that the device may be operated by a professional. This wording achieves two results: diagnostic systems will be judged on their potential to injure through the provision of defective information (the ‘purposes intended’), while a manufacturer is allowed to manage its liability through specifications circumscribing the intended purpose. It also makes it clear that the safety of a device will be judged in relation to the level of intermediation contemplated by the manufacturer.

Two Essential Principles (2 and 6) require a risk-utility analysis. Principle 2 requires such an analysis in respect of ‘the patient’, while Principle 6 is expressed in more general terms.[217] It is not easy to interpret the requirement of Principle 6 unless it relates to ‘persons in the position of a patient’. If that is the case, then the more general risk-utility requirement in Principle 6 might relate, in the case of MES, to issues such as cost-effectiveness rules built into the CPGs. In any event, the provisions make it quite plain that the design of a medical device will be assessed in part on a risk-utility basis. Presumably such an analysis could admit the trade-off between what is best for all patients and what is best for most patients, a trade-off which to some extent underpins EBM.
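
By way of illustration only, the following sketch (with invented figures) shows the kind of cost-effectiveness rule a CPG might embed: the option that is best for most patients is preferred over the option that is best for every patient, which is the trade-off a risk-utility assessment under Principles 2 and 6 would need to examine.

# Hypothetical treatment options: (name, benefit score, cost per course).
TREATMENTS = [
    ("broad-spectrum therapy", 0.95, 400.0),
    ("targeted therapy", 0.90, 120.0),
]

def choose_treatment(options, cost_ceiling):
    # An embedded cost-effectiveness rule: exclude options above the
    # ceiling, then take the highest remaining benefit. Patients who
    # would respond only to an excluded option bear the residual risk.
    affordable = [t for t in options if t[2] <= cost_ceiling]
    return max(affordable, key=lambda t: t[1])

print(choose_treatment(TREATMENTS, cost_ceiling=200.0))
# ('targeted therapy', 0.9, 120.0): best for most patients, not for all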

Finally, Essential Principle 2(a) requires that a medical device conform with safety principles ‘having regard to the generally acknowledged state of the art’. This requirement in effect mirrors the ‘wide interpretation’ of the defence in s75AK of Part VA: it does not require manufacturers to exclude every defect which might be identified through any enquiry whatsoever, but only those which would be discoverable from the generally acknowledged state of the art.

Where a standard has been promulgated under the TGA in relation to a particular device, compliance with that standard in respect of a particular matter will be taken as compliance with the Essential Principle to which it relates.[218] Where standards are promulgated in relation to MES, this would also tend to give manufacturers more certainty than under Part VA.

In summary, the TGA provides a regime which will act as a deterrent to defective design of MES. The provisions of the TGA and its Essential Principles are drafted in such a way that there is more certainty in their operation, and they shield manufacturers to a greater extent than Part VA does.

5. Conclusion

Australia is committed to the adoption of electronic decision support as a vehicle for the reduction of medical error. The extent to which this commitment will be successful depends in part on the willingness of manufacturers to develop these systems, their uptake by the medical profession and their acceptance by the community. One factor in each of these is the extent to which defects in these systems may result in injuries to those who rely on them, and in any consequent liabilities.

In Australia, two important sources of law in this respect are Part VA TPA and the TGA. The former sets out product liability rules making manufacturers liable for injuries caused by defects and providing for recovery by the injured party. The latter makes it an offence to supply medical devices which do not meet essential safety principles.

For a number of historical and interpretational reasons, Part VA is unlikely to provide an effective cause of action in relation to injuries associated with MES. The difficulties include the characterisation of computer systems and the information within them as ‘goods’, the characterisation of software defects as manufacturing or design defects, the determination of whether a defect arose before or after supply, and the issue of whether reliance on information (mediated or otherwise) can itself be said to be a proximate cause of injury. Were Part VA to have operation, it is unlikely to be such as to impose strict liability on the manufacturers of MES for design defects.

To the extent that Part VA is unlikely to have any clear or extensive application to medical expert systems, or to impose strict liability for design defects, it seems unlikely that it will have any significant negative impact on the manufacture and development of such systems.

The failure of Part VA to provide a deterrent to defective design through recovery by injured parties may well be offset by the new regime for regulating medical devices under the TGA. The relevant provisions of the TGA overcome a number of the interpretational difficulties that affect Part VA TPA, and allow devices to be assessed on the basis of the purposes for which, and the conditions under which, they were intended to operate.

In consequence, it is the view of this paper that the legislative regimes relevant to liability for defective MES are consistent with the encouragement of their development and uptake within Australia.

BIBLIOGRAPHY

American Law Institute, ‘Restatement (Third) of Torts: Product Liability’, 1998.
Roy W Arnold, ‘Note: The persistence of caveat emptor: Publisher immunity from liability for inaccurate factual information’, (1992), 53 U Pitt. L. Rev. 777.
Commonwealth of Australia, National Electronic Decision Support Taskforce, ‘Electronic Decision Support for Australia’s Health Sector’, Nov 2002.
Commonwealth of Australia, ‘Australian Medical Device Guidelines: An Overview of the New Medical Devices Regulatory System: Guidance Document 1’, 2002.

Donald Ballman, ‘Commentary: Software Tort: Evaluating Software Harm by Duty of Function and Form’, (1996-1997), 3 Conn Ins L. J. 417, accessed through LEXIS.

Robert Bradgate, ‘Beyond the Millennium - The Legal Issues: Sale of Goods Issues and the Millennium Bug’, 1999 (2) The Journal of Information, Law and Technology (JILT).

Vincent M Brannigan and Ruth E Dayhoff, ‘Liability for Personal Injuries Caused by Defective Medical Computer Programs’, 1981, 7 Am. J. L. and Med. 123.

Lowell Brown and Joan Procopio, ‘Sailing through uncharted waters: outcomes measurement, practice guidelines and the law’, (1995) 16 Whittier L. Rev. 1021.

GTN Burger, AM Van Ginneken, H Hollema, ‘Computer based Diagnostic Support Systems in Histopathology: What Should They Do?’, 2001:

http://adams.mgh.harvard.edu/PDF_Repository/1005_BURGER.PDF

Bob Carlson, ‘Technology Offers an Answer to “Information Overload”’, 1996, Managed Care:

http://www.managedcaremag.com/archives/9612/MC9612.clinicalsoft.shtml

Enrico Coiera, ‘Artificial Intelligence in Medicine’, taken from Chapter 19 of ‘The Guide to Medical Informatics, the Internet and Telemedicine’, 1997:

http://www.coiera.com/aimd.htm

Lisa Dahm, ‘Restatement (Second) of Torts section 324A: An innovative theory of recovery for patients injured through the use or misuse of health care information systems’, (1995) 14 J. Marshall J Computer & Info L 73.

Ian Dallen, ‘Can Defendants Rely on “State of the Art” Defence in Part VA’, Corrs Chambers Westgarth at:

http://www.corrs.com.au/WebStreamer?page_id=2297

Gail E Evans and Brian F Fitzgerald, ‘Information Transactions under UCC Article 2b: Ascendancy of Freedom of Contract in the Digital Millenium’, [1998] UNSWLJ 56, at:

http://www.austlii.edu.au/au/journals/UTSLR/1998/46.html

Jonathan K Gable, ‘An Overview of the Legal Liabilities Facing Manufacturers of Medical Information Systems’, (2001) 5 Quinnipiac Health L.J. 127.

Marnie Hammond, ‘The defect test in Part VA of the Trade Practices Act 1974 (Cth): defectively designed?’, (1998) 6 TLJ 29.

Brian Hurwitz, Clinical Guidelines and the Law – Negligence, Discretion and the Law, Radcliff Medical Press, 1998.

Stephen S Hyde, ‘The Last Priesthood: The Coming Revolution in Medical Care Delivery’, Cato Regulation:

http://www.cato.org/pubs/regulation/reg15n4h.html

Jocelyn Kellam and Bettina Arste, ‘Current Trends and Future Directions in Product Liability in Australia’, (2000) Wm Mitchell L Rev 141.

Brian H Lamkin, ‘Medical Expert Systems and Publisher Liability: a Cross Contextual Analysis’, (1994) 43 Emory L. J. 731.

Daniel McNeel Lane Jr, ‘Publisher Liability for Material that Invites Reliance’, (1988), 66 Tex L. Rev. 1155.

David W Lannetti, ‘Toward a revised definition of “product” under the Restatement (Third) of Torts:Product Liability’, (2000) 55 Bus Law 799.

Nathan D Leadstrom, ‘Internet Web Sites As Products Under Strict Liability: A Call for An Expanded Definition of Product’, (2001) 40 Washburn L. J. 532.

Teng Liaw, ‘Decision support in clinical practice’, Ch. 14 Health Informatics:

http://infocom.cqu.edu.au/HNI/BooksOnline/chapter_14.pdf

Jonathan Mintz, ‘Strict Liability for Commercial Intellect’, (1992) 41 Cath.U.L.Rev. 617.

Brian Monnich, ‘Bringing Order to Cybermedicine: Applying the Corporate Practice of Cybermedicine Doctrine to Tame the Wild Wild Web’, (2001) 42 B.C.L. Rev 455.

Mark A Musen, Yuval Shahar, Edward H Shortliffe, Clinical Decision Support Systems, Chapter 16, at:

www.ie.bgu.ac.il/mdss/ch16-finalpdf

Frank D Nguyen, ‘Regulation of Medical Expert Systems: A Necessary Evil?’, (1994) 34 Santa Clara L. Rev. 1187.

Lars Noah, ‘Authors, Publishers, and Product Liability: Remedies for Defective Information in Books’, (1998) 77 Or. L. Rev. 1195.

Lars Noah, ‘Medicine’s Epistemology: Mapping the Haphazard Diffusion of Knowledge in the Biomedical Community’, (2002) 44 Ariz. L. Rev. 373.

Daniel T Perlman, ‘Who Pays the Price of Computer Software Failure’, (1998) 24 Rutgers Computer & Tech. L. J. 383.

Nancy Plant, ‘The Learned Intermediary Doctrine: Some New Medicine for an Old Ailment’, (1996) 81 Iowa L. Rev. 1007, accessed through LEXIS.

Timothy A Pratt and John F Kuckelman, ‘The Learned Intermediary Doctrine and Direct to Consumer Advertising for Prescription Drugs’:

http://www.thefederation.org/public/Quarterly/Fall000/pratt.htm

Megan Richardson, ‘Towards a test for strict liability in tort: a modified proposal for Australian product liability’ (1996) 4 TLJ 23.

Arnold Rosoff, ‘Evidence Based Medicine and the Law: The Courts Confront Clinical Practice Guidelines’, Journal of Health Politics, Policy and Law, April 2001.

Arnold Rosoff, ‘The role of clinical practice guidelines in health care reform’, Summer 1995, 5 Health Matrix 369.

Arnold J Rosoff, ‘Symposium: On being a physician in the electronic age: peering into the mists at point-&-click medicine’, (2002) 46 St Louis L.J. 111.

Megan L Scheetz, ‘Note: Toward Controlled Clinical Practice Guidelines: the Legal Liability for Developers and Issuers of Clinical Pathways’, (1997), 63 Brooklyn L. Rev 1341.

Ruth Ellen Smalley, ‘Will a Lawsuit a Day Keep the Cyberdocs Away’, (2001) 7 Rich J.L. & Tech. 29.

Jane Stapleton, ‘Comments: The conceptual imprecision of “strict” product liability’, (1998) 6 TLJ 19.

Jane Stapleton, ‘International Torts: A Comparative Study: Restatement (Third) of Torts: Products Liability, an Anglo-Australian Perspective’, (2000) 39 Washburn L.J. 363.

Nicolas P Terry, ‘Cyber-Malpractice: Legal Exposure of Cybermedicine’, (1999) 25 Am. J. L. and Med. 327.

Nicolas P Terry, ‘When the “Machine That Goes ‘Ping’” Causes Harm: Default Torts Rules and Technologically-Mediated Health Care Injuries’, (2002) 46 St Louis U.L.J. 27, copy on file with author.

Nicolas P Terry, ‘An eHealth Diptych: The Impact of Privacy Regulation on Medical Error and Malpractice Litigation’, 27 Am J L. and Med 361, copy on file with author.


* Paul Hynes is Special Counsel with national law firm Hunt & Hunt, and is Adjunct Professor of e-Commerce Law, Commercial Law and e-Government Law at the University of Canberra.

[1] Commonwealth of Australia, National Electronic Decision Support Taskforce, ‘Electronic Decision Support for Australia’s Health Sector’, Nov 2002 (‘The Taskforce Report’).

[2] Taskforce Report, above n.1, at p 16.

[3] Taskforce Report, above n.1, at p 10.

[4] Taskforce Report, above n.1, at p.1.

[5] Nicolas P Terry, ‘An eHealth Diptych: The Impact of Privacy Regulation on Medical Error and Malpractice Litigation’, 27 Am J L. and Med 361, at pp.374 – 377.

[6] Arnold J Rosoff, ‘Symposium: On being a physician in the electronic age: peering into the mists at point-&-click medicine’, (2002) 46 St Louis L.J. 111.

[7] Megan L Scheetz, ‘Note: Toward Controlled Clinical Practice Guidelines: the Legal Liability for Developers and Issuers of Clinical Pathways’, (1997) 63 Brooklyn L. Rev 1341; Arnold Rosoff, ‘The role of clinical practice guidelines in health care reform’, (1995) 5 Health Matrix 369, at pp.374 -377.

[8] Taskforce Report, n.1 supra, at p 61, referring to a paper prepared for the Department of Health and Aged Care by B Milstein, ‘Legal Issues in General Practice and Computerisation’.

[9] The technologies are described in greater detail in the text accompanying notes 23ff infra. This paper will be focussing on systems which assist in the diagnosis of medical conditions. Such systems are commonly referred to as Medical Expert Systems (MES) or Clinical Diagnosis Support Systems (CDSS). In this paper reference will be made generally to MES.

[10] See generally Nicolas P Terry, ‘When the “Machine That Goes ‘Ping’” Causes Harm: Default Torts Rules and Technologically-Mediated Health Care Injuries’, 46 St Louis U.L.J. 27 (2002).

[11] Jonathan K Gable, ‘An Overview of the Legal Liabilities Facing Manufacturers of Medical Information Systems’, (2001) 5 Quinnipiac Health L.J. 127, at pp.148 – 149; Brian H Lamkin, ‘Comments: Medical Expert Systems and Publisher Liability: a Cross Contextual Analysis’, (1994) 43 Emory L. J. 731, at pp.754 – 756.

[12] Lamkin, above n.11 at pp.736 – 740; Gable, above n.11, at p 140; Frank D Nguyen, ‘Regulation of Medical Expert Systems: A Necessary Evil?’, (1994) 34 Santa Clara L. Rev. 1187, at p.1195.

[13] Lamkin, above n.11 at p.740; Gable, above n.11, at pp.145 – 146; Nguyen, n.12 supra at p.1198.

[14] See generally: Nathan D Leadstrom, ‘Internet Web Sites As Products Under Strict Liability: A Call for An Expanded Definition of Product’, (2001) 40 Washburn L. J. 532, Daniel McNeel Lane Jr, ‘Publisher Liability for Material that Invites Reliance’, (1988) 66 Tex L. Rev. 1155, Jonathan Mintz, ‘Strict Liability for Commercial Intellect’, (1992) 41 Cath.U.L.Rev. 617, Lars Noah, ‘Authors, Publishers, and Product Liability: Remedies for Defective Information in Books’, (1998) 77 Or. L. Rev. 1195, Roy W Arnold, ‘Note: The persistence of caveat emptor: Publisher immunity from liability for inaccurate factual information’, (1992) 53 U Pitt. L. Rev. 777.

[15] Winter v GP Putnam’s Sons, 938F.2d 1033 (9th Cir.1991).

[16] Birmingham v Fodor’s Travel Publications, Inc, 833 P.2d. 70 (Hawaii 1992). The High Court's recent decision in Swain v Waverly Municipal Council [2005] HCA 4 (9 February 2005) stands in contrast to the decision in Birmingham. In Swain, a man was seriously injured when he dived into waves on part of a beach between flags used by the Council to indicate a safe swimming area. In effect, the decision of the High Court suggests that the fact that the information was intended to be relied on created a relationship between the injured party and the Council, sufficient to sustain liability.

[17] Jones v J B Lippincot Co, 694 F.Supp 1216 (D.Md. 1988).

[18] See note above n.14.

[19] Appleby v Miller, 554 NE 2d 773 (1990).

[20] Mintz, above n.14 at pp.620 – 621; Noah, above n.14 at p.1208; Arnold, above n.14 at pp.799 – 802.

[21] See Nancy Plant, ‘The Learned Intermediary Doctrine: Some New Medicine for an Old Ailment’, (1996) 81 Iowa L. Rev. 1007, Timothy A Pratt and John F Kuckelman, ‘The Learned Intermediary Doctrine and Direct to Consumer Advertising of Prescription Drugs’:

http://www.thefederation.org/public/Quarterly/Fall000/pratt.htm

[22] Brian Hurwitz, Clinical Guidelines and the Law – Negligence, Discretion and the Law, Radcliff Medical Press, 1998, at 67 – 68; Scheetz, above n.7, at p.1354.

[23] Taskforce Report above n.1 at p 1.

[24] Mark A Musen, Yuval Shahar, Edward H Shortliffe, Clinical Decision Support Systems, Chapter 16, at www.ie.bgu.ac.il/mdss/ch16-finalpdf at p.602.

[25] Musen supra at n.24 at p.602; see also Jonathan K Gable, ‘An Overview of the Legal Liabilities Facing Manufacturers of Medical Information Systems’, (2001) 5 Quinnipiac Health L.J. 127, at p.135.

[26] Ibid.

[27] Musen supra at n.24 at p.602, Enrico Coiera, ‘Artificial Intelligence in Medicine’, taken from Chapter 19 of The Guide to Medical Informatics, the Internet and Telemedicine, 1997 , http://www.coiera.com/aimd.htm at p.3.

[28] Coiera above n.27 at p.4.

[29] Id.

[30] Musen above n.24, at 602; Coiera above n.27 at p3.

[31] Nguyen, above n.12.

[32] Lamkin, above n.11 p.733 - 734.

[33] Musen above n.24, at 602.

[34] DXplain is a clinical support system developed at Massachusetts General Hospital – it takes a set of clinical findings, including signs, symptoms and laboratory data, and produces a ranked list of diagnoses, providing justification for each and suggesting further investigations – see Coiera above n.27 at p.4.

[35] Lamkin above n.11 at p.734.

[36] See for example GTN Burger, AM Van Ginneken, H Hollema, ‘Computer based Diagnostic Support Systems in Histopathology: What Should They Do?’, 2001, http://adams.mgh.harvard.edu/PDF_Repository/1005_BURGER.PDF

[37] Lamkin above n.11 at p.735.

[38] Teng Liaw, ‘Decision support in clinical practice’, Ch. 14 Health Informatics, http://infocom.cqu.edu.au/HNI/BooksOnline/chapter_14.pdf at (2); Nguyen above n.12 at p.1191; Lamkin above n.11 at p.735.

[39] Teng Liaw, above n.38 at (2).

[40] Ibid.

[41] Ibid.

[42] Lamkin above n.11 at p.735.

[43] Rosoff, above n.5, at (3).

[44] Ibid.

[45] Teng Liaw, above n.38 at (3).

[46] Ibid.

[47] Ibid.

[48] Ibid.

[49] Teng Liaw above n.38 at (3).

[50] Id. See also Musen, above n.24 at pp.608 – 610.

[51] Rosoff, above n.6, at (3).

[52] Arnold Rosoff, ‘Article: The role of clinical practice guidelines in health care reform’, Summer 1995, 5 Health Matrix 369, at p.369.

[53] Scheetz, above n.7 at p.1348.

[54] Rosoff, above n.6, at 4.

[55] Ibid.

[56] Scheetz, above n.7, at p.1350.

[57] Lowell Brown and Joan Procopio, ‘Sailing through uncharted waters: outcomes measurement, practice guidelines and the law’, (1995) 16 Whittier L. Rev. 1021, at p.1021; Scheetz, above n.7, at p.1350.

[58] Rosoff, above n.6, at 4.

[59] Ibid.

[60] See Arnold Rosoff, ‘Evidence Based Medicine and the Law: The Courts Confront Clinical Practice Guidelines’, Journal of Health Politics, Policy and Law, April 2001, http://www.dukepress.edu/jhppl/

[61] Rosoff, above n.6 at text accompanying n.12.

[62] Rosoff above n.52 at p.392, Rosoff, above n.6 at text accompanying notes 3 to 7.

[63] Vincent M Brannigan and Ruth E Dayhoff, ‘Liability for Personal Injuries Caused by Defective Medical Computer Programs’, (1981) 7 Am. J. L. and Med. 123, at p.125.

[64] Rosoff, above n.6 at text accompanying notes 3 to 7.

[65] Scheetz, above n.7, at p.1361.

[66] Ibid.

[67] Ibid.

[68] Ibid.

[69] Rosoff, above n.6 at text accompanying n.10 ff. Rosoff discusses this in relation to the perceptions of practitioners who may distrust the CPGs for this reason.

[70] See generally Rosoff, above n.7. The evidentiary value of CPGs or EBM is also affected by the use to which the evidence is put, whether in prosecuting claims where there has been a failure to use the CPGs, or as a defence where the CPGs have been followed. See also Hurwitz, above n.22 at pp.36 – 50.

[71] Ibid.

[72] Brannigan, above n.63 at p.125.

[73] Ibid.

[74] Ibid.

[75] Ibid at p.126.

[76] Ibid.

[77] Ibid; Lisa Dahm, ‘Article: Restatement (Second) of Torts section 324A: An innovative theory of recovery for patients injured through the use or misuse of health care information systems’, (1995) 14 J. Marshall J Computer & Info L 73, at p89.

[78] Brannigan, above n.63 at p.126.

[79] Nguyen, above n.12, at p.1192.

[80] Ibid.

[81] Ibid.

[82] Ibid.

[83] Ibid.

[84] Brannigan, above n.63 at p.131.

[85] Nguyen above n.12, at p.1192.

[86] See below text accompanying notes 189ff.

[87] Lamkin above n.11 at p.742.

[88] Ibid.

[89] Marnie Hammond, ‘The defect test in Part VA of the Trade Practices Act 1974 (Cth): defectively designed?’, (1998) 6 TLJ 29.

[90] Lamkin above n.11 at p.742; Brannigan, above n.63 at p.135.

[91] Brannigan, above n.63 at p.135; Lamkin, above n.11 at 743.

[92] Hammond, above n.89, at p.31; Brannigan, above n.63 at p.135.

[93] Ibid.

[94] Brannigan, above n.63 at p.135 – example of defective aeroplane door latches; Hammond, above n.89 at p.31, gives the example of toys made of a plastic which shatters.

[95] As noted by Hammond, above n.89, at p.31, a design defect may be a misnomer, as the design choice may be judged not to be a defect, when assessed by a court.

[96] Hammond, above n.89, at p.31.

[97] Lamkin above n.11 at p.743; Brannigan, above n.63 at p.136.

[98] Lamkin above n.11 at p.743; Brannigan, above n.63 at p.136 - 137.

[99] Where the program would be referred to as ‘bespoke’.

[100] 1992

[101] A search on LEXIS and on AustLII databases showed approximately 12 cases at the High Court or State Supreme Court level exercising federal jurisdiction. See also Jane Stapleton, ‘The conceptual imprecision of “strict” product liability’, (1998) 6 TLJ 260, at p.9 (n.17).

[102] The influence of the First Amendment has already been referred to. See generally the references cited above at n.14.

[103] Other defences, relating to mandatory compliance with standards and ‘finished goods’, are set out at s75AK(1)(b) and (d) TPA.

[104] Section 75AN, TPA.

[105] s4 TPA.

[106] No. N G485 of 1989 FED No. 20 Trade Practices.

[107] Re:ASX Operations Pty Limited and Australian Stock Exchange Limited And: Pont Data Australia Pty Limited No. G344 of 1990 FED No. 710 Trade Practices (1991) ATPR para 41-069; 97 ALR 513; 19 IPR 323; 27 FCR 460.

[108] No. N G627 of 1990 FED No. 580 Trade Practices.

[109] para. 123

[110] para. 123 -125

[111] para. 125

[112] ASX Operations, above n.107.

[113] (1983) 2 NSWLR 48.

[114] at p.54.

[115] Caslec above n.108, at para 34; Gail E Evans and Brian F Fitzgerald, ‘Information Transactions under UCC Article 2b: Ascendancy of Freedom of Contract in the Digital Millenium’, [1998] UNSWLJ 56, at http://www.austlii.edu.au/au/journals/UTSLR/1998/46.html (n 30).

[116] Caslec above n.108, at para 35.

[117] [1997] Fleet Street Reports 351.

[118] Evans and Fitzgerald, above n.115, at (n 32).

[119] See for example Appleby v Miller above n.19, at p.776; Brannigan, above n.63 at p.130.

[120] Arnold, above n.14 at p.792; Noah, above n.14 at p.1218.

[121] See Mintz, above n.14.

[122] Some of which, at least in respect of contractual matters, may now have been overtaken by UCITA.

[123] David W Lannetti, ‘Toward a revised definition of “product” under the Restatement (Third) of Torts: Product Liability’, (2000) 55 Bus Law 799, at (n 94).

[124] Lannetti, above n.123, at (n99).

[125] Ibid.

[126] Ibid.

[127] Mintz, above n.14, at p.617.

[128] Ibid.

[129] Saloomey v Jeppeson & Co., [1983] USCA2 531; 707 F.2d 671; 1983 US App. LEXIS 21800; Brocklesby v United States, [1985] USCA9 1290; 767 F.2d 1288 (9th Cir.1985); Fluor Corp. v Jeppeson & Co., 170 Cal. App. 3d 468.

[130] [1991] USCA9 605; 938 F2d 1033 (9th Cir 1991)

[131] at p.1035

[132] [1981] USCA9 479; 642 F2d 339 (9th Cir. 1981)

[133] Brannigan, above n.63 at p.132

[134] See above n.129.

[135] Fluor, above n.129.

[136] Fluor, above n.129 at p.476.

[137] Brannigan, above n.63 at p.131; Lannetti, above n.123, at (n.105).

[138] Brannigan, above n.63 at p.131.

[139] American Law Institute, ‘Restatement (Third) of Torts: Product Liability’, 1998, Comment §3.

[140] 188 F.Supp.2d 1264; 2002 US Dist LEXIS 3997, at 1279.

[141] The author could only find one Australian authority which resembled in any way the aeronautical charts cases. In ACCC v Hungry Jack’s Pty Ltd 1996 955 FCA 1, Hungry Jack’s was prosecuted successfully for failing to meet mandatory standards in relation to sunglasses which were being supplied as part of a promotion. The basis for the failure was that the glasses distorted the depth perception of the wearer. Had the cause of action been under Part VA, from an injury received in reliance on the information being passed through the glasses (rather than injury by the glasses), Australia might arguably have had its own precedent in this area.

[142] Brannigan, above n.63 at p.132.

[143] Ibid.

[144] Ibid.

[145] Brannigan, above n.63 at p.132.

[146] Ibid.

[147] Ibid.

[148] Robert Bradgate, ‘Beyond the Millennium - The Legal Issues: Sale of Goods Issues and the Millennium Bug’, 1999 (2) The Journal of Information, Law and Technology (JILT), http://www.law.warwick.ac.uk/jilt/99-2/bradgate.html

[149] Bradgate, above n.148.

[150] s.75AD(2).

[151] s.75AC(1).

[152] Hammond, above n.89, at p.62; cf Stapleton, above n.101, at p.13.

[153] Hammond, above n.89, at p.62.

[154] Ibid.

[155] Hammond, above n.89, at p.62.

[156] Ibid.

[157] Ibid.

[158] Ibid.

[159] Stapleton, above n.101 at p.13.

[160] Ibid.

[161] European Directive 85/374/EEC.

[162] Stapleton, above n.101 at p.14.

[163] American Law Institute, Restatement of the Law, Second, Torts, 1965.

[164] American Law Institute, Restatement (Third) of Torts:Product Liability, 1998, Comment d to §1; Hammond, above n.89, at p.26; Stapleton, above n.101.

[165] Hammond, above n.89, at p.26.

[166] American Law Institute, Restatement (Third) of Torts:Product Liability, above n.164.

[167] Hammond, above n.89, at p.26; American Law Institute, Restatement (Third) of Torts:Product Liability, n.164 supra.

[168] American Law Institute, Restatement (Third) of Torts:Product Liability, above n.164, §1.

[169] American Law Institute, Restatement (Third) of Torts:Product Liability, above n.164, §2(b).

[170] TPA s75AK(1)(c).

[171] References to design are references to ‘in house’ design – a discussion of liability of upstream providers is beyond the scope of this paper.

[172] Stapleton, n.101 supra at p.20.

[173] Stapleton, n.101 supra at p.22 – ‘even in the context of…merchantability ...from which the special product rules evolved, the legal norm was not expectations…the norm was the objective minimum standard of what would sell in that particular market’.

[174] TPA s 75AC(1).

[175] Stapleton, above n.101 at pp 22-23.

[176] Stapleton, above n.101 at p23.

[177] See TPA s75AC(3), which stipulates that no inference of defect is to be made merely because, after the time of supply, safer goods of the same kind were supplied.

[178] Stapleton, above n.101 at p25.

[179] Stapleton, above n.101 at p26.

[180] TPA s75AK(1)(c). The wording of the European Directive at Art 7(e) is in almost identical terms ‘that the state of scientific and technical knowledge at the time when he put the product into circulation was not such as to enable the existence of the defect to be discovered’.

[181] Stapleton, above n.101 at pp.30 – 32.

[182] Ibid.

[183] Case C-300/95 [1997] All ER (EC) 481.

[184] Stapleton, above n.101 at pp.30 – 38.

[185] [2000] FCA 1099 (9 August 2000), accessed on AustLII at http://www.austlii.edu.au/au/cases/cth/federal_ct/2000/1099.html

[186] Ibid at para 549.

[187] Stapleton, above n.101 at p.38.

[188] Stapleton above n.101 at p.10.

[189] See text accompanying above notes 64ff.

[190] See text accompanying above n.97 and 98.

[191] See text accompanying above n.166 and 168.

[192] American Law Institute, Restatement (Third) of Torts: Product Liability, above n.164 Reporters Notes to Comment d: IID (p.73).

[193] Ibid.

[194] Hammond, above n.89, at p.69, notes that the phrase ‘entitled to expect’ is ambiguous – it could mean that although persons generally actually know that a certain risk exists, they are entitled to expect that it should not exist.

[195] s75AC(4).

[196] 75AC(2)(a).

[197] See Pratt and Kuckelman, above n.21 at (1).

[198] See for example the facts of Demuth Development Corp v Merck & Co 432 F.Supp 990 (EDNY 1977), where injury resulted from the use of an encyclopaedia of chemicals that misstated their toxicity and was subsequently used in the calibration of equipment.

[199] Hammond, above n.89, at p75.

[200] Hammond, above n.89, at p.75. Hammond argues that such responsibility may be contrary to both deterrence and proper pricing in circumstances where the person with the greatest capacity for minimising any misuse is the person misusing the product (id at p.75). This does not address the situation likely in the context of medical expert systems, where the injured party may not be aware of the system’s misuse – if indeed the disintermediated output is a ‘misuse’.

[201] s75AK(1)(a).

[202] See text accompanying note 180 ff.

[203] See text accompanying above n.184 and 185.

[204] Hammond, above n.89, at p.41 ff.

[205] Ibid, at p.59.

[206] Ibid, at p.60.

[207] Ibid, at p.60; Nguyen above n.12 at p.1212.

[208] Section 41MA TGA.

[209] Commonwealth of Australia, Australian Medical Device Guidelines: An Overview of the New Medical Devices Regulatory System: Guidance Document 1. 2002.

[210] Ibid.

[211] Ibid.

[212] Guidance Document, above n.209 at (9).

[213] Ibid.

[214] Essential Principle 1.

[215] Essential Principle 2.

[216] Essential Principle 6.

[217] The ‘benefits to be gained from the use of a medical device for the performance intended by the manufacturer must outweigh any undesirable side effects arising from its use’.

[218] TGA s.41BH(2).

