University of New South Wales Faculty of Law Research Series
MEIRING de VILLIERS
John Landerer Faculty Fellow
University of New South Wales, School of Law
Sydney, NSW 2052
This paper will appear in the Vanderbilt Entertainment and Technology Law Journal (forthcoming).
Information security is an important and urgent priority in computer systems of corporations, government, and private users. The confidentiality, integrity, and availability of digital information are constantly threatened by malevolent software, such as computer viruses and worms. Virus detection software announces the presence of a virus in a program by issuing a virus alert. A virus alert presents two conflicting legal issues. A virus alert, as a statement on an issue of great public concern, merits protection under the First Amendment. The reputational interest of a plaintiff disparaged by a virus alert, on the other hand, merits protection under the law of defamation. The United States Supreme Court has struck a balance by constitutionalizing the common law of defamation in a series of influential decisions. The article focuses on two implications of these decisions, namely (i) the plaintiff must show that the defamatory statement is objectively verifiable as true or false, and (ii) the plaintiff must prove its falsity with convincing clarity. The defendant may prove the truthfulness of the statement as a defense. The crucial issues in these implications are truth, falsity, and verifiability.
The article analyzes the balance between the conflicting legal rights associated with a virus alert. It focuses on the legal meanings of truth, falsity and verifiability of a virus alert, and resolution of these issues in the context of the technology involved in a virus alert. The analysis merges perspectives from constitutional law, the law of defamation and information technology. Insights from theoretical computer science demonstrate, for instance, that the truth of a virus alert may be unverifiable. In such a case the alert would receive full constitutional protection under the Supreme Court's First Amendment defamation jurisprudence.
Computer security is an important and urgent priority in the information networks of corporations, government, and increasingly, private users, especially in the aftermath of the terrorist attacks of September 11, 2001. The interconnectivity and interdependence of computers on the Internet have made users increasingly vulnerable to cyber attacks emanating from a variety of wrongdoers, such as cyber criminals, terrorist groups, and perhaps even rogue nation states.
The most powerful weapon available to cyber attackers is a type of computer code generically known as malevolent software. Malevolent software is designed to disrupt the operation of computer systems. The most common of these rogue programs are the computer virus, and its common variant, the so-called worm. Viruses can be programmed to access and steal confidential information, corrupt and delete electronic data, and to monopolize computational resources that should be available to legitimate users.
The escalation of virus attacks on the Internet has prompted the development of advanced virus detection and elimination technologies. Virus detection software issues an alert when it detects virus-like behavior or properties in a program. The leading anti-virus technologies are sophisticated and effective, but virus detection errors, known as false positives and false negatives, nevertheless do occur. A false positive is an indication that a virus has been found when, in fact, there is none. A false negative is the converse, namely a failure to detect a virus when it is actually present.
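The two error types can be expressed as a simple classification of detector verdicts against ground truth. The sketch below is purely illustrative; the function name and boolean interface are hypothetical, not drawn from any actual anti-virus product.

```python
# Illustrative sketch: labeling virus-detection outcomes against
# ground truth. Names and interface are hypothetical.

def classify_outcome(virus_present: bool, alert_issued: bool) -> str:
    """Label a detector's verdict relative to the actual state."""
    if alert_issued and not virus_present:
        return "false positive"   # alert issued, but no virus exists
    if not alert_issued and virus_present:
        return "false negative"   # virus present, but no alert issued
    return "true positive" if alert_issued else "true negative"
```

On this labeling, `classify_outcome(False, True)` yields `"false positive"`, the scenario at issue when a virus alert disparages clean software.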
A virus alert tends to harm the reputation of a corporation whose software product has been tagged as viral. The maligned corporation may initiate a defamation action against the manufacturer of the virus detection software, as in the following case. In December 1992, a news story in the Wall Street Journal reported that a federal judge had ordered McAfee Associates, Inc., a producer of computer security software, to stop distribution of one of their products that falsely identified a virus in the software of a company, Imageline. Imageline sued McAfee Associates in U.S. District Court in Richmond, Virginia, alleging defamation, among other claims. The complaint alleged that the false positives scared customers away, hurting the company's reputation. McAfee declined comment other than stating that the suit was without merit.
This article analyzes the balance between two conflicting legal rights associated with a virus alert. A virus alert, as a statement on an issue of great public concern, merits protection under the First Amendment. The reputational interest of a plaintiff disparaged by a virus alert, on the other hand, merits protection under the law of defamation. The United States Supreme Court has struck a balance by constitutionalizing the common law of defamation with a series of decisions, starting in 1964 with New York Times Co. v. Sullivan. This article focuses on two implications of these decisions: (1) The plaintiff must show that the defamatory statement is objectively verifiable as true or false, and (2) The plaintiff must prove its falsity with convincing clarity. The defendant may prove the truthfulness of the statement as a defense. The crucial issues in these implications are truth, falsity, and verifiability, which are the focus of this article.
The analysis merges three perspectives, namely the Supreme Court's First Amendment defamation jurisprudence, the common law of defamation, and the technological environment in which a virus alert occurs. Analysis of the defamatory implication of a virus alert shows that a virus alert is substantially true if and only if the object identified as viral is capable of reproducing by executing an infection module. Conversely, a virus alert is false if the object either does not have an infection module, or if the infection module cannot execute, perhaps due to a programming or logical error. This result provides a rigorous and logical definition of the truthfulness of a virus alert as a defamatory statement. It also provides the forensic basis for proof of truthfulness.
The article further demonstrates the resolution of the issues of truth, falsity and verifiability, based on forensic analysis of (i) the technology that issued a virus alert, and (ii) the digital properties of the viral object. The analysis highlights a striking entanglement of law and technology. Insights from theoretical computer science show, for instance, that the truth of a virus alert, as defined in the article, may be undecidable under certain conditions. When these conditions apply, the alert would receive full protection under the Supreme Court's First Amendment defamation jurisprudence.
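The undecidability result traces to a diagonalization argument of the kind Fred Cohen used to show that perfect virus detection is impossible: any terminating detection procedure can be defeated by a program that consults the detector about itself and then does the opposite. The sketch below is a toy rendering of that argument with hypothetical names, not Cohen's formal construction.

```python
# Toy rendering of the diagonalization argument: for any claimed
# perfect detector, construct a "contrarian" program that behaves
# virally exactly when the detector pronounces it clean.
# All names are hypothetical and illustrative.

def make_contrarian(detector):
    def contrarian():
        # Spread (return True) iff the detector says "not a virus".
        return not detector(contrarian)
    return contrarian

def claimed_perfect_detector(program):
    # Any fixed, terminating verdict suffices for the argument.
    return False  # verdict: "not a virus"

p = make_contrarian(claimed_perfect_detector)
verdict = claimed_perfect_detector(p)   # detector says "clean"
behavior = p()                          # yet the program spreads
# Whichever verdict the detector returns, it is necessarily wrong.
```

The same contradiction arises for any detector that halts on the contrarian program, which is the intuition behind the formal undecidability result.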
The article is organized as follows. Section 2 discusses the elements of a defamation action. Section 3 reviews the evolution of the Supreme Court's First Amendment defamation jurisprudence. Section 4 discusses the principles of malevolent software. Section 5 analyzes the anatomy of a virus alert. Section 6 analyzes the truth, falsity and verifiability of a virus alert. Section 7 discusses and concludes.
The tort of defamation is an invasion of the interest of a person or corporation in its reputation and good name. A defamatory statement is a statement of fact about a person or business entity that tends to harm the plaintiff's reputation, respect or goodwill. It has, for instance, been held to be defamatory to say that a person is a credit risk, that a kosher meat dealer has sold bacon, and that a physician has advertised. Defamation law aims to protect the reputational interests of a plaintiff by allowing her to restore her good name, and to obtain compensation and redress for harm caused by defamatory statements. Courts have extended the protection of defamation law to the reputational interests of corporations. Although a corporation has no reputation in the personal sense of an individual, it has a reputation and standing in the business in which it operates. It can sue for defamatory statements related to matters affecting its business reputation and practices, such as financial soundness, management and efficiency.
The complexity of the tort of defamation is illustrated by the elements that have to be satisfied to establish a cause of action. One author has identified nine elements, while another lists twenty-three, each crucial to a defamation action. The defamation plaintiff must plead and prove the following elements:
A person or corporation can be defamed by more than written or spoken words. Defamation may occur by means of a picture, gesture, a loaded question, or an insinuation. The defamatory imputation may be indirect. Signing the plaintiff's name to false or bad authorship, for instance, has been held to be defamatory. Successful defamation actions have been filed by plaintiffs who were defamed by comments published on the Internet. In Stratton Oakmont, Inc. v. Prodigy Services Co., for instance, a defamatory statement was posted on a bulletin board maintained by Prodigy, an Internet service provider ("ISP"). The statement claimed that the plaintiffs had committed fraud in connection with an initial public stock offering. The plaintiffs filed a defamation action, and prevailed in the Supreme Court of New York. Congress subsequently passed legislation exempting Internet service providers from liability for on-line defamation. In Blumenthal v. Drudge, the United States District Court for the District of Columbia denied recovery to a plaintiff in a defamation action against an ISP, under the safe harbor provision for ISPs. The court observed that, although the ISP is immune from liability, the original author of the defamatory statements could potentially be held liable.
The defamatory comments in Internet defamation cases were published as computer-generated words and images, downloaded and displayed on user terminals. A virus alert is communicated to a recipient in similar format, namely a computer-generated message alerting a computer user to the presence of a virus. The analogy suggests that courts will likely recognize that a plaintiff can be defamed by a virus alert.
The defendant in a defamation action involving a virus alert may argue that computer generated communication, such as a virus alert, merits protection under the First Amendment. This position has received support among academic commentators, and courts have in fact recognized protection for specific categories of computer-generated output, such as digital simulation of sexual activity by minors that does not rise to the level of obscenity.
Professor Dan Burk has argued in favor of First Amendment protection for computer-generated output, by pointing out the analogy between computer output and First Amendment protected music. The scope of First Amendment protection of music naturally extends to the musical output of a piano roll or compact disk. The output generated by computers, as the digital analogue of the output of a piano roll, whether in the form of text, graphics or sound, should therefore receive equivalent protection. Professor Burk explains the analogy by noting that piano rolls have sequences and patterns of punched holes, which "constitute a type of machine-readable program ... which express(es) music by tripping the mechanism of a player piano."
Courts have been reluctant to extend constitutional protection to computer-generated output that does not advance the ideals of the First Amendment. In Commodity Futures Trading Commission v. Vartuli et al., the court contemplated the First Amendment protection of computer-generated trading commands. The computer system at issue required no independent intellectual effort from the user. The user was supposed to obey the computer's buy and sell signals literally and without question, for the system to work as marketed. The court stated that the purpose of the computer output was not to communicate information, but to prompt action without engaging the mind or will of the recipient. None of the ideals pursuant to which speech is normally accorded constitutional protection, such as the pursuit of truth, prevention of abuse of authority, and the functioning of a democracy, were relevant to this communication. The court concluded that the defendant who distributed this automatic trading system did not engage in constitutionally protected speech.
We conclude that a virus alert, as a statement on a significant public issue, merits First Amendment protection, but is also subject to the law of defamation.
Until 1964, defamation was outside the scope of First Amendment protection. Defamation law strongly favored the plaintiff, and the courts treated defamation virtually as a strict liability tort. The plaintiff had to merely allege falsity to establish a cause of action for defamation, while the defendant had to prove the truth. Liability and damages were presumed, and the plaintiff could recover without showing any actual harm. The defendant had several potential defenses, including truth, absolute and conditional privilege, and fair comment. In practice, however, these defenses were difficult to establish and pleading them sometimes exposed the defendant to further liability.
The constitutionalization of the common law of defamation, which started in 1964, dramatically reshaped the plaintiff's position, especially with respect to the burden of proof of truth and falsity, fault, and the opinion privilege.
In a landmark decision in New York Times v. Sullivan, the Supreme Court redefined the contours of libel litigation and eroded much of the plaintiff's favored position. In New York Times, a public official of Alabama filed a defamation suit against the New York Times, on the basis of an advertisement in the newspaper that alleged police misconduct towards the civil rights movement. The trial court found for the plaintiff, and awarded damages in the amount of $500,000. The judgment was affirmed by the Supreme Court of Alabama.
The United States Supreme Court reversed, holding that in defamation actions brought by public officials, the Constitution requires the plaintiff to show by clear and convincing evidence that the statement at issue was published with actual malice, a standard which the plaintiff had not met. The Court defined "actual malice" as publication of a statement with knowledge of its falsity or with reckless disregard of whether it was false. The New York Times Court further held that a plaintiff must prove actual malice by clear and convincing evidence, which is a stricter standard than the civil preponderance of the evidence, but less strict than the criminal standard of beyond a reasonable doubt. The rationale underlying New York Times was that the First Amendment should function "to assure unfettered interchange of ideas for the bringing about of political and social changes desired by the people." The Court concluded that imposing on the critic of official conduct the burden to prove the truth of his statement amounts to a form of self-censorship, contrary to the ideals of the First Amendment.
Although the Court did not specifically state that truth is an absolute defense in a defamation action against a public official, it is implied by the actual malice requirement. The logic of this conclusion is well articulated in Rinaldi v. Holt, Rinehart & Winston, Inc., where the court reasoned that placing the burden of proof of falsity on the plaintiff "follows naturally from the actual malice standard. Before knowing falsity or reckless disregard for truth can be established, the plaintiff must establish that the statement was, in fact, false." The Court's decision has also been interpreted by the Supreme Court, lower courts, and academic commentators as imposing on a public figure plaintiff the burden of proof of falsity.
In the same year that New York Times was decided, the Supreme Court considered the constitutionality of a Louisiana statute that allowed truth as a defense only for statements made "with good motives and for justifiable ends." This limitation on the truth defense appears to have been declared unconstitutional when the Court declared that "[t]ruth may not be the subject of either civil or criminal sanction where discussion of public affairs is concerned." Three years later, in Curtis Publishing Co. v. Butts, the Court extended the actual malice standard of New York Times to public figure defamation plaintiffs. Public figures are people who are in the public eye, but not public officials. The Court also held that a public figure plaintiff must prove malice by clear and convincing evidence.
In Gertz v. Robert Welch, Inc., the plaintiff, Elmer Gertz, filed a defamation suit against a magazine which had made several untrue statements about the plaintiff, including a charge that he was an official in a Communist organization which advocated the violent overthrow of the U.S. government. The plaintiff prevailed at trial and won a jury award. The trial court overturned the jury verdict, holding that the New York Times fault standard of actual malice applied to defamation actions involving matters of public concern, a standard which the plaintiff had not met. The Supreme Court disagreed, reasoning that, although the matter under litigation was one of public concern, Gertz was nevertheless a private figure, because he had not deliberately brought himself into the public eye. The Court found that a private figure's reputational interest merited greater protection than provided by the actual malice requirement. The Court concluded that the states may impose any standard of care other than strict liability in defamation actions involving private plaintiffs. A private individual must therefore prove that the defendant acted at least negligently.
In Time, Inc. v. Firestone, the Supreme Court indicated that truth would be a complete constitutional defense, in private as well as public figure cases. This position was consistent with the Restatement (Second) of Torts, as well as a number of lower court holdings. In Philadelphia Newspapers, Inc. v. Hepps, the Supreme Court confirmed the status of truth as a constitutional defense in private figure cases, at least where speech on matters of public interest is concerned, and a media defendant is involved.
The general consensus is that expression of opinion, as distinct from statement of fact, must be protected from liability under defamation law. Professor Robert Post comments, "opinions are in their nature debatable. To impose sanctions for 'false' opinions is to use the force of law to end this potential debate by imposing legally definitive interpretations of the cultural standards at issue." Although support for an opinion privilege is evidently strong, the legal distinction between actionable fact and protected opinion has proved to be difficult and elusive.
Historically, treatment of opinion in the law of defamation has gone through three stages: Common law "fair comment," largely prior to 1974, protection based on the Supreme Court's dictum in Gertz v. Robert Welch, Inc., between 1974 and 1990, and treatment based on the opinion in Milkovich v. Lorain Journal Co.
The privilege of "fair comment" was born of the courts' sensitivity to the dangers inherent in legal limitations on freedom of expression. The privilege was designed to ensure robust and open debate on public issues. The defendant could rely on a fair comment privilege, provided (i) his statement was on a matter of public interest, (ii) the statement was true or privileged, (iii) the statement was the actual opinion of the speaker, and (iv) the statement was made in good faith. The privilege turned out to be inadequate and impractical, and its scope was uncertain. A prediction, for instance, as to whether a given statement merited protection depended on factors that varied among jurisdictions.
The doctrine of fair comment was eventually superseded when the Supreme Court, in Gertz v. Robert Welch, Inc., hinted that the traditional common law distinction between fact and opinion may also trigger First Amendment concerns. Justice Powell, writing for the majority, elaborated on the fact-opinion distinction:
"Under the First Amendment there is no such thing as a false idea. However pernicious an opinion may seem, we depend for its correction not on the conscience of judges and juries but on the competition of other ideas. But there is no constitutional value in false statements of fact. Neither the intentional lie nor the careless error materially advances society's interest in uninhibited, robust, and wide-open debate on public issues."
Although technically a dictum, Justice Powell's statement rapidly assumed constitutional status in the judiciary. Subsequent Supreme Court opinions have mentioned the Gertz dictum with approval, and most state and federal courts have taken it to establish an absolute constitutional privilege for statements of opinion. A subsequent version of the Restatement of Torts stated that "[t]he common law rule that an expression of opinion of the ... pure type may be the basis of an action for defamation now appears to have been rendered unconstitutional by U.S. Supreme Court decisions."
The Gertz dictum did not provide any analytical means of distinguishing between an actionable assertion of fact and protected opinion, and post-Gertz courts struggled with the distinction. In an influential decision, Ollman v. Evans, the District of Columbia Circuit formulated a widely used test. Writing for the court, then-Judge Kenneth Starr articulated four factors which distinguish fact from opinion, namely the ordinary meaning of the language used, the verifiability of the statement, its full linguistic context, and its broader social context.
In Milkovich v. Lorain Journal Co., the Supreme Court revisited the opinion privilege. The plaintiff in Milkovich was a high school wrestling coach whose team had become involved in an altercation during a wrestling match. A hearing was conducted by the Ohio High School Athletic Association ("OHSAA") into the incident, during which Milkovich as well as H. Don Scott, the Superintendent of Maple Heights Public Schools, testified. Following the hearing, Milkovich was censured, and his team placed on probation and declared ineligible for the 1975 state tournament. The parents of several members of the wrestling team promptly sued OHSAA, claiming that they had been denied due process in the OHSAA proceedings. After a second hearing, in which Milkovich and Scott both again testified, the court overturned OHSAA's orders.
The next day, Theodore Diadiun, a sports columnist, wrote an article criticizing Milkovich's role in the altercation, as well as his testimony in the court proceeding. The heading for his column stated, "Maple beat the law with the 'big lie'". The column included a passage stating that the message for Maple Heights students is that "If you get in a jam, lie your way out." The column continued, asserting that "[a]nyone who attended the meet ... [knew] in his heart that Milkovich and Scott lied at the hearing after each having given his solemn oath to tell the truth." The tenor and language of the article clearly implied that Milkovich and Scott had perjured themselves, an indictable offense in the State of Ohio.
Milkovich and Scott both sued the journalist, as well as his newspaper, for defamation, claiming that the published article accused them of perjury. During the next fifteen years, the litigation wound its way through the Ohio courts. The odyssey ended for Scott when the Ohio Supreme Court, applying Ollman's four-factor analysis, held that Diadiun's column was constitutionally protected opinion. The Ohio Court of Appeals, in a separate action by Milkovich, concluded that it was bound by this precedent, and upheld a grant of summary judgment against Milkovich. The Ohio Supreme Court dismissed Milkovich's appeal for failing to raise a substantial constitutional issue, and Milkovich petitioned the U.S. Supreme Court.
The Supreme Court granted certiorari to consider constitutional issues raised by the Ohio courts. The specific issues before the Court were the argument that only statements of fact are actionable, and that the distinction between opinion and fact should be determined by the Ollman four factor test.
After summarizing the constitutional evolution of defamation law, the Court referred to its famous dictum in Gertz, which had been interpreted by numerous courts as providing First Amendment protection to any statement that could be labeled 'opinion.' The Court rejected this view, stating that such an interpretation would "ignore the fact that expressions of 'opinion' may often imply an assertion of objective fact." The opinion privilege should therefore not immunize speakers from liability by prepending the magic words, "in my opinion," to a statement. The Court illustrated with the comment, "In my opinion, Jones is a liar," which, although stated as opinion can nevertheless be just as damaging to Jones' reputation as the assertion, "Jones is a liar." The statement, "In my opinion, Jones is a liar," is actionable because it implies unstated defamatory facts underlying the author's statement.
The Court concluded that "the breathing space which freedoms of expression require in order to survive is adequately secured by existing constitutional doctrine without the creation of an artificial dichotomy between 'opinion' and fact." One such existing constitutional doctrine is the requirement that a plaintiff prove the falsity of a defamatory statement on a matter of public concern, articulated in Philadelphia Newspapers, Inc. v. Hepps. The Court reasoned that Hepps stands for the principle that "speech that does not contain a provably false factual connotation ... receives full First Amendment protection."
A second protective constitutional doctrine identified by the Court was that of a line of Supreme Court cases which protects 'loose, figurative, or hyperbolic' statements that cannot reasonably be understood as implying an assertion of objective fact about the plaintiff. The special status of these types of expression derives from the constitutional protection provided for parody and other imaginative commentary by decisions such as Hustler Magazine, Inc. v. Falwell, and Greenbelt Cooperative Publishing Association, Inc. v. Bresler, rather than any separate constitutional protection for opinion.
The Court rejected the dichotomy between fact and opinion, holding that the appropriate constitutional inquiry is not whether a statement constitutes fact or opinion, but whether it is capable of being proven true or false based on objective evidence. The Court concluded, "[A] statement of opinion relating to matters of public concern which does not contain a provably false factual connotation will receive full constitutional protection." To illustrate, the Court compared the statement, "In my opinion, Mayor Jones is a liar," with the statement, "In my opinion, Mayor Jones shows his abysmal ignorance by accepting the teachings of Marx and Lenin." The former implies a verifiable fact, and would thus be actionable. The latter statement would receive full constitutional protection.
The Milkovich Court thus established verifiability as the sole criterion that determines the constitutional protection of a statement on a matter of public concern. Furthermore, a statement is verifiable in the constitutional sense only if its truth or falsity is based upon objectively determined facts.
Having set the analytical stage, the Court turned to the facts of the case before it with a two-step analysis. First, the Court ascertained the defamatory implication of the article about Milkovich and Scott published by the magazine. The Court found that a reasonable factfinder could conclude that the article implied that petitioner Milkovich had perjured himself in a judicial proceeding. Second, the Court determined the verifiability of the defamatory implication. The Court decided that it was indeed verifiable, reasoning that "[a] determination of whether petitioner lied in this instance can be made on a core of objective evidence by comparing, inter alia, petitioner's testimony before the OHSAA board with his subsequent testimony before the trial court ... [Whether or not petitioner perjured himself] is certainly verifiable ... with evidence adduced from the transcripts and witnesses present at the hearing. Unlike a subjective assertion, the averred defamatory language is an articulation of an objectively verifiable event." The Court reversed and remanded the case. The Court declared that its decision struck an appropriate balance between the rights and guarantees of the First Amendment and the social values protected by the law of defamation.
Malevolent software is a term for computer code that is designed to disrupt the operation of a computer system. The most common of these rogue programs are the computer virus and its common variant, the worm. Other forms of malicious software include so-called logic bombs, Trojan horses, and trap doors.
The term "virus," Latin for "poison," was first formally defined by Dr. Fred Cohen in 1983, even though the concept originated in John von Neumann's studies of self-replicating mathematical automata in the 1940s. A computer virus can be described as a series of instructions (a program) that (i) infects a host program by attaching itself to the host, (ii) executes when the host is executed, and (iii) spreads by cloning itself, or part of itself, and attaching the clones to other host programs. In addition, many viruses have a so-called payload capable of harmful side-effects, such as deleting, stealing or modifying digital information. As the definition suggests, a typical computer virus consists of three basic modules or mechanisms, namely an infection module, payload trigger, and payload.
The infection module enables a virus to reproduce and attach copies of itself onto target hosts. This mechanism is the most salient technical property of a computer virus. The first task of the infection mechanism is to locate a prospective host program. Once a suitable host is found, the virus may take precautions, such as checking whether the host has already been infected. The virus then installs a copy of itself on the host. Once settled, the virus may take steps to protect itself from detection by changing its form. When the host program runs, control is passed to the resident virus code, allowing it to execute. The executing virus repeats the infection cycle by automatically replicating itself and copying the newly-created clones to other executable files on the system or network, and even across networks.
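The three-module anatomy described above can be mirrored in a harmless simulation. In the sketch below, "infection" merely flips a flag on in-memory records; nothing touches the file system, and all names are illustrative rather than drawn from any actual virus.

```python
# Harmless simulation of the three-module virus anatomy:
# infection module, payload trigger, and payload. "Infection"
# only sets a flag on in-memory records; no files are touched.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    infected: bool = False

def infection_cycle(hosts, trigger, payload):
    # Infection module: locate uninfected hosts and "install a copy".
    for host in hosts:
        if not host.infected:      # precaution: skip infected hosts
            host.infected = True   # simulated installation
    # Payload trigger module: fire the payload only if the
    # triggering condition is satisfied.
    if trigger():
        payload()                  # payload module

hosts = [Host("editor.exe"), Host("game.exe")]
effects = []
infection_cycle(hosts,
                trigger=lambda: True,
                payload=lambda: effects.append("payload ran"))
```

After one cycle, every host record carries the infected flag and the payload has fired once, mirroring the sequence the text describes: locate a host, check for prior infection, install, then execute the payload on its trigger condition.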
A virus may infect a computer or a network through several possible points of entry, including via an infected file downloaded from the Internet, through web browsing, through removable media such as writable compact disks and DVDs, via infected files in shared directories, via an infected e-mail attachment, or even through infected commercial shrinkwrapped software. Fast-spreading worms, such as CodeRed and Blaster, infect new hosts by exploiting network security vulnerabilities. Early viruses targeted the boot sectors of floppy disks, and this trend continued into the 1990s. Floppy disks are no longer widely used to share files, and viruses are increasingly transmitted through e-mail attachments. In a 1996 national survey, for instance, approximately 9 percent of respondents listed e-mail attachments as the means of infection of their most recent virus incident, while 71 percent put the blame on infected diskettes. In 2004, the corresponding numbers were 92 percent for e-mail attachments, and zero for diskettes.
E-mail is the most widely used medium of exchanging files and sharing information, but it has become a convenient and efficient vehicle for virus and worm propagation. Fast-spreading viruses, such as ExploreZip and Melissa, for instance, exploited automatic mailing programs to spread within and across networks. Melissa typically arrived in the e-mail inbox of its victim, disguised as an e-mail message with a Microsoft Word attachment. When the recipient opened the attachment, Melissa executed. First, it verified whether the recipient had the Microsoft Outlook e-mail program on the recipient's computer. If Outlook were present, Melissa would mail a copy of itself to the first fifty names in Outlook's address book, creating the appearance to the fifty new recipients that the user of the infected system had sent them a personal e-mail message. Melissa would then repeat the process with each of the fifty recipients of the infected e-mail message (provided they had Outlook), by automatically transmitting clones of itself to fifty more people. A Melissa attack frequently escalated and resulted in clogged e-mail servers and system crashes.
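The escalation follows from the fifty-fold fan-out alone. A back-of-the-envelope model, which assumes every recipient has Outlook and opens the attachment (an upper bound, not an empirical claim about actual Melissa infections), makes the point:

```python
# Back-of-the-envelope growth model for a Melissa-style mass mailer.
# Assumes every recipient opens the attachment and has an address
# book of 50 entries -- a deliberate overstatement used only to
# illustrate the order of magnitude of the escalation.

def messages_sent(rounds: int, fanout: int = 50) -> int:
    """Total messages generated after the given number of rounds."""
    total, newly_infected = 0, 1
    for _ in range(rounds):
        sent = newly_infected * fanout
        total += sent
        newly_infected = sent
    return total
```

Under these assumptions, three rounds already generate 50 + 2,500 + 125,000 = 127,550 messages, which is why mail servers clogged within hours of an outbreak.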
In addition to replicating and spreading, viruses are often programmed to perform specific harmful actions. The module that implements this functionality is known as the payload. A payload can be programmed to perform destructive operations such as corrupting, deleting and stealing information. A payload may also create a backdoor that allows unauthorized access to the infected machine. Some payload effects are immediately obvious, such as a system crash, while others are subtle, such as transposition of numbers and alteration of decimal places. Subtle effects tend to be dangerous because their presence may not be detected until substantial harm has been done. Payloads are often relatively harmless and do no more than entertain the user with a humorous message, musical tune, or graphical display.
The payload is triggered when a specific condition is satisfied. Triggering conditions come in a variety of forms, such as a specified number of infections, a certain date, or specific time. The Friday-the-13th virus, for instance, only activated its payload on dates with the cursed designation. More recently, the first CodeRed worm alternated between continuing its infection cycle, remaining dormant, and attacking the official White House Web page, depending on the day of the month. In the simplest case, a payload executes whenever the virus executes, without waiting for a trigger event. Viruses do not always have a payload module, but even viruses without a payload may harm their environment by consuming valuable computing resources.
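The gating of a payload behind a triggering condition can be sketched as follows. This is a minimal illustration of the date-based trigger logic described above, using a hypothetical Friday-the-13th style check; the function name and structure are illustrative, not taken from any actual virus.

```python
from datetime import date

def trigger_satisfied(today: date) -> bool:
    """Hypothetical Friday-the-13th style trigger: the payload
    activates only when the date is both the 13th and a Friday."""
    return today.day == 13 and today.weekday() == 4  # Monday=0, so Friday=4

# On any other date the virus continues replicating without
# executing its payload.
assert trigger_satisfied(date(2023, 10, 13))      # a Friday the 13th
assert not trigger_satisfied(date(2023, 6, 13))   # a Tuesday the 13th
```

Real triggers, as the text notes, may instead count infections, check a specific time, or vary behavior by day of the month, as CodeRed did.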
A worm is a special type of virus. It is similar to a virus in most respects, except that it does not need to attach itself to a host program to replicate and spread. Like viruses, worms often carry destructive payloads, but even without a destructive payload a fast-spreading worm can do significant harm by slowing down a system through the network traffic it generates.
The original worm was implemented by scientists at Xerox PARC in 1978, but the so-called Morris Worm, created by Cornell University graduate student Robert T. Morris, was the first to become a household name. The 1988 Morris worm used a security flaw in a UNIX program to invade and shut down much of the Internet. By some accounts, this event first woke the world up to the dangers of computer security vulnerabilities, such as the buffer overflow flaw that enabled the Morris worm to paralyze the Internet.
Courts resolve the truth or falsity of a defamatory statement by considering factors such as the context of the statement and the information on which it was based. In the case of a virus alert, context and information depend on the technology involved. A virus alert is generated by a computer program, and is based on an assessment of digital patterns in other programs. The truth or falsity of an alert depends on the properties of the technologies that generated the alert. This section discusses the most commonly used anti-virus technologies, and the mechanisms by which they generate a virus alert.
Technical anti-virus defenses come in four varieties, namely signature scanners, activity monitors, integrity checkers, and heuristic techniques. Scanners detect known viruses by identifying patterns that are unique to each virus strain. Activity monitors look out for virus-like activity in a computer. Integrity checkers sound an alarm when detecting suspicious modifications to computer files. Heuristic techniques combine virus-specific scanning with generic detection, providing a significantly broadened range of detection.
Scanners are the most widely used anti-virus defense. A scanner reads executable programs and searches for the presence of byte patterns known as "signatures." A virus signature consists of patterns of hexadecimal digits embedded in the viral code that are unique to each virus strain. These signatures are created by human experts at institutions such as IBM's High Integrity Computing Laboratory, who scrutinize viral code and extract sections of code with unusual byte patterns. The selected byte patterns are collected in a signature database and used in subsequent scans. A scanner detects a virus in a program by comparing the program to its database of signatures, and announcing a match as a possible virus infection.
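The core of signature scanning can be sketched in a few lines. The virus name and the hexadecimal pattern below are invented for illustration; real signature databases hold thousands of entries extracted by analysts as described above.

```python
# Hypothetical signature database: virus names mapped to the unique
# byte patterns extracted from their code.
SIGNATURES = {
    "ExampleVirus.A": bytes.fromhex("deadbeef90904e71"),  # made-up pattern
}

def scan(program: bytes) -> list[str]:
    """Return the names of all known signatures found in the program.
    An empty list means no match, i.e. no alert is issued."""
    return [name for name, pattern in SIGNATURES.items() if pattern in program]

clean = bytes(100)
infected = clean + bytes.fromhex("deadbeef90904e71") + clean
assert scan(clean) == []
assert scan(infected) == ["ExampleVirus.A"]
```

Production scanners use far more efficient multi-pattern matching than this linear search, but the logic, compare against a database and report any match, is the same.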
An ideal virus signature would give neither false negatives nor false positives. In other words, it should always identify the virus when present and never trigger an alarm when it is not. Although this ideal is unachievable in practice, anti-virus researchers pursue optimal solutions within practical constraints. The IBM High Integrity Computing Laboratory, for instance, has developed an optimal statistical signature extraction technique that examines all sections of code in a virus, and selects the byte strings that optimize the tradeoff between false positives and negatives.
Scanners are easy to use, but they are limited to detecting known signatures. A scanner's signature database has to be continually updated as new viruses are discovered and their signatures catalogued, a burdensome requirement in an environment where new viruses appear daily. Modern antivirus vendors have attempted to lighten the burden on users by distributing signature updates directly to their customers via the Internet.
False negatives are rare when scanning for viruses with known signatures, but false positives may arise when a signature has been chosen imprudently. A scan string selected as a signature for a given virus strain may, for instance, also be present in benign objects. This pattern may then match code that is actually a harmless component of a legitimate program. Furthermore, a short and simple pattern will be found too often in innocent software, and produce many false positives. Viruses with longer and more complex patterns, on the other hand, will less often give a false positive, but at the expense of more false negatives. As the number of known viruses grows, the scanning process will inevitably slow down as a larger set of possibilities has to be evaluated.
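The effect of signature length on false positives admits a back-of-envelope estimate, under the idealized assumption that scanned code behaves like uniformly random bytes (real code is not random, so this understates matches in practice, but the trend holds).

```python
def expected_chance_matches(scanned_bytes: int, sig_len: int) -> float:
    """Expected number of purely coincidental matches of a sig_len-byte
    signature when scanning scanned_bytes of idealized random code:
    each offset matches with probability 256 ** -sig_len."""
    return scanned_bytes * 256.0 ** (-sig_len)

# Scanning a gigabyte of code: a 4-byte signature is expected to fire
# spuriously about once in four gigabytes, while a 16-byte signature
# essentially never matches by chance.
gb = 2 ** 30
assert expected_chance_matches(gb, 4) == 0.25
assert expected_chance_matches(gb, 16) < 1e-20
```

This is why, as the text notes, short patterns produce many false positives while longer, more specific patterns trade them for false negatives when the virus mutates.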
Activity monitors are resident programs that monitor activities in a computer for behavior commonly associated with viruses. Suspicious activities include operations such as attempts by a program to delete information and mass mail copies of itself. When suspicious activity is detected, the monitor may simply halt execution and alert the user, or take action to neutralize the activity. Activity monitors, unlike scanners, do not need to know the signature of a virus to detect it. Their function is to recognize general suspicious behavior, not the precise identity of the culprit.
The greatest strength of activity monitors is their ability to detect unknown virus strains, but they also have significant weaknesses. They can only detect viruses that are actually executing, possibly after substantial harm has been done. A virus may, furthermore, execute before the monitor code does, and do harm before the monitor is able to activate and detect it. A virus may also be programmed to alter monitor code on machines that do not have protection against such modification.
A further weakness of activity monitors is the lack of unambiguous rules defining "suspicious" activity. This may result in false alarms when an activity monitor picks up legitimate activities which resemble virus-like behavior. Recurrent false alarms may ultimately lead users to ignore warnings from the monitor. False negatives may result when an activity monitor fails to recognize viral activity which does not fit the monitor's programmed definitions.
An integrity verifier applies the electronic equivalent of a tamper-proof seal to protected programs, and issues an alert when the seal has been broken, presumably by the intrusion of a virus.
An integrity verification program generates a code, known as a "checksum," for protected files. A checksum may, for instance, be an arithmetic calculation based on the total number of bytes in a file, the numerical value of the file size and its creation date. A checksum is periodically recomputed and compared to the original. When a virus infects a file, it usually modifies the contents, resulting in a change in the checksum. When the recomputed value does not match the original, the file is presumed to have been modified since the previous inspection, and a warning is issued.
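The checksum-and-compare cycle described above can be sketched as follows. This is a minimal illustration; it uses a cryptographic hash rather than the simple arithmetic checksum the text describes, and the file name and contents are hypothetical.

```python
import hashlib

def checksum(data: bytes) -> str:
    """Tamper-evident seal for a file's contents (a cryptographic hash
    here, though simpler arithmetic checksums are also used)."""
    return hashlib.sha256(data).hexdigest()

# Baseline pass: record a checksum for each protected file.
baseline = {"cmd.exe": checksum(b"original program bytes")}

def verify(name: str, current: bytes) -> bool:
    """Periodic re-check: recompute and compare against the baseline.
    A mismatch means the file changed since the last inspection,
    and an alert would be issued."""
    return baseline[name] == checksum(current)

assert verify("cmd.exe", b"original program bytes")
assert not verify("cmd.exe", b"original program bytes" + b"viral code")
```

Note that the mismatch branch only establishes that the file changed, not why, which is exactly the source of the false positives discussed below.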
The advantage of integrity checking is that it detects most instances of viral infection, as infection usually alters the target file. Its main drawback is that it tends to generate false alarms, as a file can change for "legitimate" reasons unrelated to virus infection. Integrity checking software therefore presents a high likelihood of a false positive, given the general difficulty of determining whether a program change is legitimate or due to a virus. Integrity checking works best on static files, such as system utilities, but it is, of course, an inappropriate technique for files that naturally change frequently, such as Word documents.
Intelligent analysis of file changes may reduce the incidence of false positives. A sophisticated integrity checker may, for instance, take into account the nature and location of a file change in determining whether it is viral. Most integrity checkers include the option to exclude certain files or directories from monitoring.
A fourth category of virus detectors uses heuristic detection methods. Heuristic rules solve complex problems "fairly well" and "fairly quickly," but less than perfectly. Virus detection is an example of a complex problem that is amenable to heuristic solution. It has been proven mathematically that it is impossible to write a virus detection program that is capable of consistent perfect detection. Heuristic virus detection methods accept such limitations and attempt to achieve a heuristic solution, namely a detection rate that is below the (unachievable) perfect rate, but represents an optimal tradeoff between detection accuracy, speed and computational expense.
Heuristics detect novel viruses by examining the structure and logic of executable code for evidence of virus-like behavior. Based on this examination, the program makes an assessment of the likelihood that the scrutinized program constitutes a virus, by tallying up a score. The heuristic scanner examines a file, assigns a weight to each virus-like feature it encounters, and calculates a score based on the weights. If a score exceeds a certain threshold, the scanner classifies the program as malicious code, and notifies the user. Instructions to send an e-mail message with an attachment to every listing in an address book, for instance, would add significantly to the score. Other high-scoring routines include capabilities to replicate, to hide from detection, and to execute some kind of payload.
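The weight-and-threshold scoring described above can be sketched as follows. The feature names, weights, and threshold are invented for illustration; commercial products tune such parameters empirically against large virus and clean-file collections.

```python
# Hypothetical per-feature weights for virus-like behaviors.
WEIGHTS = {
    "mass_mail_address_book": 50,   # e-mails itself to every contact
    "self_replication": 40,         # copies itself into other programs
    "hides_from_detection": 30,     # stealth or anti-debugging tricks
    "carries_payload": 20,          # code that executes a harmful action
}
THRESHOLD = 60  # scores above this are reported as likely viral

def heuristic_score(features: set[str]) -> int:
    """Sum the weights of the virus-like features found in a file."""
    return sum(WEIGHTS.get(f, 0) for f in features)

def classify(features: set[str]) -> bool:
    """Issue an alert when the score exceeds the threshold."""
    return heuristic_score(features) > THRESHOLD

assert classify({"mass_mail_address_book", "self_replication"})  # score 90
assert not classify({"carries_payload"})                         # score 20
```

The threshold placement drives the tradeoff discussed next: lowering it catches more novel viruses at the cost of more false alarms, and raising it does the reverse.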
A heuristic assessment is necessarily less than perfect and will inevitably provide false positives and negatives. A low scanner threshold will result in false alarms. A scanner with a threshold that is set too high, on the other hand, will fail to detect viruses that are malicious but that do not exactly match the unrealistically tight specifications, resulting in false negatives. As in the case of activity monitors, the term "suspicious" is ambiguous. Many legitimate programs, including even some anti-virus programs, perform operations that resemble virus-like behavior. Nevertheless, state-of-the-art heuristic scanners achieve a 70-80 percent success rate at detecting unknown viruses.
A heuristic scanner typically operates in two phases. The scanning algorithm first narrows the search by identifying the location most likely to contain a virus. It then analyzes the code from that location to determine its likely behavior upon execution. A static heuristic scanner compares the code from the "most likely" location to a database of byte sequences commonly associated with virus-like behavior. The algorithm then decides whether to classify the code as viral.
A dynamic heuristic scanner uses CPU emulation. It loads suspect code into a virtual computer, emulates its execution and monitors its behavior. Because it is only a virtual computer, virus-like behavior can be safely observed in what is essentially a laboratory setting, with no need to be concerned about real damage. Although dynamic heuristics can be time-consuming due to the relatively slow CPU emulation process, they are sometimes superior to static heuristics. This will be the case when the suspect code is obscure and not easily recognizable as viral in its static state, but clearly reveals its viral nature in a dynamic state.
A major advantage of heuristic scanning is its ability to detect viruses before they execute and cause harm. Other generic anti-virus technologies, such as behavior monitoring and integrity checking, can only detect and eliminate a virus based on suspicious behavior, usually after execution. Heuristic scanning is capable of detecting novel virus strains whose signatures have not yet been catalogued. Such strains cannot be detected by conventional scanners. Heuristic scanners are also capable of detecting polymorphic viruses, a complex family of viruses that complicate detection by changing their signatures from infection to infection.
The explosive growth in new virus strains has made reliable detection and identification of individual strains very difficult and costly, making heuristics more important and increasingly prevalent. Commercial heuristic scanners include IBM's AntiVirus boot scanner and Symantec's Bloodhound technology.
The United States Supreme Court revolutionized its First Amendment defamation jurisprudence with decisions in New York Times Co. v. Sullivan, Gertz v. Robert Welch, Inc., Philadelphia Newspapers, Inc. v. Hepps, and Milkovich v. Lorain Journal Co. We focus on two implications of these decisions, namely (i) the plaintiff must show that the defamatory statement is objectively verifiable as true or false, and (ii) the plaintiff must prove its falsity with convincing clarity.
The main issues in these implications are truth, falsity, and verifiability, which are the focus of this section.
The defamation plaintiff must plead and prove the falsity of the statement at issue. Absolute truth is a complete defense to a defamation charge, but a defendant does not have to prove the literal truth of the defamatory statement to prevail. An effective defense can rely on the substantial truth doctrine. The substantial truth doctrine states that "[t]ruth will protect the defendant from liability even if the precise literal truth of the defamatory statement cannot be established." Minor inaccuracies are immaterial, as long as the "gist" or "sting" of the statement is true, regardless of who has the burden of proof and what standard of proof applies. Justice (then Judge) Scalia provided the following illustrative example in Liberty Lobby v. Anderson. Suppose a newspaper reports that a person has committed thirty-five burglaries, while he has actually committed only thirty-four. Although the statement is factually incorrect, it would not be actionable. It is substantially true, because its gist can be justified, namely that the person is a habitual burglar.
The Supreme Court formulated the substantial truth test as whether the libel as published "would have had a different effect on the mind of the reader from that which the pleaded truth would have produced." Falsehoods that do not harm the plaintiff's reputation more than the full and accurate truth, are therefore not actionable. In Justice Scalia's illustration, for instance, it would make no difference to a listener whether a habitual burglar had committed thirty-four or thirty-five burglaries.
The plaintiff has considerable control over the focus of a court's substantial truth analysis. Under common law pleading rules, the plaintiff must allege the defamatory meaning of the defendant's statement, namely that aspect of the statement which harmed his reputation. If the court determines that the statement is not capable of bearing the meaning asserted by the plaintiff, it will dismiss the complaint. If, on the other hand, the court determines that the interpretation is reasonable, it will hand the issue to the jury. The jury determines whether the defamatory meaning of the statement was so understood by the recipient, either correctly, or mistakenly but reasonably. If the defendant's language is vague, the jury must determine whether the claimed meaning was in fact the recipient's interpretation. The jury must then decide the truth or falsity of the defamatory meaning.
The following case illustrates adjudication of the substantial truth issue. In Golden Bear Distributing Systems v. Chase Revel, Inc., a magazine printed an article on the activities of separate companies, all of them operating under the name "Golden Bear Distributing Systems," in different states. The article reported a lawsuit for investment fraud brought against Golden Bear of California, and described legal difficulties plaguing Golden Bear of Utah. The article also referred to the marketing strategy of Golden Bear of Texas, noting its similarity to strategies of the troubled Golden Bear franchises. The article did not state that the Texas franchise was guilty, or even accused, of any wrongdoing. However, when the article appeared, Golden Bear of Texas rapidly lost business and was forced into bankruptcy. Golden Bear of Texas sued the magazine successfully for libel and was awarded damages at trial. The judgment was affirmed on appeal.
The Fifth Circuit observed that all the individual statements in the magazine article concerning Golden Bear of California's legal problems, as well as the reference to the marketing strategy of Golden Bear of Texas, were literally true. However, the court accepted the plaintiff's pleading that the magazine article falsely implied that Golden Bear of Texas had engaged in misconduct.
The defendant argued that its article was substantially true, but the court did not agree. The factual allegations of the defendants, while literally true, did not justify the defamatory implication, namely that Golden Bear of Texas had engaged in misconduct similar to that of Golden Bear of California. The plaintiff succeeded in proving the falsity of the defamatory gist of defendant's communication, while the defendant's pleaded truth failed to justify it.
The aim of this subsection is to define and analyze the legal meaning of the concept "substantial truth of a virus alert." A virus alert literally states that "a program is infected with a specific type of malevolent code." Its truth as a technical statement is not controversial. Its truth as a defamatory statement and in a constitutional sense, however, is a legal concept which requires analysis of the defamatory meaning of the alert. The analysis concludes that a virus alert is substantially true if and only if the detected object is capable of executing an infection module. We reason as follows.
A software vendor's most vital intangible asset is its reputation for secure software, especially in the security-conscious aftermath of the terrorist attacks of September 11, 2001. A vendor of software, especially software destined for the networks of the national information infrastructure and other sensitive applications, is dependent on a reputation for security to survive and remain in business. The most significant threat to information security is the proliferation of computer viruses and worms on the Internet. A vendor's reputation for secure software may therefore be irreparably harmed by a virus alert that indicates the presence of such malevolent code in its software product.
Modern information security has three basic components, namely (1) Confidentiality, (2) Integrity, and (3) Availability. Confidentiality refers to the prevention of unauthorized access to sensitive information. Integrity refers to the protection of digital data from unauthorized change, such as corruption or deletion. Availability refers to procedures and safeguards ensuring that authorized users have access to information, when needed and in convenient form. A computer virus threatens all components of information security through its capability to replicate and spread. The infection module also serves to export and multiply the effect of a payload, if the virus has a payload. Most viruses do not have a payload, though, and a payload is not essential for code to be classified as a virus, or to do harm.
Viral code threatens confidentiality of information. A virus or worm can be programmed to access and steal confidential information on a system. The W32/Bugbear@mm family of viruses, for instance, was designed to exploit vulnerabilities in the Outlook e-mail program to gain access to machines, steal confidential information using a keylogging function, and interfere with antivirus software. It also created a backdoor for hackers to take over the machine and misappropriate passwords and confidential financial information. Some members of the Bugbear family specifically targeted financial institutions.
Viruses and worms often use spoofed e-mail and Web sites to deceive users into disclosing confidential information, a technique known as "phishing." The W32/Mimail.I@mm worm, for instance, displayed dialogues, purportedly from PayPal, requesting financial information from unwitting users. The stolen information would then be encrypted and transmitted to the attacker.
Malicious code threatens the integrity of information. Viral payloads can be programmed to delete, modify or corrupt information on infected computers. In January, 2003, a young Welshman, Simon Vallor, was sentenced to two years imprisonment for releasing fast-spreading viruses via e-mail that were designed to corrupt data on the hard drives of infected computers. Viruses often corrupt information by replicating and spreading alone, without the help of a payload. Leading anti-virus researcher Peter Szor writes, "Virus replication has many side-effects. This includes the possibility of accidental data loss when the machine crashes due to a bug in the virus code or accidental overwriting of a part of the disk with relevant data. Virus researchers call this kind of virus a no payload virus." The so-called Stone virus, for instance, was a "no payload virus" which destroyed data by causing machines to crash, merely by prolific replication and spreading.
Malicious code threatens availability of information. Fast-spreading viruses make infected systems unavailable to legitimate users by monopolizing valuable computational resources. A recent denial of service attack on the Port of Houston, for instance, made crucial navigating data on the port's Web service temporarily unavailable to shipping pilots and mooring companies, creating substantial collision and other risks. The Internet worm W32/CodeRed and its successors were deployed to exploit a vulnerability in Microsoft's Internet Information Services (IIS) web servers to create a global denial of service effect on the Internet. The W32/Slammer worm overloaded Internet routers and slowed down networks worldwide, making it difficult to use e-mail. The paralyzing effect of Slammer on the Internet also caused ATM failures and interfered with elections. The Sasser worm scanned so aggressively for new target computers that it caused networks to become congested and slow down. In Australia, Sasser disrupted Railcorp trains and brought down the computer system of a major Australian financial institution, Westpac Bank. In the UK, Sasser caused flight delays and brought down the computerized mapping systems of several coastguard stations.
In conclusion, the reproductive capability of a virus is the essence of its threat to information security and, indirectly, the plaintiff's reputation. Proof of a reproductive capability is therefore sufficient to prove the truth of the defamatory meaning, hence the substantial truthfulness, of a virus alert.
Proof of the truthfulness of a defamatory allegation must be as precise and specific as the allegation itself. An allegation that a plaintiff embezzled money cannot be justified by proving that the plaintiff breached a fiduciary duty, and a charge that a plaintiff committed a burglary cannot be justified by proving that the plaintiff committed a murder.
The common law position seems unduly strict. A defendant may, with apparent justification, argue that an allegation that the plaintiff had embezzled money does not harm the plaintiff's reputation substantially more than the exact truth, namely that the plaintiff had breached a fiduciary duty. However, Professor Smolla explains that "[t]he relative strictness of the common law position on substantial truth when specific defamatory charges are made can be justified on the grounds that more detailed charges of misconduct often tend to create greater reputational injury because the existence of detail tends to lend credibility to the accusation." A defamer who uses specificity to strengthen the credibility of his story, must pay the price, namely be required to prove the truth with evidence as precise as the allegation itself.
Courts distinguish between inaccuracies where the allegation differs factually from the truth, and inaccuracies where the allegation is factually similar to the truth, but errs in insubstantial details. A statement would be substantially true if it is factually similar to the proven truth and differs from the truth by no more than insubstantial details. An allegation that a plaintiff embezzled money is substantially false if, in fact, the plaintiff only breached a fiduciary duty. Justice Scalia provided an example of a newspaper report that a person committed thirty-five burglaries, while he actually committed only thirty-four. The report is substantially true, because it differs from the factual truth only in an insubstantial detail.
The Supreme Court has analyzed the substance of a communication by looking at the mental impact of the communication on the average recipient. The test is whether the allegation "would have had a different effect on the mind of the reader from that which the pleaded truth would have produced." We now analyze the doctrines governing substantial truth in the context of a virus alert.
Consider a virus alert based on a type of malevolent code without an executable infection module, such as a logic bomb. At a high level of abstraction, a virus alert based on a logic bomb makes a factually correct statement, namely that the detected object constitutes malevolent code. The virus alert is therefore factually similar to the proven truth, but the specificity of the alert communicates additional information. When an alert identifies an object as a virus, it implies that the object not only constitutes malevolent code, but that it also contains an executable infection module. If the presence of an executable infection module in malevolent code is material to the average computer user, the virus alert would be substantially false if it identifies an object without such a module as viral.
The harm threatened by malevolent code without a reproductive capability, such as a logic bomb, is limited by its inability to spread beyond the system where it was planted. A virus, on the other hand, threatens the confidentiality, integrity, and availability of information far beyond its origin. Dr. Fred Cohen provides a dramatic illustration: "Sitting at my Unix-based computer in Hudson, Ohio, I could launch a virus and reasonably expect it to spread through 40% of the Unix-based computers in the world in a matter of days. That's dramatically different from what we were dealing with before viruses." Dr. Cohen's statement was published more than a decade ago. Today, viruses spread much faster, and there is every indication that virus transmission will continue to accelerate. The 2003 ICSA report remarks, for instance, that whereas it took the early file viruses months to years to spread widely, subsequent macro viruses took weeks to months, mass mailers took days, Code Red took approximately 12 hours, and Klez spread around the world in 2.5 hours. Whereas code such as a logic bomb can destroy data worth, say an amount D, releasing a virus to do the same job can cause this harm several times over by spreading into N systems, causing damage of magnitude NxD, where N can be very large. This distinction is clearly material to computer users.
A virus alert therefore requires proof of a reproductive capability to be substantially true. Proof that an object is a logic bomb does not justify calling it a virus, just as proof that a plaintiff committed a single homicide does not justify calling him a mass murderer. Identifying an object without a reproductive capability as a virus materially mischaracterizes it and significantly misstates the nature as well as the degree of harm it is capable of causing. Furthermore, the specificity of a virus alert, as a warning that implies a risk that could escalate into an electronic tsunami as opposed to a localized threat, strengthens the credibility and impact of the communication. Under common law evidentiary standards and the Supreme Court's mental impact test, proof of the truth of a virus alert should therefore include proof of an executable infection module.
The analysis in this section has provided two major conclusions. (1) Proof of a reproductive capability is sufficient to prove the truth of the defamatory meaning, hence the substantial truthfulness, of a virus alert. (2) A virus alert is true only if the detected object contains an executable infection module. Proof of a reproductive capability is therefore necessary to prove the substantial truthfulness of a virus alert.
A virus alert is therefore substantially true if, and only if, the detected object contains an executable infection module. Put differently, the presence of an executable infection module is necessary and sufficient for a virus alert to be substantially true. A plaintiff may prove the falsity of a virus alert by demonstrating that the detected object either does not have an infection module, or, that it has an infection module that cannot execute, perhaps due to a programming or logical error in the module's code.
Courts resolve the truth or falsity of a defamatory statement by considering factors such as the context of the statement and the information on which it was based. In the case of a virus alert, context and information are creatures of technology. A virus alert is generated by a computer program, and is based on an assessment of digital patterns in other programs. The truth or falsity of an alert depends on the properties of the technologies that generated the alert.
We have established that a virus alert is substantially true only if the identified code is capable of executing an infection module. This capability can often be conclusively verified by analyzing the code. Analysis of the logic of the host program, the viral code, and its infection module, as well as the absence of programming and logical errors, may reveal that the host will be run, control passed to the viral code, and the infection module triggered. In such a technically complex but uncontroversial case, the virus alert would be provably true, and the defendant should prevail on a truth defense.
A more complicated situation arises when the detected virus cannot execute on the system in which it was found, even though it could execute on another system. Such viruses are known as "latent" or "dormant." A PC-specific program infected by a PC-specific file virus, for instance, can normally not execute on a UNIX server or a Macintosh. It may nevertheless be found in these "foreign" environments, perhaps in an FTP directory or as part of an e-mail attachment. The dormant virus may later "wake up" when it is transferred to a system on which it could execute, perhaps by e-mail or through file-sharing. This kind of transmission is known as heterogeneous virus transmission.
The truth or falsity of a virus alert based on a dormant virus may be controversial. If heterogeneous transmission of the dormant virus were possible, its infection module would in principle be executable, and the virus alert would be substantially true. The condition "transfer to an environment where the virus can execute" can be interpreted as a triggering condition that has to be satisfied before the infection module executes; because this trigger can be satisfied, the alert would be substantially true. The dormant virus cannot do harm in its current environment, but it is nevertheless a security threat. By analogy, a firearm may justifiably be described as "dangerous," even if currently in the possession of a responsible person, if it could easily fall into unsafe hands.
A dormant virus should be distinguished from a virus that cannot execute because of a logical or programming defect in its code. Neither the dormant virus of the previous example nor the defective "virus" can execute in its current state, but both can be transformed into a state in which they can execute: the dormant virus can be transferred to a new environment, and the defective virus can be debugged. If an alert based on the dormant virus is substantially true, the superficial similarity of the two situations may seem to suggest that an alert based on the defective virus must also be substantially true. This reasoning, however, ignores the fact that the truth of an alert must be evaluated with respect to the object on which the alert was based, and as of the time when the alert was communicated. In the dormant case, the virus on which the alert was based is identical to the virus that can execute in the new environment. The virus alert is therefore correct when it identifies the dormant object as viral. In the case of the defective virus, in contrast, the virus on which the alert was based is not the same as the virus that could eventually execute. Furthermore, at the time the alert was communicated, the object of the communication was not executable and thus not a virus. The executable virus is a corrected version of the defective object on which the alert was based. A virus alert based on the original defective virus would therefore be false.
False positives are comparatively rare in virus scanners, but they may occur if a virus signature is not well chosen. A signature that also occurs in legitimate code, for instance, may cause a scanner to misdiagnose a program as infected. One type of false positive, known as a "ghost positive," is generated when remnants of a virus are incorrectly detected and reported as viral. Ghost positives may occur when an antivirus program attempts to remove a virus from an infected file but leaves part of the virus code intact. The remnant code may contain a virus signature, even though the disembodied remnant cannot execute. Another anti-virus program may subsequently scan the file, detect the remnant, and report it as viral. Ghost positives also occur when a computer user installs two or more scanners simultaneously on the same computer. One of the scanners may fail to encrypt its virus signatures, storing them in plain text, exactly in the format in which they would appear in an infected file. The other scanner may then identify these signatures as a viral presence on the computer. The detected objects cannot execute because they are inactive disembodied signatures. A virus alert based on a ghost positive would therefore be substantially false.
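The ghost-positive mechanism can be illustrated with a minimal sketch of a naive signature scanner. All byte patterns, file contents, and function names here are hypothetical stand-ins: a second scanner's unencrypted signature database happens to contain the same byte pattern the scanner matches on, so the non-executable database file is flagged as infected.

```python
# Hypothetical stand-in for a real virus signature.
SIGNATURE = b"\xde\xad\xbe\xef\x13\x37"

def scan(file_bytes: bytes) -> bool:
    """Naive scanner: report a file as 'infected' if it contains the pattern."""
    return SIGNATURE in file_bytes

infected_file = b"host code..." + SIGNATURE + b"...viral body"
plaintext_sig_db = b"scanner-B signature db: " + SIGNATURE  # cannot execute
clean_file = b"host code only"

print(scan(infected_file))     # True: genuine detection
print(scan(plaintext_sig_db))  # True: a ghost positive
print(scan(clean_file))        # False
```

The second result is the ghost positive described above: the match is correct as pattern recognition, but the detected object has no executable infection module, so an alert based on it would be substantially false.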
Generic virus detectors issue alerts when they detect viral evidence in a program. Activity monitors and heuristic detectors monitor a network for suspicious activities, while integrity checkers look out for unauthorized changes to files. These detectors frequently issue false alerts, because their decision rules tend to be ambiguous. A file can change for "legitimate" reasons unrelated to virus infection, resulting in a false alert by an integrity checker. A heuristic detector can set its detection threshold too low and allow innocent code to trigger an alert. An alert generated in this way would, likewise, be false.
In conclusion, the truth/falsity issue of a virus alert must be resolved in the context of the technology that generated it, namely the detection technology that issued the alert, and the digital properties of the detected object.
The previous section discussed forensic verification of the truth of a virus alert. In the illustrative cases, verification was determinate: The virus alert was demonstrably either true or false. This will not always be the case. The computational logic of malicious code and the mathematical algorithm controlling its operation may be such that (i) a virus detector may classify code as viral due to the presence of, perhaps, a virus signature, yet (ii) forensic analysis of the code may show that executability of the infection module is indeterminate. As a result, truth or falsity of the alert is also indeterminate. The following stylized program illustrates this phenomenon and its legal implications.
Consider a virus which has inserted its code at the end of a host program, as shown in the illustration, below. This is a so-called "appending virus." After appending itself to the host, the virus code inserts an algorithm at the beginning of the host. The purpose of the algorithm is to transfer control to the virus when a prespecified mathematical condition is satisfied. Another jump routine at the end of the viral code returns control to the host program.
When the computer attempts to execute the host, the algorithm runs first. The algorithm generates a random even number greater than 2, and tests whether it can be written as the sum of two prime numbers. If it cannot be written as the sum of two primes, control is passed to the virus. The algorithm also passes the even number it has generated to the virus code. The virus code verifies that the even number cannot be written as the sum of two primes, and then executes, replicates, and attaches copies of itself to other host files. It also executes a payload which erases the host computer's hard disk.
If, on the other hand, the algorithm is satisfied that the even number it has generated can be written as the sum of two primes, it does not pass control to the virus code, but removes the virus from the host program, passes control back to the host program, and modifies itself to directly transfer control to the host from then on. In other words, the virus is effectively removed from the host.
ILLUSTRATION: AN UNDECIDABLE VIRUS
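The stylized trigger logic described above can be sketched as follows. This is an illustrative sketch only: the function names and the bound on the random number are assumptions, and the viral code and host code are stand-in callbacks rather than real malicious code.

```python
import random

def is_prime(k: int) -> bool:
    """Trial-division primality test (adequate for this illustration)."""
    return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))

def has_goldbach_pair(n: int) -> bool:
    """True if even n > 2 can be written as the sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def run_infected_host(viral_code, host_code):
    """Stylized algorithm prepended to the host by the appending virus."""
    n = random.randrange(4, 10**6, 2)   # a random even number greater than 2
    if not has_goldbach_pair(n):
        viral_code(n)   # trigger satisfied: control passes to the virus
    else:
        host_code()     # virus removes itself; host executes normally
```

Whether `viral_code` can ever receive control is precisely the question of whether a Goldbach counterexample exists; forensic analysis of the code alone cannot settle it.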
Suppose this virus is detected before the host is invoked, the algorithm executed, or the virus triggered. It is detected, perhaps, by a signature scanner that observes the signature embedded in its main body, or by a heuristic scanner that recognizes virus-like behavior inherent in the code, such as evidence of an infection module. The vendor of the allegedly infected software believes that its quality control and business practices have been called into question by the virus alert, and sues the vendor of the antivirus software for defamation. The plaintiff must prove the falsity of the alert, namely that the detected object cannot execute its infection module. The defendant will prevail if she can prove the substantial truth of the alert.
The virus alert would be substantially true if the algorithm could generate a number capable of triggering the infection module. This would be the case only if there exists such a number, namely an even number greater than 2 that cannot be written as the sum of two primes. Conversely, the virus alert would be false if there does not exist such a number.
The truth or falsity of the virus alert depends on a mathematical conjecture, known as Goldbach's Conjecture. The Conjecture states that every even number greater than 2 can be written as the sum of two primes. The even number 36, for instance, can be written as 17 + 19. If Goldbach is correct, then the algorithm cannot possibly generate the kind of number that would allow execution of the viral code and its infection module. In this case, the gist of the virus alert would be provably false, and the falsity issue decided in favor of the plaintiff.
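The claim about 36 can be checked directly. A short routine (a sketch, with hypothetical function names) enumerates all Goldbach partitions of an even number:

```python
def is_prime(k: int) -> bool:
    """Trial-division primality test."""
    return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))

def goldbach_partitions(n: int):
    """All ways to write even n > 2 as p + q with p <= q and both prime."""
    return [(p, n - p) for p in range(2, n // 2 + 1)
            if is_prime(p) and is_prime(n - p)]

print(goldbach_partitions(36))  # [(5, 31), (7, 29), (13, 23), (17, 19)]
```

The last pair, (17, 19), is the decomposition cited in the text. Exhaustive checks like this can confirm the Conjecture for any particular even number, but no finite computation can verify it for all of them.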
Suppose, on the other hand, that Goldbach is incorrect. This means that there must exist at least one even number that cannot be written as the sum of two primes. If the algorithm fortuitously generated this number, its logic would transfer control to the virus, which would then execute, replicate, spread, and fire its payload. Although this number (and others that violate Goldbach, if they exist) may not be generated in any single run of the algorithm, such numbers could in principle be generated. The gist of the virus alert would be true in this case, and the defendant would have a constitutional defense.
The truth or falsity of the alert appears to be verifiable with mathematical precision. If Goldbach were correct, the gist of the virus alert would be false; if Goldbach were incorrect, the gist would be true. However, Goldbach's Conjecture is an unresolved problem in mathematics. No one has (at the time of writing) proven its truth or falsity. The truth of the virus alert is therefore undecidable. Under the constitutional standard articulated by the Supreme Court in Milkovich v. Lorain Journal Co., statements that are "not objectively verifiable, or that do not contain a provably false connotation, are entitled to full First Amendment protection." The virus alert in this example is therefore First Amendment protected speech, and a defamation claim based on it should be resolved in favor of the defendant. Prior to 1964 and the constitutionalization of defamation law, when the defendant had the burden of proving truth, the plaintiff would have prevailed on this issue.
This article has analyzed the balance between two conflicting legal rights associated with a virus alert, namely the rights and guarantees of the First Amendment and the social values protected by the law of defamation. Perhaps the most striking insight of the analysis is the role of technology in shaping the contours of this balance. Although the article focuses on false positives issued by virus detectors, the analysis can be adapted to false positives in other contexts, such as erroneous mammogram, HIV, sobriety, polygraph, drug, or paternity test results. Similar constitutional, reputational, and technological issues would likely play a key role in these contexts. Plaintiffs have litigated false positives in drug tests and medical diagnoses, claiming negligence, defamation, and emotional distress, but none of the reported defamation cases have considered the truth, falsity, and verifiability issues analyzed in this article.
 See, e.g., Richard D. Pethia, Cyber Security - Growing Risk from Growing Vulnerability, 25 June 2003, Testimony before the House Select Committee on Homeland Security [Stating that "as critical infrastructure operators strive to improve their efficiency and lower costs, they are connecting formerly isolated systems to the Internet to facilitate remote maintenance functions and improve coordination across distributed systems. Operations of the critical infrastructures are becoming increasingly dependent on the Internet and are vulnerable to Internet based attacks."] See also Dorothy E. Denning, INFORMATION WARFARE AND SECURITY (Addison Wesley, 1999), at 17 ["Through increased automation and connectivity, the critical infrastructures of a country become increasingly interdependent. Computers and telecommunications systems, for example, support energy distribution, emergency services, transportation, and financial services."]
 A computer virus can be described as a program that (i) infects a host program by attaching itself to the host, (ii) executes when the host is executed, and (iii) spreads by cloning itself and attaching the clones to other host programs. Viruses often also have a so-called "payload" capable of harmful side-effects, such as deleting, stealing or modifying information. See FREDERICK B. COHEN, A SHORT COURSE ON COMPUTER VIRUSES (Wiley, 1994, 2d ed.), at 1-2; DOROTHY E. DENNING and PETER J. DENNING, INTERNET BESIEGED (ACM Press, New York, 1998), at 75. A worm is similar to a virus in most respects, except that it does not need to attach itself to a host program to replicate and spread. Like viruses, worms may carry destructive payloads. See generally, John F. Schoch and Jon A. Hupp, The "Worm" Programs - Early Experience with a Distributed Computation, COMM. ACM, Vol 25, No 3, March 1982, 172.
 Junda Woo, False Alarms Over a Virus, WALL STREET JOURNAL, December 29, 1992, Section B, Page 6, Column 2. See, also, John Burgess, Viruses: An Overblown Epidemic?; Suit Against a Calif. Firm Highlights Computer Industry Debate, THE WASHINGTON POST, Page F1, December 30, 1992. See, also, Jeffrey O. Kephart et al., Blueprint for a Computer Immune System, IBM Thomas J. Watson Research Center Report, at 11.
 New York Times v. Sullivan, 376 U.S. 254 (1964).
 RESTATEMENT (SECOND) OF TORTS, § 559 (1977). Defamation is the broader term for libel and slander. Libel is concerned with written or printed words, or more generally, embodiment of the defamatory message in tangible or permanent form. Slander constitutes oral defamation.
 Jessica R. Friedman, Defamation, 64 FORDHAM L. REV. 794 (1995); Restatement (Second) of Torts § 558, 559 (1977) ["A communication is defamatory if it tends so to harm the reputation of another as to lower him in the estimation of the community or to deter third persons from associating or dealing with him."]. The Code of the Australian state of Queensland defines "defamation" as "Any imputation concerning any person, or any member of his family, whether living or dead, by which the reputation of that person is likely to be injured, or by which he is likely to be injured in his profession or trade, or by which other persons are likely to be induced to shun or avoid or ridicule or despise him." Queensland Code, § 366.
 Neaton v. Lewis Apparel Stores, 1944, 267 App. Div. 728, 48 N.Y.S.2d 492.
 Braun v. Armour & Co., 1939, N.Y. 514, 173 N.E. 845.
 Gershwin v. Ethical Publishing Co., 1937, 166 Misc. 39, 1 N.Y.S.2d 904.
 Milkovich v. Lorain Journal Co. et al., 497 U.S. 1, 12 ["Defamation law developed not only as a means of allowing an individual to vindicate his good name, but also for the purpose of obtaining redress for harm caused by such statements."]
 See PROSSER AND KEETON ON THE LAW OF TORTS, § 111, at 779; Restatement (Second) of Torts § 561 and comment b (1977); Brown & Williamson Tobacco Corp. v. Jacobson, 827 F.2d 1119 (7th Cir. 1987), cert. denied, 485 U.S. 993 (1988).
 See Golden Palace, Inc. v. NBC, 386 F.Supp. 107, 109 (D.D.C. 1974); Di Giorgio Fruit Corp. v. AFL-CIO, 215 Cal. App. 2d 560, 570-72, 30 Cal. Rptr. 350, 355-56 (1963); Reporters' Association of America v. Sun Printing & Publishing Association, 1906, 186 N.Y. 437, 79 N.E. 710.
 See Di Giorgio Fruit Corp. v. American Federation of Labor, etc., 1963, 215 Cal.App.2d 560, 30 Cal.Rptr. 350.
 See Diplomat Elec., Inc. v. Westinghouse Elec. Supply Co., 378 F.2d 377, 382-83 (5th Cir. 1967); Aetna Life Ins. Co. v. Mutual Benefit Health & Accident Ass'n., 82 F.2d 115, 120 (8th Cir. 1936); Maytag Co. v. Meadows Mfg. Co., 45 F.2d 299, 302 (7th Cir 1930); Di Giorgio Fruit Corp. v. AFL-CIO, 215 Cal. App. 2d 560, 570-72, 30 Cal. Rptr. 350, 355-56 (1963); 3 RESTATEMENT OF TORTS § 561 (1976); Milo Geyelin, Corporate Mudslinging Gets Expensive, WALL ST. J., Aug. 4, 1989, at B1. The law of injurious falsehoods is concerned with false statements that harm economic interests but not harm to the corporate reputation. PROSSER AND KEETON ON THE LAW OF TORTS (5th ed., West Publ. Co., 1984), § 128, at 962-63.
 Nelon, Media Defamation in Oklahoma: A Modest Proposal and New Perspectives - Part I, 34 OKLA. L. REV. 478 (1981).
 Keeton, Defamation and Freedom of the Press, 54 TEX. L. REV. 1221, 1233-35 (1976).
 See R. Smolla, THE LAW OF DEFAMATION (2d ed.), § 1:34; Jessica R. Friedman, Defamation, 64 FORDHAM L. REV. 794, 794 (1995).
 "Public plaintiff" includes a public official, New York Times v. Sullivan, 376 U.S. 254 (1964); a public figure, Curtis Publishing Co. v. Butts; Associated Press v. Walker, 388 U.S. 130 (1967); and a limited purpose public figure. Limited purpose public figures are people who "have thrust themselves to the forefront of particular public controversies in order to influence the resolution of the issues involved." Gertz v. Robert Welch, Inc., 418 U.S. 323, 345 (1974). The "public official" category is fairly wide, and includes, for instance, government employees. See 1 Slade R. Metcalf & Leonard M. Niehoff, Rights and Responsibilities of Publishers, Broadcasters and Reporters 1.50, at 1-109.
 New York Times v. Sullivan, 376 U.S. 254, 279-80 (1964).
 See Matherson v. Marcello, 473 N.Y.S. 2d 998, 1000 (N.Y. App. Div. 1984).
 Presumed damages may be allowed, even if special damages cannot be proven, provided the defamation falls into a "per se" category. A statement that the plaintiff had committed a crime, for instance, would be defamation per se. See Restatement (Second) of Torts 570 (1977).
 W. Keeton et al., PROSSER & KEETON ON THE LAW OF TORTS, § 115, at 845 (5th ed. 1984).
 W. Keeton et al., PROSSER & KEETON ON THE LAW OF TORTS, § 115, at 824-32 (5th ed. 1984).
 The Restatement supports a broad interpretation of what may constitute defamatory speech. See Restatement (Second) of Torts § 565, Comment b at 170 ["To be defamatory under the rule stated in this Section, it is not necessary that the accusation or other statement be by words. It is enough that the communication is reasonably capable of being understood as charging something defamatory."] See also English Defamation Act 1996, s 17(1) [Stating that a defamatory statement means "words, pictures, visual images, gestures or any other method signifying meaning."]
 See, e.g., Locke v. Benton & Bowles, 1937, 165 Misc. 631, 1 N.Y.S.2d 240, reversed 253 A.D. 369, 2 N.Y.S.2d 150; Ben-Oliel v. Press Publishing Co., 1929, 251 N.Y. 250, 167 N.E. 432.
 See, e.g., Sperry Rand Corp. v. Hill, 1st Cir.1966, 356 F.2d 181, certiorari denied 384 U.S. 973, 86 S.Ct. 1859, 16 L.Ed.2d 683; Carroll v. Paramount Pictures, S.D.N.Y. 1943, 3 F.R.D. 47.
 63 U.S.L.W. 2765 (N.Y. Sup. 1995).
 Communications Decency Act ("CDA") of 1996, 47 U.S.C. § 230 (1998 Supp.)
 992 F.Supp. 44 (D.D.C. 1998).
 992 F.Supp. 44, 51 (D.D.C. 1998).
 Academic commentators have argued that computer output is an expression of functions and operations performed by a computer, analogous to spoken and written expressions of the human mind, and thus within the scope of First Amendment protection. See Freed, Products Liability in the Computer Age, 12 FORUM 461 (1977); Walker, The Expanding Applicability of Strict Liability Principles: How is a "Product" Defined?, 22 TORT & INS. L. J. 1, at 12-15.
 See Norman T. Deutsch, Professor Nimmer Meets Professor Schauer (and Others): An Analysis of "Definitional Balancing" as a Methodology for Determining the "Visible Boundaries of the First Amendment", 39 AKRON L. REV. 483, 524 ["[T]he distribution of descriptions of or other depictions of sexual conduct [by minors], not otherwise obscene, which do not involve live performances or other visual reproductions of live performances, retains First Amendment protection. This includes computer generated images."], citing New York v. Ferber,  USSC 169; 458 U.S. 747, 764-65 (1982), and Ashcroft v. Free Speech Coalition,  USSC 1379; 535 U.S. 234 (2002) [Invalidating federal statute that banned computer-generated child pornography.] (Emphasis added.); Norman Andrew Crain, Commentary: Bernstein, Karn and Junger: Constitutional Challenges to Cryptographic Regulations, 50 ALA. L. REV. 869, 887 ["[E]xpression does not lose First Amendment protection just because it interacts with a machine or ... a computer."] However, obscene works, including computer-generated images involving obscenity, are not First Amendment protected. See Ashcroft v. Free Speech Coalition,  USSC 1379; 535 U.S. 234, 240 (2002).
 See Ward v. Rock Against Racism,  USSC 161; 491 U.S. 781, 790 ["Music, as a form of expression and communication, is protected under the First Amendment."]
 Dan L. Burk, Patenting Speech, 79 TEX. L. REV. 99, 115.
 228 F.3d 94, 111, 112.
 Id., at 111. The court issued a caveat: "Statements in the form of orders or instructions are strikingly common ... We do not think and do not mean to suggest by our holding today that such communications can claim talismanic immunity from constitutional limitations. ... Any assertion that a statement like or unlike the 'buy' or 'sell' instructions issued by a ... computer is not fully protected by the Constitution should be subjected to careful and particularized analysis to insure that no speech entitled to First Amendment protection fails to receive it." Id., at 112.
 See, e.g., Lewis v. Hayes, 177 Cal. 587, 590, 171 P. 293, 294 (1918).
 A defendant could invoke a fair comment privilege by proving that "(i) the statement concerned a matter of legitimate public interest, (ii) the facts upon which the statement was based were either stated or known to the reader, (iii) the statement was the actual opinion of the defendant, and (iv) the statement was not motivated solely by the purpose of causing harm to the plaintiff." Rodney W. Ott, Fact and Opinion in Defamation: Recognizing the Formative Power of Context, 58 FORDHAM L. REV. 761, 763.
 Franklin & Bussel, The Plaintiff's Burden in Defamation: Awareness and Falsity, 25 WM. & MARY L. REV. 825 (1984), n. 6 [Noting that, "[i]n addition to the difficulty with regard to proof, an assertion in the pleadings that the statement was true may expose the defendant to further liability. If he should fail to prevail on that issue, the court may consider the pleading to be a republication of the libel."]
 376 U.S. 254 (1964).
 272 Ala. 656, 144 So. 2d 25, 52 (1962), rev'd, 376 U.S. 254 (1964).
 New York Times, 376 U.S. 254, 279-80 (1964).
 376 U.S., 279-80. In a subsequent opinion, the Court described "reckless disregard" for the truth as entertaining serious doubts about the truth of the statement before making it. St. Amant v. Thompson, 390 U.S. 727, 731 (1968).
 376 U.S. 254, at 269-70.
 376 U.S. 254, at 279.
 See R. Smolla, THE LAW OF DEFAMATION (2d ed.), § 5:4.
 42 N.Y.2d 369, 380, 366 N.E.2d 1299, 1306, 397 N.Y.S.2d 943, 950 (1977).
 See Garrison v. Louisiana, 379 U.S. 64, 74 (1964) ["We held in New York Times that a public official might be allowed the civil remedy only if he established that the utterance was false or in reckless disregard of whether it was false or true."]; Herbert v. Lando, 441 U.S. 153, 176 (1979); Cox Broadcasting Corp. v. Cohn, 420 U.S. 469, 490 (1975). The New York Times Court itself clearly stated that true speech can never be the basis of liability. New York Times, 376 U.S., at 271 ["Authoritative interpretations of the First Amendment guarantees have consistently refused to recognize an exception for any test of truth ... and especially one that puts the burden of proving truth on the speaker."]
 See, e.g., Goldwater v. Ginzberg, 414 F.2d 324, 338 (2d Cir. 1969); Beckham v. Sun News, 289 S.C. 28, 30, 344 S.E.2d 603, 604 (1986).
 See Franklin & Bussel, The Plaintiff's Burden in Defamation: Awareness and Falsity, 25 WM. & MARY L. REV. 825, 851-54 (1984); Kathryn Dix Sowle, Defamation and the First Amendment: The Case for a Constitutional Privilege of Fair Report, 54 N.Y.U. L. REV. 469, 488 (1979); Linda Kalm, Note: The Burden of Proving Truth or Falsity in Defamation: Setting a Standard for Cases Involving Nonmedia Defendants, 62 N.Y.U.L. REV. 812 (October, 1987.)
 Garrison v. Louisiana, 379 U.S. 64 (1964).
 379 U.S. 64, 74 (1964).
 388 U.S. 130, 18 L. Ed. 2d 1094, 87 S.Ct. 1975 (1967).
 Butts, 388 U.S. 130, 155 (1967).
 Butts, 388 U.S. 130, 164.
 418 U.S. 323 (1974).
 Gertz, 418 U.S., at 343-46.
 Gertz, 418 U.S. 323, 347 (1974).
 424 U.S. 448 (1976).
 Restatement (Second) of Torts § 581 A (1977) ["One who publishes a defamatory statement of fact is not subject to liability for defamation if the statement is true."]
 See, e.g., Corabi v. Curtis Publishing Co., 441 Pa. 432, 450, 273 A.2d 899, 908 (1971); For other cases, see R. Smolla, THE LAW OF DEFAMATION (2d ed.), § 5:8, n. 4.
 475 U.S. 767 (1986).
 See Bose Corp. v. Consumers Union of United States, Inc., 466 U.S. 485, 503, 10 Media L. Rep. (BNA) 1625 (1984) (Expressing the need to protect "the freedom to speak one's mind ... [as] an aspect of individual liberty - and thus a good unto itself" and the societal value of "the common quest for truth and the vitality of society as a whole.")
 See Post, The Constitutional Concept of Public Discourse: Outrageous Opinion, Democratic Deliberation, and Hustler Magazine v. Falwell, 103 HARV. L. REV. 603 (1990).
 For a bibliography on the pre-1990 literature on the fact-opinion distinction, see Rodney W. Ott, Fact and Opinion in Defamation: Recognizing the Formative Power of Context, 58 FORDHAM L. REV. 761, n. 1.
 418 U.S. 323 (1974).
 497 U.S. 1, 111 L. Ed. 2d 1, 110 S. Ct. 2695 (1990). See, also, R. Sack, SACK ON DEFAMATION, § 4.2.1.
 See Hill, Defamation and Privacy under the First Amendment, 76 COLUM. L. REV. 1205, 1227-36 (1976). Cited in Sack & Baron, Libel, Slander and Related Problems, Practicing Law Institute, New York City, 2d ed., 1994), § 4.2.2.
 See e.g. Salinger v. Cowles, 195 Iowa 873, 889-90, 191 N.W. 167, 174 (1922).
 418 U.S. 323 (1974).
 Gertz, 418 U.S. 323, 339-40 (1974).
 See Hustler Magazine, Inc. v. Falwell, 485 U.S. 46, 51 (1988); Bose Corp. v. Consumers Union of United States, Inc., 466 U.S. 485, 504 (1984).
 SACK ON DEFAMATION, § 4.2.3 ["By 1990 every federal circuit and the courts of at least thirty-six states and the District of Columbia had held that opinion is constitutionally protected because, according to Gertz, '[u]nder the First Amendment there is no such thing as a false idea.'"]; Ollman v. Evans, 750 F.2d 970, 975 ["Gertz's implicit command thus imposes upon both state and federal courts the duty as a matter of constitutional adjudication to distinguish facts from opinions in order to provide opinions with the requisite, absolute First Amendment protection."]; Potomac Valve & Fitting, Inc. v. Crawford Fitting Co., 829 F.2d 1280, 1286 (4th Cir. 1987) ("The constitutional distinction between fact and opinion is now firmly established in the case law of the circuits.")
 RESTATEMENT (SECOND) OF TORTS, § 566, and comment c (1977). The Restatement defined "pure opinions" as those that "do not imply facts capable of being proved true or false."
 750 F.2d 970, 979, 11 Media L. Rep. 1433 (D.C. Cir. 1984), cert. denied, 471 U.S. 1127, 11 Media L. Rep. (BNA) 2015 (1985).
 Ollman, 750 F.2d 970, 979-85.
 497 U.S. 1, 19 (1990).
 Milkovich, 497 U.S. 1, 7-10 (1990).
 Scott v. News-Herald, 496 N.E.2d 699, 709 (Ohio 1986) ["[I]t has been decided as a matter of law, that the article in question was constitutionally protected opinion."]
 Milkovich v. News Herald, 46 Ohio App. 3d 20, 23, 545 N.E.2d at 1324.
 Milkovich, 497 U.S. 1, 10 (1990).
 Milkovich, 497 U.S. 1, 9 (1990).
 Gertz v. Robert Welch, Inc., 418 U.S. 323, at 339-40 ["Under the First Amendment there is no such thing as a false idea. However pernicious an opinion may seem, we depend for its correction not on the conscience of judges and juries but on the competition of other ideas. But there is no constitutional value in false statements of fact. Neither the intentional lie nor the careless error materially advances society's interest in 'uninhibited, robust, and wide-open' debate on public issues."]
 497 U.S. at 18 ["We do not think this passage from Gertz was intended to create a wholesale defamation exemption for anything that might be labeled 'opinion.'"]
 497 U.S. at 18.
 Milkovich, 497 U.S. 1, 13 (1990).
 475 U.S. 767 (1986).
 497 U.S., at 20.
 Milkovich, at 20, 21 (quoting Hustler Magazine, Inc. v. Falwell, 485 U.S. 46, 50 (1988)). Hustler Magazine, Inc. v. Falwell, 485 U.S. 46, 57 [stating that a parody of Rev. Falwell was not actionable, because it was "not reasonably believable."]
 485 U.S. 46, 99 L. Ed. 2d 41, 108 S. Ct. 876 (1988).
 398 U.S. 6, 26 L. Ed. 2d 6, 90 S. Ct. 1537 (1970).
 Milkovich, 497 U.S. 1, 20, 21 (1990). The third rule identified by the Court was the fault requirements of New York Times, Butts, and Gertz. The fourth rule was the appellate review standard established in New York Times Co. v. Sullivan, and reaffirmed in Bose Corp. v. Consumers Union of United States, 466 U.S. 485 (1984), which requires an appellate court to make an independent review of the finding of actual malice, when that standard is required. Milkovich, 497 U.S. 1, 21 (1990).
 Milkovich, 497 U.S. 1, 20 (1990). Citing Philadelphia Newspapers, Inc. v. Hepps, 475 U.S. 767 (1986).
 Milkovich, 497 U.S. 1, 21, 22 (1990).
 Milkovich, 497 U.S. 1, 22-23 (1990).
 A logic bomb is "a section of code, preprogrammed into a larger program, that waits for a trigger event to perform a harmful function. Logic bombs do not reproduce and are therefore not viral, but a virus may contain a logic bomb as a payload." Peter Szor, THE ART OF COMPUTER VIRUS RESEARCH AND DEFENSE (2005), at 30.
 A Trojan horse is a program that appears to be beneficial, but contains a harmful payload. Peter Szor, THE ART OF COMPUTER VIRUS RESEARCH AND DEFENSE (2005), at 663.
 A trapdoor, or backdoor, is a function built into a program or system to allow unauthorized access to the system. Peter Szor, THE ART OF COMPUTER VIRUS RESEARCH AND DEFENSE (2005), at 643. See, also, DOROTHY E. DENNING and PETER J. DENNING, INTERNET BESIEGED (ACM Press, New York, 1998), at 75-78.
 FRED COHEN, COMPUTER VIRUSES. PhD dissertation, University of Southern California (1985).
 See R. Lehtinen et al., COMPUTER SECURITY BASICS (O'Reilly, 2006), at 83 ["[T]he roots of the modern computer virus go back to 1949. This was when computer pioneer John Von Neumann presented a paper on the 'Theory and Organization of Complicated Automata,' in which he postulated that a computer program could reproduce."] See also Jeffrey O. Kephart et al., Fighting Computer Viruses, SCIENTIFIC AMERICAN, November 1997; DOROTHY E. DENNING and PETER J. DENNING, INTERNET BESIEGED (ACM Press, New York, 1998), at 74.
 JOHN MCAFEE AND COLIN HAYNES, COMPUTER VIRUSES, WORMS, DATA DIDDLERS, KILLER PROGRAMS, AND OTHER THREATS TO YOUR SYSTEM, at 26; FREDERICK B. COHEN, A SHORT COURSE ON COMPUTER VIRUSES (Wiley, 1994, 2d ed.), at 1-2. In his PhD dissertation, Dr. Cohen defined a virus simply as any program capable of self-reproduction. This definition appears overly general. A literal interpretation of the definition would classify even programs such as compilers and editors as viral. DOROTHY E. DENNING and PETER J. DENNING, INTERNET BESIEGED (ACM Press, New York, 1998), at 75.
 LANCE J. HOFFMAN (ed.), ROGUE PROGRAMS: VIRUSES, WORMS, TROJAN HORSES (Van Nostrand Reinhold, 1990), at 247 ("The ability to propagate is essential to a virus program"); DOROTHY E. DENNING and PETER J. DENNING, INTERNET BESIEGED (ACM Press, New York, 1998), at 73-75; David Harley, Robert Slade, and Urs E. Gattiker, VIRUSES REVEALED (2001), at 87 ["The infection mechanism is the code that allows the virus to reproduce and infect a target host, and thus to be a virus." (Emphasis added.)]
 Some viruses, known as sparse infectors, may try to slow the rate of infection to avoid detection, while fast infectors may attempt to infect as many hosts as possible. See David Harley, Robert Slade, and Urs E. Gattiker, VIRUSES REVEALED (2001), at 87.
 There are three mechanisms through which a virus can infect a host program. A virus may attach itself to its host as a shell, an add-on, or as intrusive code. A shell virus forms a layer ("shell") around the host code, so that the latter effectively becomes an internal subroutine of the virus. The host program is then replaced by a functionally equivalent program that includes the virus. The virus executes first, and then allows the host code to execute. Boot program viruses are typically shell viruses. Most viruses are of the add-on variety. They become part of the host by appending, or prepending, their code to the host code, without altering the host code. The viral code may alter the order of execution, allowing itself to execute first and then the host code. Macro viruses are typically add-on viruses. Intrusive viruses, in contrast, overwrite some or all of the host code, replacing it with its own code. See, e.g., DOROTHY E. DENNING and PETER J. DENNING, INTERNET BESIEGED (ACM Press, New York, 1998), at 81; PHILIP FRITES, PETER JOHNSTON AND MARTIN KRATZ, THE COMPUTER VIRUS CRISIS (Van Nostrand Reinhold, New York, 2d ed., 1992), at 73-75.
 The capability to change its form is known as polymorphism. Detecting polymorphic viruses requires a more complex algorithm than simple pattern matching. See, e.g., DOROTHY E. DENNING and PETER J. DENNING, INTERNET BESIEGED (ACM Press, New York, 1998), at 89. See, also, David Harley, Robert Slade, and Urs E. Gattiker, VIRUSES REVEALED (2001), at 87-88.
 The execution of a host may be triggered by human intervention, such as when a user double-clicks on an infected e-mail attachment. See Ed Skoudis, MALWARE FIGHTING MALICIOUS CODE (Prentice Hall, 2004), at 26, 27.
 Peter Szor, THE ART OF COMPUTER VIRUS RESEARCH AND DEFENSE (2005), Chapter 10.
 ICSA Labs 10th Annual Computer Virus Prevalence Survey 2004, Table 5 and Fig. 10, p. 15.
 A. Bisset and G. Shipton, Some Human Dimensions of Computer Virus Creation and Infection, 52 INTERNATIONAL J. HUM. COMPUTER STUD. (2000), 899; R. Ford, No Surprises in Melissa Land, 18 COMPUTERS AND SECURITY, 300-302.
 David Harley et al., VIRUSES REVEALED: UNDERSTAND AND COUNTER MALICIOUS SOFTWARE (Osborne/McGraw-Hill, 2001), 406-410.
 JAN HRUSKA, COMPUTER VIRUSES AND ANTI-VIRUS WARFARE, (Ellis Horwood Ltd., 1990), at 17, 18. (In addition to self-replicating code, viruses often also contain a payload. The payload is capable of producing malicious side-effects.) See, also, FREDERICK B. COHEN, A SHORT COURSE ON COMPUTER VIRUSES (Wiley, 1994, 2d ed.), at 8-15 (examples of malignant viruses and what they do.); JOHN MCAFEE AND COLIN HAYNES, COMPUTER VIRUSES, WORMS, DATA DIDDLERS, KILLER PROGRAMS, AND OTHER THREATS TO YOUR SYSTEM, at 61.
 Ed Skoudis, MALWARE FIGHTING MALICIOUS CODE (Prentice Hall, 2004), at 27; David Harley, Robert Slade, and Urs E. Gattiker, VIRUSES REVEALED (2001), at 88, 89; Meiring de Villiers, Computer Viruses and Civil Liability: A Conceptual Framework, TORT TRIAL AND INSURANCE PRACTICE LAW JOURNAL, Fall 2004 (40:1), 123, 172 [Discussion of damage due to virus infection.]
 JOHN MCAFEE AND COLIN HAYNES, COMPUTER VIRUSES, WORMS, DATA DIDDLERS, KILLER PROGRAMS, AND OTHER THREATS TO YOUR SYSTEM, at 61. See, also, Peter Szor, THE ART OF COMPUTER VIRUS RESEARCH AND DEFENSE (2005) [Describing "data diddlers" as viruses that "do not destroy data all of a sudden in a very evident form, ... but slowly manipulate the data, such as the content of the hard disk."]
 E.J. Sinrod and W.P. Reilly, Cyber-Crimes: A Practical Approach to the Application of Federal Computer Crime Laws, 16 SANTA CLARA COMPUTER & HIGH TECH. L.J. 177, 218 (describing the W95.LoveSong.998 virus, designed to trigger a love song on a particular date.)
 See, e.g., Eric J. Sinrod and William P. Reilly, Cyber Crimes A Practical Approach to the Application of Federal Computer Crime Laws, 16 SANTA CLARA COMPUTER & HIGH TECH. L. J. 177 (2000), at 217, n. 176.
 See Meiring de Villiers, Free Radicals in Cyberspace: Complex Liability Issues in Information Warfare, 4 NORTHWESTERN TECH & IP L. J. 13 (Fall, 2005).
 Viruses can cause economic losses by replicating and spreading, such as filling up available memory space, slowing down the execution of important programs, and locking keyboards. The Melissa virus, for instance, mailed copies of itself to everyone in the victim's e-mail address book, resulting in clogged e-mail servers and even system crashes. See, e.g., PHILIP FITES, PETER JOHNSTON AND MARTIN KRATZ, THE COMPUTER VIRUS CRISIS (Van Nostrand Reinhold, New York, 2d ed., 1992), 23-24 ("The Christmas card [virus] stopped a major international mail system just by filling up all available storage capacity."); David Harley, Robert Slade, and Urs E. Gattiker, VIRUSES REVEALED (2001), at 88 ["A virus does not necessarily need to have either a trigger or payload. A program with a trigger and payload but no replication mechanism, on the other hand, is not a virus, but may be described as a Trojan."]
 See, generally, John F. Schoch and Jon A. Hupp, The "Worm" Programs - Early Experience with a Distributed Computation, COMM. ACM, Vol 25, No 3, March 1982, 172.
 See David Harley, Robert Slade, and Urs E. Gattiker, VIRUSES REVEALED (2001), at 347-52.
 Takanen et al., Running Malicious Code By Buffer Overflows: A Survey of Publicly Available Exploits, 162. EICAR 2000 Best Paper Proceedings. ("The day when the world finally acknowledged the risk entailed in overflow vulnerabilities and started coordinating a response to them was the day when the Internet Worm was introduced, spread and brought the Internet to its knees.") Available at http://www.papers.weburb.dk.
 See Milkovich v. Lorain Journal Co., 497 U.S. 1, 21-22 (1990) [Stating that it can be objectively verified whether an individual had perjured himself, by looking at evidence of contradictions in his testimony, trial transcripts, and the testimony of other witnesses.]; Boule v. Hutton, 138 F.Supp. 2d 491, 504 (S.D.N.Y. 2001) ["Alleged defamatory statements should not be read in isolation, but should be reviewed within whole context of the publication as the average, reasonable, intended reader would."]
 See Peter Szor, THE ART OF COMPUTER VIRUS RESEARCH AND DEFENSE (2005), Ch. 11; Ed Skoudis, MALWARE FIGHTING MALICIOUS CODE (Prentice Hall, 2004), 51-64; David Harley, Robert Slade, and Urs E. Gattiker, VIRUSES REVEALED (2001), Ch. 6; DOROTHY E. DENNING and PETER J. DENNING, INTERNET BESIEGED (ACM Press, New York, 1998), at 90-93; KEN DUNHAM, BIGELOW'S VIRUS TROUBLESHOOTING POCKET REFERENCE, (McGraw-Hill 2000), at 78-83 and 102-108.
 Virus-specific technology, such as a signature scanner, detects known viruses by identifying patterns that are unique to each virus strain. It identifies the specific strain it has detected.
 Generic anti-virus technology detects the presence of a virus by recognizing generic virus-like behavior, usually without identifying the particular strain. Integrity checkers and activity monitors are generic detectors.
 JAN HRUSKA, COMPUTER VIRUSES AND ANTI-VIRUS WARFARE (Ellis Horwood, Ltd., 1990), at 42.
 See Ed Skoudis, MALWARE FIGHTING MALICIOUS CODE (Prentice Hall, 2004), 53, 54 ["The antivirus vendors collect virus specimens and 'fingerprint' them."]
 Jeffrey O. Kephart et al., Automatic Extraction of Computer Virus Signatures, Proceedings of the 4th Virus Bulletin International Conference, R. Ford, ed., Virus Bulletin Ltd., Abingdon, England, 1994, pp. 179-194, at 2.
 JAN HRUSKA, COMPUTER VIRUSES AND ANTI-VIRUS WARFARE (Ellis Horwood, Ltd., 1990), at 42. For short descriptions and hexadecimal patterns of selected known viruses, see HRUSKA at 43-52; Jeffrey O. Kephart et al., Blueprint for a Computer Immune System, IBM Thomas J. Watson Research Center Report, at 11 ("[A] signature extractor must select a virus signature carefully to avoid both false negatives and false positives. That is, the signature must be found in every instance of the virus, and must almost never occur in uninfected programs.")
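 The signature-scanning idea described in these notes can be sketched as a minimal pattern matcher; the strain names and hexadecimal signatures below are invented for illustration, and real signature databases hold tens of thousands of carefully chosen patterns:

```python
# Minimal sketch of signature scanning: search a file's bytes for patterns
# unique to known strains. Strain names and hex signatures are hypothetical.

SIGNATURES = {
    "Example.Alpha": bytes.fromhex("deadbeef0102"),
    "Example.Beta": bytes.fromhex("cafebabe0304"),
}

def scan(data: bytes) -> list:
    """Return the names of all known strains whose signature occurs in data."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

# Per Kephart et al., a well-chosen signature must appear in every instance
# of the virus and almost never in uninfected programs.
infected = b"prefix" + bytes.fromhex("deadbeef0102") + b"suffix"
```

A poorly chosen signature, one that also occurs in legitimate code, would make `scan` return a false positive, which is the problem the Kephart extract addresses.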
 Jeffrey O. Kephart et al., Automatic Extraction of Computer Virus Signatures, Proceedings of the 4th Virus Bulletin International Conference, R. Ford, ed., Virus Bulletin Ltd., Abingdon, England, 1994, pp. 179-194.
 Ed Skoudis, MALWARE FIGHTING MALICIOUS CODE (Prentice Hall, 2004), 54.
 ROBERT SLADE, ROBERT SLADE'S GUIDE TO COMPUTER VIRUSES (Springer, 2d ed., 1996), at 215 [False positives are comparatively rare in virus scanners. They can occur, however, if the digital signature for a given virus is not well chosen.]; David Harley, Robert Slade, and Urs E. Gattiker, VIRUSES REVEALED (2001), at 576.
 KEN DUNHAM, BIGELOW'S VIRUS TROUBLESHOOTING POCKET REFERENCE, (McGraw-Hill 2000), at 78-83; Jeffrey O. Kephart et al., Fighting Computer Viruses, SCIENTIFIC AMERICAN, November 1997. See, also, Sandeep Kumar and Eugene H. Spafford, A Generic Virus Scanner in C++, Technical report CSD-TR-92-062, Dept. of Computer Science, Purdue University, at 6-8.
 See, e.g., Pete Lindstrom, The Hidden Costs of Virus Protection, Spire Research Report, June 2003, at 5 ("In this day of 80,000+ known viruses and frequent discovery of new ones, the size of the signature file can be large, particularly if the updates are sent out as cumulative ones. Large updates can clog the network pipelines ... and reduce the frequency that an administrator will push them out to the end users.")
 Sandeep Kumar and Eugene H. Spafford, A Generic Virus Scanner in C++, Technical report CSD-TR-92-062, Dept. of Computer Science, Purdue University, at 3-4.
 Using one program to delete another, formatting a floppy disk, and boot sector changes resulting from upgrading the operating system are all "legitimate" operations that may trigger false alarms. See ROBERT SLADE, ROBERT SLADE'S GUIDE TO COMPUTER VIRUSES (Springer, 2d ed., 1996), at 40-41.
 JAN HRUSKA, COMPUTER VIRUSES AND ANTI-VIRUS WARFARE, (Ellis Horwood Ltd., 1990), at 75.
 PHILIP FITES, PETER JOHNSTON AND MARTIN KRATZ, THE COMPUTER VIRUS CRISIS (Van Nostrand Reinhold, New York, 2d ed., 1992), Figures 5.2-5.5, at 69-76; KEN DUNHAM, BIGELOW'S VIRUS TROUBLESHOOTING POCKET REFERENCE, (McGraw-Hill 2000), at 79. See, also, Sandeep Kumar and Eugene H. Spafford, A Generic Virus Scanner in C++, Technical report CSD-TR-92-062, Dept. of Computer Science, Purdue University, at 5-6.
 See Ed Skoudis, MALWARE FIGHTING MALICIOUS CODE (Prentice Hall, 2004), 58. [Integrity verification procedures can be used in antivirus software to detect viral infection. If a file has been inexplicably modified, then the file may be infected, and the antivirus program should take a closer look at it.]
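 The integrity verification approach can be sketched as follows, assuming SHA-256 fingerprints and a hypothetical baseline file name:

```python
import hashlib

# Sketch of integrity verification: record a cryptographic fingerprint of
# each file, then flag any file whose current fingerprint no longer matches
# the recorded baseline. An unexplained mismatch may indicate infection.

def fingerprint(contents: bytes) -> str:
    """SHA-256 hash of the file contents, as a hex string."""
    return hashlib.sha256(contents).hexdigest()

# Baseline recorded while the system is known to be clean (names hypothetical).
baseline = {"app.exe": fingerprint(b"original code")}

def unchanged(name: str, contents: bytes) -> bool:
    """True if the file still matches its recorded baseline fingerprint."""
    return baseline.get(name) == fingerprint(contents)
```

The check is generic rather than virus-specific: it detects that something modified the file without knowing which strain, which matches the "closer look" behavior Skoudis describes.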
 PHILIP FITES, PETER JOHNSTON AND MARTIN KRATZ, THE COMPUTER VIRUS CRISIS (Van Nostrand Reinhold, New York, 2d ed., 1992), at 125; Ed Skoudis, MALWARE FIGHTING MALICIOUS CODE (Prentice Hall, 2004), 58.
 ROBERT SLADE, ROBERT SLADE'S GUIDE TO COMPUTER VIRUSES (Springer, 2d ed., 1996) 157.
 A Microsoft Word document can, for instance, be expected to change when a user edits it, but modification in the case of a macro is much more suspicious.
 Diomidis Spinellis, Reliable Identification of Bounded-Length Viruses is NP-Complete, IEEE TRANSACTIONS ON INFORMATION THEORY, 49(1), 280, 282 (January 2003) (Stating that theoretically perfect detection is in the general case undecidable, and for known viruses, NP-complete.); Carey Nachenberg, Future Imperfect, VIRUS BULLETIN, August 1997, 6. See, also, Francisco Fernandez, Heuristic Engines, Proceedings of the 11th International Virus Bulletin Conference, Virus Bulletin Ltd., Abingdon, England, September 2001, pp. 407-444; Chess & White, An Undetectable Computer Virus, IBM Research Paper, at http://www.research.ibm.com/antivirus/SciPapers/VB2000DC.htm.
 Ed Skoudis, MALWARE FIGHTING MALICIOUS CODE (Prentice Hall, 2004), 56.
 Francisco Fernandez, Heuristic Engines, Proc. 11th Intl. Virus Bulletin Conference, Virus Bulletin Ltd., Abingdon, England, September 2001, at 409 ["Many genuine programs use sequences of instructions that resemble those used by viruses. Programs that use low-level disk access methods, TSRs, encryption utilities, and even anti-virus packages can all, at times, carry out tasks that are performed by viruses."]
 Carey Nachenberg, Future Imperfect, VIRUS BULLETIN, August 1997, at 7.
 Sandeep Kumar and Eugene H. Spafford, A Generic Virus Scanner in C++, Technical report CSD-TR-92-062, Dept. of Computer Science, Purdue University, at 4-5 ("Detection by static analysis/policy adherence.")
 The CPU, or central processing unit, of a computer is responsible for data processing and computation. See, e.g., JAN HRUSKA, COMPUTER VIRUSES AND ANTI-VIRUS WARFARE, (Ellis Horwood Ltd., 1990), at 115; D. BENDER, COMPUTER LAW: EVIDENCE AND PROCEDURE (1982), §2.02, at 2-7, -9.
 Sandeep Kumar and Eugene H. Spafford, A Generic Virus Scanner in C++, Technical report CSD-TR-92-062, Dept. of Computer Science, Purdue University, at 4.
 See e.g. Ed Skoudis, MALWARE FIGHTING MALICIOUS CODE (Prentice Hall, 2004), 58 ["The main limitation of the integrity verification method is that it detects the infection only after it occurs."]
 Polymorphic viruses have the ability to "mutate" by varying the code sequences written to target files. To detect such viruses requires a more complex algorithm than simple pattern matching. See, e.g., DOROTHY E. DENNING and PETER J. DENNING, INTERNET BESIEGED (ACM Press, New York, 1998), at 89.
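 The limitation can be illustrated with a toy model: the same payload XOR-encoded under two different keys yields different byte sequences, so a fixed-pattern scanner that knows one variant's signature misses the other (keys and payload text are invented):

```python
# Toy illustration of polymorphism defeating simple pattern matching.
# Real polymorphic viruses vary the code sequences written to target files;
# here the variation is simulated with a single-byte XOR encoding.

def xor_encode(payload: bytes, key: int) -> bytes:
    """XOR every byte with key; applying the same key twice decodes."""
    return bytes(b ^ key for b in payload)

PAYLOAD = b"identical malicious logic"
variant_a = xor_encode(PAYLOAD, 0x17)   # first "mutation"
variant_b = xor_encode(PAYLOAD, 0x5C)   # second "mutation", same logic

signature = variant_a[:8]  # a fixed signature extracted from variant A
```

Because `variant_b` shares no fixed byte pattern with `variant_a`, detecting both requires something beyond simple pattern matching, such as decryption-aware or heuristic analysis.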
 Carey Nachenberg, Future Imperfect, VIRUS BULLETIN, August 1997, at 9.
 376 U.S. 254 (1964).
 418 U.S. 323 (1974).
 475 U.S. 767 (1986).
 497 U.S. 1, 19 (1990).
 R. Smolla, THE LAW OF DEFAMATION (2d ed.), § 5:10 ["The better view is that the first amendment's protection of truth, like its protection of opinion, stands on its own footing, and is analytically distinct from fault rules. ... Just as under the first amendment there is 'no such thing' as a false idea, there should be 'no such thing' as liability for defamation for speaking the truth."]; R. Sack, SACK ON DEFAMATION, § 22.214.171.124 ["The Supreme Court has not decided whether the Constitution permits liability for truthful speech that fails the 'public concern' test, is not contained in the media, or both. But open or not, the question is largely academic. Even if courts may impose such liability, in practice they do not."] See also Dun & Bradstreet, Inc. v. Greenmoss Builders, Inc., 472 U.S. 749, 783-84, 11 Media L. Rep. (BNA) 2417 (1985) [Majority of justices find no constitutional basis for a press/non-press distinction in defamation cases.]; First Nat'l Bank v. Bellotti, 435 U.S. 765, 801, 3 Media L. Rep. (BNA) 2105 (1978). Lower courts routinely follow Hepps in non-media cases. See Burroughs v. FFP Operating Partners, L.P., 28 F.3d 543 (5th Cir. 1994).
 Vachet v. Central Newspapers, Inc., 816 F.2d 313, 316 (7th Cir. 1987); Zerangue v. TSP Newspapers, Inc., 814 F.2d 1066, 1073 (5th Cir. 1987) ["Truth is a defense to libel ... A publication is also protected if it is 'substantially true,' i.e., if it varies from the truth only in insignificant details or if its 'gist' or 'sting' is true."]; Guccione v. Hustler Magazine, Inc., 800 F.2d 298, 301 (2d Cir. 1986) ["'Substantial truth' suffices to defeat a charge of libel."]
 R. Smolla, THE LAW OF DEFAMATION (2d ed.), § 5:14.
 Masson v. New Yorker Magazine, 501 U.S. 496, 516-17 (1991) ["The common law of libel takes but one approach to the question of falsity, regardless of the form of the communication ... It overlooks minor inaccuracies and concentrates upon substantial truth."]
 746 F.2d 1563, 1568 n. 6 (D.C. Cir. 1984).
 Subsequent courts have adopted Judge Scalia's example in applying the substantial truth doctrine. See, e.g., Moldea v. New York Times Co., 22 F.3d 310, 319 (D.C. Cir. 1994).
 Masson v. New Yorker Magazine, Inc., 501 U.S. 496, 517 (1991); Gomba v. McLaughlin, 180 Colo. 232, 236, 504 P.2d 337, 339 ["The question, a factual one, is whether there is a substantial difference between the allegedly libelous statement and the truth; or stated differently, whether the statement produces a different effect upon the reader than that which would be produced by the literal truth of the matter."]; R. Smolla, THE LAW OF DEFAMATION (2d ed.), § 5:24 [A statement is not substantially true if the sting of the statement is worse than the exact truth.]
 Franklin & Bussel, The Plaintiff's Burden in Defamation: Awareness and Falsity, 25 WM. & MARY L. REV. 825, 828 (1984).
 PROSSER & KEETON ON TORTS § 116, at 782-83 (5th ed. 1984) ["[I]t remains a question for the court whether the meaning claimed might reasonably be conveyed, and for the jury whether it was so understood."]
 PROSSER & KEETON ON TORTS § 116, at 781 (5th ed. 1984) ["If the language used is open to two meanings, as in the case of the French word 'cocotte', which ... signifies either a prostitute or a poached egg, it is for the jury to determine whether the defamatory sense was the one conveyed."]
 The Restatement (Second), Torts, § 617(b) (1977) ("Subject to the control of the court whenever the issue arises, the jury determines whether ... the matter was true or false ..."); Templeton v. Rogers, 450 S.W.2d 900, 901 (Tex. Civ. App. Beaumont 1970) ("We hold, therefore, that the jury's finding of the truth of the statements made is a defense to the libel claim, despite the other apparently inconsistent findings and the award of damages.")
 708 F.2d 944 (5th Cir. 1983).
 Golden Bear, at 749.
 Golden Bear, at 749.
 The national information infrastructure is an interrelated system of computer and communication networks that control and coordinate essential infrastructures, such as water supplies, banking and financial services, telecommunications services, and electrical power. It also includes computer networks that coordinate and control military communications and logistics. The private sector plays a dominant role in the critical information infrastructure. Most infrastructures are owned by the private sector and the Defense Information Systems Agency depends heavily on commercial communication networks. See e.g. CENTER FOR STRATEGIC AND INTERNATIONAL STUDIES, CYBERCRIME, CYBERTERRORISM, CYBERWARFARE: AVERTING AN ELECTRONIC WATERLOO, xiv-xv, Stanford University (1998).
 See, e.g., R. Lehtinen et al., COMPUTER SECURITY BASICS (O'Reilly, 2006), at 27 ["In the 2000s, particularly after the attacks of 9/11, security took on a serious tone. Corporations and government alike became more willing to make security an integral part of their products and their jobs."] See, also, R. Lehtinen et al., COMPUTER SECURITY BASICS (O'Reilly, 2006), at 27 ["The challenge of this decade will be to consolidate what we have learned - to build computer security into our products and our daily routines, to protect data without unnecessarily impeding our ability to access it, and to make sure that both security products and government and industry standards grow to meet the ever-increasing scope and challenges of technology."]
 See e.g., Matt Bishop, INTRODUCTION TO COMPUTER SECURITY, (Pearson Education, 2005), at 1-6; R. Lehtinen et al., COMPUTER SECURITY BASICS (O'Reilly, 2006), at 9.
 See R. Lehtinen et al., COMPUTER SECURITY BASICS (O'Reilly, 2006), at 9 ["Data is confidential if it stays obscure to all but those authorized to use it."]
 See R. Lehtinen et al., COMPUTER SECURITY BASICS (O'Reilly, 2006), at 11.
 See R. Lehtinen et al., COMPUTER SECURITY BASICS (O'Reilly, 2006), at 9.
 See David Harley, Robert Slade, and Urs E. Gattiker, VIRUSES REVEALED (2001), at 97 ["Direct damage can be considered in terms of the classic tripartite model (namely) Availability, Integrity, Confidentiality. Viruses ... have an impact across all three areas described by this model, as well as other areas, such as accountability."]
 See, e.g., David Harley, Robert Slade, and Urs E. Gattiker, VIRUSES REVEALED (2001), at 7 ["Only the presence of the infection mechanism is mandatory if the program is to be defined as viral. Payload and trigger are optional."]
 This section argues that a virus is capable of threatening all aspects of information security through its infection module alone.
 See Ed Skoudis, MALWARE FIGHTING MALICIOUS CODE (Prentice Hall, 2004), 34 [A virus could "steal files from your machine, especially sensitive ones containing personal, financial, or other sensitive information."] Viruses also monitor user keystrokes and transmit information about the user's computing habits, Web sites visited, and financial information to the attacker. See Ed Skoudis, MALWARE FIGHTING MALICIOUS CODE (Prentice Hall, 2004), at 3.
 BBC News Bulletin, Virus Makes Unwelcome Return, 5 June, 2003. Available at http://news.bbc.co.uk/1/hi/technology/2965924.stm. See also Chariot Security Information, Jan. 25, 2007 [Describing the W32.Sobig and Klez worms, which have been programmed to steal confidential information on infected machines.] Available at http://www.chariot.net.au/; Gregg Keizer, Virus Posing as Microsoft e-Mail Spreads Fast, InformationWeek, Sept. 19, 2003 [Describing a fast-spreading worm which attempts to steal confidential information from infected systems.] Available at http://www.informationweek.com/.
 Peter Szor, THE ART OF COMPUTER VIRUS RESEARCH AND DEFENSE (2005), at 309.
 See Computer Virus, The Columbia Encyclopedia, Sixth Ed. ["Although some viruses are merely disruptive, others can destroy or corrupt data or cause an operating system or applications program to malfunction."] Available at http://www.bartleby.com/65/co/computer-vir.html.
 BBC News Bulletin, 21 January, 2003, Computer Virus Author Jailed, Available at http://news.bbc.co.uk/2/hi/uk_news/wales/2678773.stm.
 Peter Szor, THE ART OF COMPUTER VIRUS RESEARCH AND DEFENSE (2005), at 296, 297.
 Peter Szor, THE ART OF COMPUTER VIRUS RESEARCH AND DEFENSE (2005), at 297.
 See David Harley, Robert Slade, and Urs E. Gattiker, VIRUSES REVEALED (2001), at 94 ["Network and mail viral programs carry, in a sense, their own payloads. The reproduction of the programs themselves use the resources of the hosts affected and, in the cases of both the Morris Internet and CHRISTMA worms, went so far as to deny service by using all available computing or communications resources."]; Greg Hoglund and Gary McGraw, EXPLOITING SOFTWARE: HOW TO BREAK CODE (Pearson, 2004), at 20 ["Worms allow an attacker to carpet bomb a network in an unbridled exploration that attempts to exploit a given vulnerability as widely as possible. This amplifies the overall effect of an attack and achieves results that could never be obtained by manually hacking one machine at a time."] See, also, Peter Szor, THE ART OF COMPUTER VIRUS RESEARCH AND DEFENSE (2005), at 306-307.
 See S. Gibson, The Strange Tale of the Denial of Service Attacks against GRC.COM, at http://grc.com/dos/grcdos.htm.
 The attacks occurred shortly after Microsoft had discovered the vulnerability and issued a patch to fix it. Microsoft, A Very Real and Present Threat to the Internet. http://www.microsoft.com/technet/treeview/default.asp?url=/technet/security/topics/codealrt.asp.
 K.J. Houle, Trends in Denial of Service Attack Technology, CERT COORDINATION CENTER (October 2001), 19.
 Peter Szor, THE ART OF COMPUTER VIRUS RESEARCH AND DEFENSE (2005), at 306.
 BBC News Bulletin, 4 May, 2004, Worm Brings Down Coastguard PCs, Available at http://news.bbc.co.uk/2/hi/technology/3682803.stm.
 R. Smolla, THE LAW OF DEFAMATION (2d ed.), § 5:19 ["When the defamatory allegation is narrow and specific, the evidence of truth must more strictly conform to the allegation."]
 Roper v. Mabry, 15 Wash. App. 819, 551 P.2d 1381 (1976).
 See Barlow v. International Harvester Co., 95 Idaho 881, 522 P.2d 1102 (1974) [Proof of one criminal act does not justify a charge alleging commission of a different crime.]
 See R. Smolla, THE LAW OF DEFAMATION (2d ed.), § 5:19.
 The defamation defendant does not, of course, bear the burden of proving falsity, but may choose to plead truth as a defense.
 See R. Smolla, THE LAW OF DEFAMATION (2d ed.), § 5:19. See also Dan B. Dobbs, THE LAW OF TORTS (Practitioner Treatise Series, Vol. 2, 2001), at 1148 ["[I]f (a) the publication states facts similar to the truth and (b) the sting of the publication is substantially equivalent to the sting of the truth, the truth defense should ordinarily apply."]
 Masson v. New Yorker Magazine, Inc., 501 U.S. 496, 517 (1991). See also Chung v. Better Health Plan, 1997 U.S. Dist. LEXIS 9627 (S.D.N.Y. 1997).
 A logic bomb lies dormant until an event occurs, such as a pre-programmed date or time being reached. It then activates and executes a payload, but it cannot replicate and spread. See Peter Szor, THE ART OF COMPUTER VIRUS RESEARCH AND DEFENSE (2005), at 30. [A logic bomb always has a payload, but unlike a virus, it has no infection module.] A logic bomb has been compared to a real-world landmine. See TECH FAQ, What is a Logic Bomb?, Available at http://www.tech-faq.com.
 See Masson v. New Yorker Magazine, Inc., 501 U.S. 496, 516-17 (1991) ["A technically false statement may nonetheless be considered substantially true if, viewed 'through the eyes of the average reader,' it differs from the truth 'only in insignificant details.'"]
 FREDERICK B. COHEN, A SHORT COURSE ON COMPUTER VIRUSES (Wiley, 1994, 2d ed.), at 25. See, also, Clive Gringras, THE LAWS OF THE INTERNET (Butterworths, 1997), at 58 ("A computer harbouring a virus can, in a matter of hours, spread across continents, damaging data and programs without reprieve."); Bradley S. Davis, It's Virus Season Again, Has Your Computer Been Vaccinated? A Survey of Computer Crime Legislation as a Response to Malevolent Software, 72 WASHINGTON UNIVERSITY LAW QUARTERLY, 379, 437 n. 225 and accompanying text ("[A] user whose computer was infected could connect to an international network such as the Internet and upload a file onto the network that contained a strain of malevolent software. If the software was not detected by a scanning system ... on the host computer, infection could spread throughout the Internet through this simple exchange of data."); How Fast a Virus Can Spread, in PHILIP FITES, PETER JOHNSTON AND MARTIN KRATZ, THE COMPUTER VIRUS CRISIS (Van Nostrand Reinhold, New York, 2d ed., 1992), at 21.
 ICSA Labs 9th Annual Computer Virus Prevalence Survey 2003, at 25.
 See Barlow v. International Harvester, Inc., 95 Idaho 881, 522 P.2d 1102 (1974) (Proof that plaintiff had committed one crime does not justify a false allegation that he had committed a different crime.); PROSSER & KEETON ON TORTS § 116, at 841 (5th ed. 1984).
 See Milkovich v. Lorain Journal Co., 497 U.S. 1, 21-22 (1990) [Stating that it can be objectively verified whether an individual had perjured himself, by looking at evidence of contradictions in his testimony, trial transcripts, and the testimony of other witnesses.]; Boule v. Hutton, 138 F.Supp. 2d 491, 504 (S.D.N.Y. 2001) ["Alleged defamatory statements should not be read in isolation, but should be reviewed within whole context of the publication as the average, reasonable, intended reader would."]
 David Harley, Robert Slade, and Urs E. Gattiker, VIRUSES REVEALED (2001), at 9.
 FTP, or File Transfer Protocol, is a standard protocol for connecting two computers so that files can be transferred between them.
 David Harley, Robert Slade, and Urs E. Gattiker, VIRUSES REVEALED (2001), at 8, 144.
 For instance, a PC-specific program infected by a PC-specific file virus may reside in an e-mail attachment on a Macintosh; if the attachment is transmitted to a PC, the virus could execute there.
 See e.g., Matt Bishop, INTRODUCTION TO COMPUTER SECURITY, (Pearson Education, 2005), at 4 ["A threat is a potential violation of security. The violation need not actually occur for there to be a threat. The fact that the violation might occur means that those actions that could cause it to occur must be guarded against (or prepared for.) Those actions are called attacks. Those who execute such actions, or cause them to be executed, are called attackers."]
 See Restatement (Second) of Torts, § 581A, comment g (1977). ["The truth of a defamatory imputation of fact must be determined as of the time of the defamatory publication."]
 See ROBERT SLADE, ROBERT SLADE'S GUIDE TO COMPUTER VIRUSES (Springer, 2d ed., 1996), at 215.
 This false positive problem can be avoided if producers of anti-virus scanners properly encrypt all virus signatures in every part of their products, including the scanning engine and the virus definition files. See Andreas Marx, Anti-Virus vs. Anti-Virus: False Positives in AV Software, VIRUS BULLETIN (October 2003), 17-18.
 Andreas Marx reports the following ghost positive incident. The anti-virus software, AntiVir, was written to disinfect systems infected with the worm Win32/Qaz. Its disinfection routine included storing the strings "StartIE" and "qazwsx.hsq" in plain text to delete keys created by Win32/Qaz. The presence of these strings was detected by another anti-virus product, namely Network Associates' VirusScan, which flagged AntiVir as a possible variant of the Win32/Qaz worm. Andreas Marx, Anti-Virus vs. Anti-Virus: False Positives in AV Software, VIRUS BULLETIN (October 2003), at 18. See also David Harley, Robert Slade, and Urs E. Gattiker, VIRUSES REVEALED (2001), at 57, 78.
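 The ghost positive incident can be sketched as follows, using the two strings reported by Marx; the scanner logic is a toy simplification, not either vendor's actual detection routine:

```python
# Sketch of a "ghost positive": one product stores a worm's identifying
# strings in plain text for its cleanup routine, and a second scanner that
# searches for those strings then flags the first product itself.

WORM_STRINGS = [b"StartIE", b"qazwsx.hsq"]  # strings associated with Win32/Qaz

def looks_like_qaz(file_bytes: bytes) -> bool:
    """Toy scanner: flag any file containing the worm's plain-text strings."""
    return any(s in file_bytes for s in WORM_STRINGS)

# A disinfector that embeds the strings unencrypted (to delete the worm's
# registry keys) is itself flagged:
disinfector = b"...cleanup code... StartIE ... qazwsx.hsq ..."

# Storing an obfuscated form of the strings avoids the collision:
obfuscated = bytes(b ^ 0xFF for b in b"qazwsx.hsq")
hardened = b"...cleanup code..." + obfuscated
```

This is the same point Marx makes about encrypting signatures: the detection data itself must never appear in cleartext anywhere another scanner might look.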
 PHILIP FITES, PETER JOHNSTON AND MARTIN KRATZ, THE COMPUTER VIRUS CRISIS (Van Nostrand Reinhold, New York, 2d ed., 1992), at 125.
 Ed Skoudis, MALWARE FIGHTING MALICIOUS CODE (Prentice Hall, 2004), 36.
 See George E. Andrews, NUMBER THEORY, (Dover, 1994), at 20, 15 ["A positive integer p, other than 1, is said to be a prime if its only positive divisors are 1 and p. ... If a and b (b ≠ 0) are integers, we say b divides a, or b is a divisor of a, if a/b is an integer."]
A computational method, known as Eratosthenes' sieve method, can be used to test whether an integer greater than two can be written as the sum of two primes. See, e.g., A. Granville, J. van de Lune, and H.J.J. te Riele, Checking the Goldbach Conjecture on a Vector Computer, in NUMBER THEORY AND APPLICATIONS (R.A. Mollin, ed.), Kluwer, Dordrecht, 1989, at 423-33.
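 The sieve-based check can be sketched as follows; this is a small-scale illustration, not the vector-computer computation described in the cited paper:

```python
# Small-scale sketch of checking the Goldbach conjecture: generate primes
# with Eratosthenes' sieve, then verify each even n > 2 up to a limit can
# be written as a sum of two primes.

def sieve(limit: int) -> set:
    """Primes up to limit, via Eratosthenes' sieve."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return {n for n, flag in enumerate(is_prime) if flag}

def goldbach_pair(n: int, primes: set):
    """Return (p, q) with p + q == n, p <= q, both prime; None if none exists."""
    for p in sorted(primes):
        if p > n // 2:
            break
        if (n - p) in primes:
            return (p, n - p)
    return None

primes = sieve(10_000)
# Every even number from 4 to 10,000 has a Goldbach decomposition:
assert all(goldbach_pair(n, primes) for n in range(4, 10_001, 2))
```

A check like this can confirm the conjecture only up to its limit; it can never prove it for all even integers, which is the unverifiability point the text draws on.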
 A virus' infection mechanism may also have a trigger. See David Harley, Robert Slade, and Urs E. Gattiker, VIRUSES REVEALED (2001), at 7. ["Furthermore, if the virus is at all selective about the circumstances under which it will attempt to infect, the infection mechanism may also incorporate a trigger."]
 See Matt Bishop, INTRODUCTION TO COMPUTER SECURITY, (Pearson Education, 2005), at 4 ["A threat is a potential violation of security. The violation need not actually occur for there to be a threat. The fact that the violation might occur means that those actions that could cause it to occur must be guarded against (or prepared for.)"]
 See George E. Andrews, NUMBER THEORY, (Dover, 1994), at 111. The fame of the Goldbach Conjecture has even inspired a novel, namely UNCLE PETROS AND GOLDBACH'S CONJECTURE, by Apostolos Doxiadis (Bloomsbury USA, 2000).
 See Arturo Sangalli, THE IMPORTANCE OF BEING FUZZY AND OTHER INSIGHTS FROM THE BORDER BETWEEN MATH AND COMPUTERS, at 80 ["[N]o one has yet found an even number that is not the sum of two primes; but nor has anyone demonstrated that such a number cannot exist, so the question is still unsettled."] See, also, John Derbyshire, PRIME OBSESSION (Joseph Henry Press, 2003), at 90 ["Twenty six decades of effort by some of the best minds on the planet have failed to prove or disprove this simple assertion."]
 497 U.S. 1 (1990).
 497 U.S. 1, 19 (1990).
 See Lewis v. Aluminum Co. of America, 588 So2d 167, 170 (La App 1991); Nehrenz v. Dunn, 593 So2d 915 (La App 1992). See generally Karen Manfield, Comment: Imposing Liability on Drug Testing Laboratories for "False Positives": Getting Around Privity, 64 U. CHI. L. REV. 287, 299-302, n. 32; Note: Employee Drug-Testing Legislation: Redrawing the Battlelines in the War on Drugs, 39 STAN. L. REV. 1453, 1458-59 (1987). [Discussing settlement of negligence action against testing laboratory by two job applicants whose applications were rejected due to false positive tests for marijuana.]
 See Willis v. Roche Biomedical Laboratories, Inc., 61 F.3d 313 (5th Cir. 1995); Houston Belt & Terminal Ry. v. Wherry, 548 S.W.2d 743 (Tex. Civ. App. 1976), appeal dismissed, 434 U.S. 962 (1977).
 See e.g. R.J. and P.J. v. Humana of Florida, Inc., 652 So.2d 360 (1995). [Petitioner, who was misdiagnosed as having Human Immunodeficiency Virus (HIV), filed suit against the medical facility, laboratory and physician responsible for the misdiagnosis, claiming physical and mental anguish. The Florida Supreme Court recognized that a negligent misdiagnosis that results in physical injuries, e.g. from unnecessary treatment, may be recoverable in tort. The Court held that in this particular case petitioner's alleged injuries were insufficient to satisfy the so-called "impact rule," which limits recovery for negligent infliction of emotional distress to physical injuries.]