Computer Architecture Today

Informing the broad computing community about current activities, advances and future directions in computer architecture.

In a recent opinion post on security disclosures, Uht questions whether the public disclosure of hardware security vulnerabilities has had any benefits, and suggests that it would be better not to disclose them. As Uht points out, the debate on security disclosures is hardly new, and the current consensus favors disclosure for some very good reasons. In this article, we reiterate the case for public disclosure of vulnerabilities.

  • Public disclosure is needed to keep vendors honest and investing in security. Companies selling software, hardware, medicine, cigarettes, soda pop, or products of any other kind all have strong incentives to downplay the risks associated with their products. Refuting such claims is essential to giving customers accurate information. Uht raises the concern that public discourse about vulnerabilities can “ratchet up the public’s anxieties”, but customer pressure (i.e., “anxieties”) is the main reason technology developers invest in security, since they are generally not held liable for bugs. Eliminating customers’ access to factual information about past security failings would result in worse products.
  • Public disclosure makes the world safer, and white hats do not “aid and abet” thieves. Uht writes: “[It] is our view that this is not the best thing to do since it effectively broadcasts weaknesses, and thus aids and abets black hat hackers as to the best ways to compromise systems.” This argument supposes that black hats cannot identify vulnerabilities on their own, or that if they do, they will use their exploits benignly. This is often not true. In fact, just last week we heard that 0-day exploits were (again) used indiscriminately by lawless nation-states to jeopardize the lives and safety of individuals. It is often better for honest researchers to disclose than for everyone to be surprised by exploits in the wild.
  • Public disclosure is needed for (security) research and (secure) development. For interesting bugs, the initial report often motivates further research. Spectre, which Uht cites repeatedly, is a great example. While the embargo provided some time for emergency mitigation work, the main research effort on fixes will take a long time [CACM]. Uht suggests that even the knowledge that a vulnerability exists, not just how it can be exploited, is dangerous to reveal (“Just knowing that something can be done is enough to drive others to successful re-invention.”). However, knowing that something is possible does not automatically lead to its (re)invention: for instance, we know the brain exists and can do wonderful computations, but we have not been able to (reverse) engineer it! We know that human missions to outer space are possible, yet only 12 humans have ever set foot on the Moon. Finding these important vulnerabilities, and showing how they can be exploited, is a collaborative, creative act requiring significant effort and specialized knowledge; the sketch after this list illustrates this, since even a fully disclosed code pattern is only a small first step toward a working exploit. Public disclosure is a way to enable such activities: keeping these issues secret will not create the critical mass or funding necessary for progress on this important topic.
  • Embargoes before public disclosure offer a reasonable solution. We note that embargoes trade off giving vendors a head start to develop fixes (good) against leaving customers unprotected from a vulnerability that adversaries may find independently or steal from someone involved in the response (bad). Further, once a vendor distributes a patch, there is often little benefit to keeping the disclosure secret, since patches can be reverse engineered to identify both the issue and the fix.
  • Public disclosure and fixes should not be determined by the prevalence of exploits in the wild. Security is a full-system property, and hardware is the lowest layer of the system stack; any vulnerability there can be exploited to catastrophic effect because it undermines the fundamental abstractions used to build everything above it. For the same reason, it would not be sound to rely on the traditional antivirus (AV) metrics that Uht cites, such as the number of sample files seen in the wild: virus scanners may entirely miss malware that uses hardware vulnerabilities. The reason is simple: before the public disclosure, signature-based AV systems would not have known to look for these vulnerabilities, and after the disclosure we have come to realize that AV products can themselves be compromised by these hardware vulnerabilities, bringing their effectiveness into question. Putting aside this lack of detection support, it is unclear whether it is ethical for a company or its engineers to punt on protecting against a known vulnerability just because it has not been seen widely in the real world (see Section 2.9 of the ACM Code of Ethics, https://www.acm.org/code-of-ethics).
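
To make concrete how little the mere knowledge of a vulnerability hands to an attacker, below is a minimal sketch in C of the well-known bounds-check-bypass code pattern from the public Spectre variant 1 disclosure. It shows only the vulnerable pattern, not a working attack: an actual exploit additionally requires training the branch predictor, establishing a cache side channel (e.g., flush+reload), and careful timing measurement, exactly the kind of specialized effort discussed above.

    /* A minimal sketch of the bounds-check-bypass pattern from the
     * public Spectre variant 1 disclosure. This is only the vulnerable
     * code pattern, not a working exploit. */
    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    size_t  array1_size = 16;
    uint8_t array2[256 * 4096];   /* one cache-line-spaced slot per possible byte value */
    uint8_t temp;                 /* keeps the compiler from removing the access */

    void victim_function(size_t x) {
        if (x < array1_size) {
            /* Speculatively, the branch above may be predicted taken even
             * when x is out of bounds; the out-of-bounds byte array1[x]
             * then selects which line of array2 gets cached, leaking its
             * value through the cache state. */
            temp &= array2[array1[x] * 4096];
        }
    }

    int main(void) {
        victim_function(0);  /* an in-bounds call; a real attack would first
                                train the predictor with many such calls and
                                then pass a malicious out-of-bounds x */
        return 0;
    }

Note how unremarkable this code looks: turning it into an exploit took months of effort by several expert teams, which is precisely why publication of such patterns enables defensive research far more than it helps attackers.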

As we mentioned before, the role of disclosure has been debated several times, and the consensus favors disclosure. Here we provide some historical context for this debate.

In the article “Rudimentary Treatise on the Construction of Locks”, published in 1853, Charles Tomlinson opines: “If a lock, let it have been made in whatever country, or by whatever maker, is not so inviolable as it has hitherto been deemed to be, surely it is in the interest of honest persons to know this fact, because the dishonest are tolerably certain to be the first to apply the knowledge practically; and the spread of knowledge is necessary to give fair play to those who might suffer by ignorance.” It has long been recognized that disclosure by the good guys is always better than discovery and use by the bad guys.

In the early 2000s, there was debate over whether scientific journals should publish articles on bioterrorism: one article in particular analyzed how much toxin would have to be added to the milk supply, and at which points in the supply chain, to maximize casualties. While this is not the same as the disclosure of vulnerabilities, the resolution offers some lessons. When the paper was submitted for publication, there was concern that this information, if published, could aid terrorists. The article was eventually published in PNAS, and Alberts, then president of the National Academy of Sciences (which publishes PNAS), made the case for publication despite the sensitive nature of the topic: “It is important to recognize that publishing terrorism-related analysis in the open scientific literature can make the nation safer […] science can make many important contributions to the design of our defenses.”

Overall, we urge the research community to keep publishing disclosures to force companies to make more secure products. If anyone needs to be stopped, it is the bad guys, or the companies that ship products with insecurities embedded in them.

About the Authors: Simha Sethumadhavan is an associate professor of Computer Science at Columbia University. His research interests are in computer architecture and computer security. Steven M. Bellovin is a professor of Computer Science at Columbia University; his research interests are in computer security, privacy, and law. He is the author of the book Thinking Security. Paul Kocher is a long-time independent security researcher. Ed Suh is a professor of Electrical and Computer Engineering at Cornell University; his research interests are in computer architecture and security.

Image source attribution: Alpha Stock Images – http://alphastockimages.com/

Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.