Computer Architecture Today

Informing the broad computing community about current activities, advances and future directions in computer architecture.

There are millions of viruses and other forms of malware in the wild today, and black-hat hackers devise countless new ones all the time. To proactively defend against new exploits, some white-hat hackers seek out or create weaknesses or vulnerabilities and then devise fixes for them. However, in some cases, such as Spectre, fixes are not readily apparent, either to the inventor of the vulnerability or to the vendor of the target software or hardware. Regardless of whether a fix exists, the question arises as to what to publicize or disclose about the vulnerability. We argue that no public disclosure should be made at all, until and unless the exploit appears in the wild.

The norm today is to fully disclose vulnerabilities, most often following the tenets of responsible disclosure. It is our view that this is not the best course of action, since it effectively broadcasts weaknesses and thus aids and abets black-hat hackers by showing them the best ways to compromise systems.

With the complexity of current hardware and software systems arising from billions of transistors and millions of lines of code, it is unlikely that any system will ever be bug-free or vulnerability-free. There are effectively an infinite number of unknown vulnerabilities: “Every day, the AV-TEST Institute registers over 350,000 new malicious programs (malware) and potentially unwanted applications (PUA).” What, then, is the point of actively ‘discovering’ new vulnerabilities and disclosing them? Such vulnerabilities are effectively being invented, and their disclosure empowers black hats to wreak havoc without making systems safer. It is a race to the bottom. At the same time, it can unnecessarily ratchet up the public’s anxieties.

Pros and Cons: Many arguments for full disclosure have been made over the years, e.g., Schneier: Full Disclosure of Security Vulnerabilities a ‘Damned Good Idea’, Hardware Security (and references therein), and Reflections on trusting SGX. However, they all seem to miss the basic point: if you don’t want to be blown up, you don’t tell the world how to make and use a bomb. Better yet, don’t even tell the world that such a thing as a ‘bomb’ exists. Just knowing that something can be done is enough to drive others to successful re-invention.

One argument for full disclosure is that companies will not fix vulnerabilities unless they are forced to. However, at the risk of excusing less-than-ideal behavior, looking at the situation from a company’s point of view shows that inattention to a fix may be reasonable. There are a plethora of vulnerabilities and bugs to be fixed at any given time, and resources are limited, so where should those resources be allocated? Logically, they should go to the problems with the highest potential for damage, that is, to minimizing overall risk. The vulnerabilities presenting the greatest risk are those that are widely known and have large deleterious effects, in other words, exactly those that have been disclosed and widely publicized. If a vulnerability has little effect, no one will care about it and it will not lead the news.
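To make that resource-allocation logic concrete, here is a minimal sketch (not from the post) of risk-based triage under the usual expected-loss view of risk, roughly likelihood times impact; the vulnerability names and numbers are purely illustrative assumptions:

```c
/* Minimal sketch of risk-based triage: score each vulnerability by
 * expected loss = likelihood * impact, then fix in descending order.
 * Names and numbers are illustrative only. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    const char *name;
    double likelihood;  /* estimated probability of exploitation, 0..1   */
    double impact;      /* estimated damage if exploited, arbitrary units */
} Vuln;

static int by_expected_loss(const void *a, const void *b) {
    double ra = ((const Vuln *)a)->likelihood * ((const Vuln *)a)->impact;
    double rb = ((const Vuln *)b)->likelihood * ((const Vuln *)b)->impact;
    return (rb > ra) - (rb < ra);   /* sort in descending order of risk */
}

int main(void) {
    Vuln v[] = {
        { "widely publicized, high impact", 0.80, 9.0 },
        { "undisclosed, high impact",       0.05, 9.0 },
        { "publicized, low impact",         0.60, 1.0 },
    };
    qsort(v, 3, sizeof v[0], by_expected_loss);
    for (int i = 0; i < 3; i++)
        printf("%d. %s (expected loss %.2f)\n",
               i + 1, v[i].name, v[i].likelihood * v[i].impact);
    return 0;
}
```

Under this kind of triage, the widely publicized, high-impact vulnerability is fixed first and the undisclosed one last, which is exactly the behavior described above.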

Even with responsible disclosure it may be the case that a fix cannot be made quickly, yet the vulnerability inventor decides to fully disclose it anyway. In that case users will be exposed for possibly a long time, if not permanently. Without an available fix it seems irresponsible to disclose such a vulnerability in any way, even ‘responsibly.’ Just such an apparently indefinite delay occurred with Spectre: it was fully disclosed in January 2018, and it was not until mid-summer that any kind of effective mitigation that did not severely impact performance was devised, and then not for all processors. A counter-argument can be made that mitigations would not have been devised without full disclosure, since potential mitigation-creators would not have known about the vulnerability; however, such mitigations might still have come too late.

Post Mortem: Was/is there a benefit to the Spectre disclosure? The implementation of actual exploits is sufficiently complex and system-dependent that Spectre has not been widely used (yet); see: There is no evidence in-the-wild malware is using Meltdown or Spectre, Does malware based on Spectre exist?, and oo7: Low-overhead Defense against Spectre Attacks via Binary Analysis. We may not be so lucky the next time. Although hardware micro-architects are now aware that security needs to be a first-class design parameter, black-hat hackers now have another vulnerability dimension to pursue; who knows what they will come up with? The world has been shaken up by the disclosure; was that necessary and helpful?
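To illustrate why exploitation is so system-dependent, the sketch below shows the shape of the well-known Spectre variant 1 (bounds-check-bypass) pattern from the public Spectre disclosure. It is not a working exploit: everything a real attack also needs, mistraining the branch predictor, evicting the bounds variable from the cache, and timing cache lines to recover the leaked value, is processor- and system-specific, which is precisely what has limited in-the-wild use so far.

```c
/* Sketch of the well-known Spectre variant 1 gadget (bounds-check bypass).
 * Not a working exploit: a real attack must additionally mistrain the
 * branch predictor, evict array1_size from the cache, and use a timing
 * side channel (e.g., flush+reload on array2) to recover the leaked byte;
 * all of those steps are highly processor- and system-dependent. */
#include <stddef.h>
#include <stdint.h>

#define STRIDE 4096              /* spread values across distinct cache lines/pages */

uint8_t  array1[16];             /* data the victim may legitimately read           */
size_t   array1_size = 16;
uint8_t  array2[256 * STRIDE];   /* probe array whose cache state encodes the leak  */
volatile uint8_t sink;           /* keeps the load from being optimized away        */

/* Victim code: the bounds check is architecturally correct, but while
 * array1_size is uncached the CPU may speculate past the check with an
 * attacker-chosen out-of-bounds x, leaving a secret-dependent cache footprint. */
void victim_function(size_t x) {
    if (x < array1_size) {
        sink = array2[array1[x] * STRIDE];
    }
}

int main(void) {
    victim_function(0);          /* benign in-bounds call, shown only so the sketch builds and runs */
    return 0;
}
```

Even with such a gadget in hand, an attacker still needs victim code containing the pattern plus a reliable, calibrated timing channel on the target machine, which is consistent with the cited reports finding no Spectre-based malware in the wild.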

We can’t always tell what’s going to happen upon a disclosure; doesn’t that mean we should be cautious, play it safe, and thus not disclose? Isn’t that the engineering way of doing things? It could be said that disclosure IS the safest approach in the long term, since microarchitectures will be hardened. But isn’t the short-term risk too great? We want to be able to live to see the long term. Besides, with possibly billions of affected and unfixable processors in the world, risks would persist in the long term anyway.

The Bottom Line: It seems that any attribute, hardware or software, can be used to detect and affect information or control processes; it is just a matter of detailed ‘discovery’ or invention to figure out how. So let’s not help black-hat hackers speed things up, get there first, and really cause trouble. Let’s just keep it to ourselves.

Acknowledgements: Many thanks to Laurette Bradley for comments and edits, Axelle Apvrille for Spectre-related malware information, and Resit Sendag for comments on an earlier draft of the post.

About the Author: Augustus (Gus) K. Uht is a Professor-in-Residence in the College of Engineering at the University of Rhode Island. He received his PhD from Carnegie-Mellon University, and MEE and BS degrees from Cornell University. His areas of research include adaptive systems and instruction level parallelism. He is a licensed Professional Engineer.

Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.