In my last blog post, I discussed the importance of offensive security research. Given the state of security today, I don't think I need to convince anyone of the importance of defensive security research. Instead, in this post, I will discuss how (systems) security papers are evaluated today and how that process can be improved. To do so, for humorous effect only, I will exaggerate and vilify a hypothetical security paper reviewer: the all-powerful reviewer, Rev. B, who doesn't believe a whole lot in positive reviewing and doesn't have time for nonsense. Rev. B may also think that too many weak security papers are published. This article presents some counterpoints to Rev. B's views.
Rev. B's View: Defenses must be a response to known or popular attacks.
Reality: Indeed, there is some validity to the argument that if a paper defends against an unknown or unpublished attack, the defense may end up being too strong, i.e., more expensive than it needs to be, or may focus on aspects of the system that attackers cannot exploit for operational reasons. That said, if we do not accept works that address theoretical or unpublished attacks, we will never break out of the reactive security mindset.
Particularly in our field, where we have to prototype defenses in silicon (which currently takes years), it is important and useful to anticipate futuristic but realistic attacks. Papers that ask "what if X is compromised?" and provide defenses are vitally important even if X has not been shown to be compromised. As long as there is a reasonable, realistic chance of X being compromised, we should encourage these papers. Authors of proactive defenses can help reviewers by clearly defining a concrete threat model, and reviewers can examine the viability of that threat model based on their experience and expertise.
Rev. B's View: Defenses must aim to counter the strongest attackers. Anything less is useless.
Reality: There is a tendency to think that the only defenses worth pursuing are those that offer protection against the strongest attackers: for instance, attackers that are computationally unbounded, Byzantine, and adaptive, or at the very least state-of-the-art, cutting-edge attackers.
Attackers, however, come in all flavors, with huge variation in the resources they have, their motivations and technical skills, and the time they are willing to invest in breaking systems. On one end of the spectrum, attackers can be script kiddies or copycats who download and run others' programs; on the other end, they can be nation states with large budgets that combine many forms of intelligence to achieve their goals.
Given the diversity of attackers, there is still value in deterring a subset of attackers even if they are not the most powerful. This is just like how we build houses: we design them to keep out petty thieves, but we don't (always) build them to survive a nuclear attack. To say this differently, if the cost of the defense is small compared to the cost it imposes on the attacker, then designing for weaker attackers is a very reasonable engineering choice.
Further, the mere existence of an advanced attack does not mean that only that attack technique will be deployed by attackers and that older attack techniques fall by the wayside. Attack engineering is at least as difficult as regular software engineering. Thus, lower-end, simpler attacks persist because advanced attacks often require sophisticated engineering and orchestration, which can make them brittle. The exception is when an advanced attack can be automated and shown to work on a large class of systems; only then does it invalidate prior defenses.
Rev. B’s View: Security is a binary property: it exists or it doesn’t. There is no point in publishing a defense with a weakness.
Reality: Practically speaking, security is a unary property: no system is secure! The aim of most defensive papers is simply to make it harder for attackers to attack the system. This goal of raising the bar is not too different from traditional computer systems goals such as energy efficiency or performance. For instance, when evaluating a technique for energy-efficiency improvements, we don't ask how the improvement compares to the theoretical best energy efficiency but only how it compares to the state of the art. It is really important to have a similar mindset when evaluating security papers. The main question when evaluating such a paper is how it improves on state-of-the-art defenses under the same threat model.
Rev. B’s View: Defenses should be formally specified and have mechanical proofs of security.
Reality: Of course, this should be done when there is a real benefit. In the context of systems security, however: (1) formal specifications and mechanical proofs of security require a large number of axiomatic assumptions, many of which may not hold in reality; (2) logics may not exist to describe complex systems realistically or cost-effectively, or to offer practical new insights over more traditional methods of reasoning; (3) while it is true that there have been some very expensive failures despite traditional testing, it is also true that there are far more cases where traditional testing has succeeded at avoiding danger; and finally, (4) natural-language descriptions, while ambiguous, are more accessible (to attackers and defenders alike), appear to be the DSL of choice for human neural networks, and have worked reasonably well for several thousand years.
All this said, formalizing axioms and proving security properties forces one to think about the same system in a different way, which may expose hidden problems. As we make progress towards reducing the cost of specifying systems formally, training engineers to think formally, and scaling proof methods to broader classes of systems, there will be substantial benefits to formal specifications and proofs.
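As a toy illustration of that mindset shift, here is a minimal sketch in Lean 4. The two-role policy and all names in it are hypothetical, mine rather than any paper's: the point is that once the policy is formalized, "only admins can write" stops being a comment and becomes a proof obligation that forces every case to be considered.

```lean
-- A toy sketch (Lean 4); the two-role policy below is hypothetical.
inductive Role where
  | admin
  | user

def canWrite : Role → Bool
  | .admin => true
  | .user  => false

-- The informal claim "only admins can write" becomes a checked theorem,
-- and the case analysis forces every role to be accounted for.
theorem only_admin_writes (r : Role) (h : canWrite r = true) :
    r = Role.admin := by
  cases r with
  | admin => rfl
  | user  => exact Bool.noConfusion h
```

Even at this scale, writing the theorem forces a question the prose version hides: what exactly should happen for every role other than admin?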
Rev. B’s View: Solutions that have false positives are unacceptable in the real world.
Reality: Solutions with false positives (and false negatives) do work in reality: credit-card fraud detection and email spam detection are two well-known examples. Moreover, it is possible to filter out false positives by combining multiple techniques (as in ensemble learning) or by correlating results across a large number of installations, as the sketch below illustrates.
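Here is a minimal sketch (Python) of why combining detectors helps. It assumes three hypothetical detectors that each flag events with a 5% false-positive rate and that err independently; both the rates and the independence assumption are illustrative, not drawn from any real deployment.

```python
from itertools import combinations

def majority_fp_rate(rates, k):
    """False-positive rate when at least k detectors must agree,
    assuming the detectors err independently."""
    n = len(rates)
    total = 0.0
    # Sum the probability of every combination of >= k detectors
    # firing together (and the rest staying silent).
    for m in range(k, n + 1):
        for firing in combinations(range(n), m):
            p = 1.0
            for i in range(n):
                p *= rates[i] if i in firing else (1.0 - rates[i])
            total += p
    return total

rates = [0.05, 0.05, 0.05]          # hypothetical per-detector FP rates
print(majority_fp_rate(rates, 1))   # any single detector fires: ~0.1426
print(majority_fp_rate(rates, 2))   # 2-of-3 vote required:      ~0.00725
```

The same arithmetic explains why correlating alerts across many installations is effective: independent errors rarely coincide.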
Rev. B's View: Heuristic defenses have a short shelf life and will be broken within the next conference cycle. There is little value in accepting these papers.
Reality: Even if individual defenses have a short shelf life, many such "broken" defenses can be combined in ways that still defeat adversaries: an attacker who must bypass several imperfect defenses at once faces much worse odds than one facing any single defense, as the sketch below suggests.
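To make the intuition concrete, here is a minimal sketch (Python) under a strong assumption: the attacker must bypass every layer, and the bypass odds are independent, so the probabilities multiply. The layer count and probabilities are hypothetical.

```python
def attacker_success(bypass_probs):
    """Probability of bypassing all layers, assuming each layer's
    bypass is an independent event."""
    p = 1.0
    for prob in bypass_probs:
        p *= prob
    return p

# Three "broken" heuristic defenses, each bypassable 40% of the time,
# still leave a stacked attacker with only a ~6% chance of success.
print(attacker_success([0.4, 0.4, 0.4]))  # 0.064
```

Real layers are rarely fully independent, so the stacked benefit is usually smaller than this sketch suggests, but the direction of the effect is the point.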
Before I end, it is very important to say that the health of the community is better than the caricature of Rev. B may suggest. Based on my experience on nearly twenty top computer architecture and computer security program committees spanning over ten years, I can say with confidence that Rev. B does not exist in totality. However, one can occasionally see shades of Rev. B in conference reviewers. This article, I hope, will not only provide alternate viewpoints to fight the Rev. B tendencies in us but also provide some insights into how defensive security papers are reviewed.
About the Author: Simha Sethumadhavan is on the faculty at Columbia University. His work is in Computer Architecture, Computer Security, and how architecture can be used to improve security. His website is http://www.cs.columbia.edu/~simha, and he is @thesimha on Twitter.
Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.