Computer Architecture Today

Security vulnerabilities that can be exploited by malware have been a concern for computer systems since the first computer worm. Recently, several highly impactful attacks reported in the news (e.g., the WannaCry ransomware, the IoT-based botnet DDoS attack on DNS infrastructure) have reminded us of the importance of developing, deploying, and maintaining effective defense mechanisms in computer systems. In this post, I argue that it is time to consider designing and operating computer systems with an “off-by-default” attitude to proactively defend against such attacks.

Today’s reactive defenses

A typical attack exploits vulnerabilities in the software running on a computer system to introduce its own snippet of code, which the system is then tricked into executing. Such attack code can be introduced into a system via the Internet, USB ports, and similar I/O operations. Defense mechanisms often monitor these interfaces to identify potential attack code. (There are also attacks that exploit vulnerabilities in the design of programs, network protocols, etc., such that malicious behavior is triggered even though the software executes faithfully. Such attacks require other defense mechanisms or, preferably, better underlying design, implementation, and verification practices.)

Today’s most widely used defense mechanisms rely on searching for signatures (or heuristics or behaviors) of known attacks. An example is malware scanner software on a desktop computer that searches files for patterns matching known attacks. Similarly, content-inspection firewalls and network intrusion detection systems monitor network traffic for such patterns to prevent malicious traffic from reaching end-systems.
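
To make this concrete, here is a minimal sketch of what signature-based scanning boils down to. The signature database and patterns below are purely hypothetical; real scanners use far more sophisticated pattern matching, heuristics, and behavioral analysis.

```python
# Minimal sketch of signature-based malware scanning: flag any file whose
# bytes contain a pattern from a (hypothetical) database of known attacks.
from pathlib import Path

# Hypothetical signature database; real scanners use far richer patterns.
KNOWN_SIGNATURES = {
    "example-worm": bytes.fromhex("deadbeef4141"),
    "example-trojan": b"EVIL_PAYLOAD_MARKER",
}

def scan_file(path: Path) -> list[str]:
    """Return the names of all known signatures found in the file's contents."""
    data = path.read_bytes()
    return [name for name, pattern in KNOWN_SIGNATURES.items() if pattern in data]

if __name__ == "__main__":
    for p in Path(".").glob("*"):
        if p.is_file():
            hits = scan_file(p)
            if hits:
                print(f"{p}: matches {hits}")
```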

When signatures are used to identify malicious code, network traffic, or I/O operations, attack detection is inherently reactive: the attack must already be known and a signature must have been derived before the defense is effective. Against zero-day attacks, this defense is not effective, since no attack targeting the exploited vulnerability has been seen before and thus no signature exists.

In addition, a major challenge for security through monitoring is the need to dedicate significant system resources to searching for attack signatures. This approach relies on environments where computing resources are abundant and I/O volume is modest compared to the system’s processing power. Many computing systems, especially with the growth of embedded applications, do not meet these criteria: embedded systems lack the compute power (and often the energy budget) to run malware detection software, and I/O-intensive systems would need to scan a disproportionately large amount of data. It is therefore difficult to see how such defense mechanisms can serve these environments.

Proactive security

An alternative view on security and possible defenses is based on the observation that the “root of all evil” is the ability to run new, arbitrary code (e.g., unintentionally introduced malware). Such a path to the processor enables attackers to hijack the system and make it behave in unintended ways. Taking an off-by-default attitude (i.e., disallowing execution unless it is explicitly permitted) can prevent such attacks.
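
As a rough illustration of the off-by-default idea, the sketch below permits execution only for binaries whose cryptographic hash appears on an explicit allowlist; anything else is denied. The allowlist contents are hypothetical, and a real deployment would enforce this policy in the operating system or hardware rather than in a user-level script.

```python
# Sketch of off-by-default execution: a binary runs only if its SHA-256 digest
# appears on an explicit allowlist; anything not listed is denied by default.
import hashlib
import subprocess
from pathlib import Path

# Hypothetical allowlist of approved binaries (SHA-256 digests).
ALLOWED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def run_if_permitted(binary: Path) -> None:
    """Execute the binary only if it is explicitly allowlisted."""
    digest = hashlib.sha256(binary.read_bytes()).hexdigest()
    if digest not in ALLOWED_HASHES:
        raise PermissionError(f"{binary}: not on the allowlist, execution denied by default")
    subprocess.run([str(binary)], check=True)
```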

Are there any computer systems that disallow, or at least limit, the execution of new code? While such a premise may seem counterintuitive at first glance, there are widely used examples of such systems: custom logic circuits, either hard-wired or programmed with a verified bitstream. These devices implement functionality that cannot be changed (unless they contain software-programmable components). A slightly less restrictive version is a processor with hardware monitors that track program execution and compare its operation against predefined, verified patterns (a software sketch of this idea appears below). There are also security-focused, hardened operating systems that reduce the attack surface by limiting functionality. Finally, there are walled gardens that control which applications are allowed into an environment.
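
The hardware-monitor idea can be illustrated in software. In the hypothetical sketch below, the monitor knows the program’s permitted control-flow transitions ahead of time and halts on any deviation; an actual implementation would track execution in hardware alongside the processor.

```python
# Toy sketch of an execution monitor: permitted control-flow transitions are
# known ahead of time, and any deviation halts the system. The block labels
# and transition graph are hypothetical.
PERMITTED_TRANSITIONS = {
    ("entry", "check_input"),
    ("check_input", "process"),
    ("check_input", "error_exit"),
    ("process", "exit"),
}

def monitor(observed_trace: list[str]) -> None:
    """Raise if the trace takes a transition outside the verified pattern."""
    for src, dst in zip(observed_trace, observed_trace[1:]):
        if (src, dst) not in PERMITTED_TRANSITIONS:
            raise RuntimeError(f"unexpected transition {src} -> {dst}: possible code injection")

monitor(["entry", "check_input", "process", "exit"])          # conforms, passes
# monitor(["entry", "check_input", "injected_code", "exit"])  # would raise
```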

These proactive security mechanisms are effective even against zero-day attacks since they limit what a system can do. In all cases, executing arbitrary code (such as an attack) is either not possible (custom logic) or requires additional, conscious effort to enable (hardware monitors). Hardened operating systems and walled gardens likewise limit the set of applications that can be executed in the first place. These limitations significantly reduce the opportunities afforded to attackers and thus provide more secure environments.

Freedom

“But wait!”, you may say, “Doesn’t this mean that we have to give up on our freedom to quickly and easily run arbitrary code on a system? How could we live in such a world?”

It is true that today’s computer systems are designed to make it simple to execute new code. However, most users do not write their own programs or scripts; they simply use a selection of applications that meet their needs. For these users, the freedom to select among vetted applications may strike a suitable balance between freedom of choice and practical security needs.

Also, note that proactive security does not imply an environment where a single entity (such as a government organization) has a monopoly on deciding who can run what code. Just as our operating systems already ship with multiple trusted certificate authorities installed by default, we could have multiple entities that certify code for execution. Users would thus not be limited to a sole provider (or authorizer) of the verified code that can run on a system.
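
As a sketch of this multi-authority model, consider code that may run if it carries a valid signature from any one of several trusted certifying entities, much like operating systems trust many root certificate authorities. The authorities and keys below are hypothetical, and the example uses the third-party `cryptography` package for Ed25519 signatures.

```python
# Sketch: code is permitted to run if *any* of several trusted certifying
# entities has signed it. Authorities and keys here are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-ins for independent code-certifying authorities
# (e.g., an OS vendor, an enterprise IT department, an industry consortium).
authorities = {name: Ed25519PrivateKey.generate()
               for name in ("vendor", "enterprise", "consortium")}
TRUSTED_KEYS = {name: key.public_key() for name, key in authorities.items()}

def is_execution_permitted(code: bytes, signature: bytes) -> bool:
    """Allow execution only if at least one trusted authority signed the code."""
    for name, public_key in TRUSTED_KEYS.items():
        try:
            public_key.verify(signature, code)
            print(f"code certified by {name}")
            return True
        except InvalidSignature:
            continue
    return False  # off by default: unsigned or unrecognized code never runs

code = b"print('hello')"
signature = authorities["enterprise"].sign(code)
assert is_execution_permitted(code, signature)
assert not is_execution_permitted(b"tampered payload", signature)
```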

A lesson from the Internet

Taking a proactive security approach across all of our critical infrastructure would require a fundamental shift in how we manage and operate computer systems. This may be difficult and complex to achieve in practice. To see that such fundamental shifts are indeed possible, consider computer networks.

The fundamental design of the Internet assumed that any end-system can send network traffic to any other end-system (“on-by-default”). This approach was suitable during the development and academic use of the platform, but less so once global, commercial use became the norm. With the recent introduction of software-defined networking (SDN), the fully open, distributed network has moved toward a more centrally controlled mode of operation: in SDN, any connection through the network needs to be explicitly set up (and thus authorized) by the SDN controller. New connections are forwarded to the controller for approval (and to determine a path through the network). The SDN controller has thus become the central authority (at least within the network where SDN is deployed) that decides which traffic can traverse the network and along which path. This off-by-default operation is far from the original design of the Internet. Yet data center operators and network providers are enthusiastically deploying software-defined networks because of the added control this mode of operation provides.
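
The controller’s off-by-default decision logic can be sketched in a few lines. This is not code for any particular SDN framework; the hosts, policy entries, and path computation are hypothetical placeholders that only illustrate the decision an SDN controller makes when a switch reports a new flow.

```python
# Toy sketch of an off-by-default SDN controller decision: only flows covered
# by an explicit policy entry get a forwarding path installed; all other
# traffic is dropped. Policy entries and paths are hypothetical.
PERMITTED_FLOWS = {
    ("10.0.0.1", "10.0.0.7", 443),  # (src, dst, dst_port) allowed by policy
    ("10.0.0.3", "10.0.0.9", 22),
}

def compute_path(src: str, dst: str) -> list[str]:
    """Placeholder for the controller's routing logic (e.g., shortest path)."""
    return [src, "switch-1", "switch-2", dst]

def handle_new_flow(src: str, dst: str, dst_port: int) -> dict:
    """Decide what to do with a flow the switch has not seen before."""
    if (src, dst, dst_port) in PERMITTED_FLOWS:
        return {"action": "install_flow_rules", "path": compute_path(src, dst)}
    return {"action": "drop"}  # off by default

print(handle_new_flow("10.0.0.1", "10.0.0.7", 443))  # path installed
print(handle_new_flow("10.0.0.5", "10.0.0.7", 80))   # dropped
```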

Where do we go?

The global interconnectedness of computer systems and their interaction with valuable information and physical resources have made them attractive targets for attack. In such an environment, it may be time to shift away from the philosophy of allowing the execution of arbitrary code to an off-by-default attitude that disallows code execution unless it is explicitly permitted. The cost of this change is the added complexity of enabling new code when it is introduced into a system. However, limiting the functionality of computer systems to well-known operations that can be validated is a more effective defense against known and unknown attacks than today’s reactive approaches. What we need to decide as a community is whether giving up this freedom (or is it just convenience?) is worth the gain in security.

Tilman Wolf is Senior Associate Dean and Professor of Electrical and Computer Engineering at the University of Massachusetts Amherst.

Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.