Computer Architecture Today

Informing the broad computing community about current activities, advances and future directions in computer architecture.

Richard Thaler won the 2017 Nobel Prize in Economics for his work on Behavioral Economics. He observed that humans are not “rational” creatures and that our behavior is shaped by how we react to the world. People have a myriad of biases that influence how we think about problems, and he developed models to quantify the impact of behavior on economic theories. In his book “Misbehaving: The Making of Behavioral Economics”, he cites example after example of how human behavior does not follow economic expectations based on rational behavior, resulting in flawed economic models.

Engineers, like economists, deal with large amounts of data, and we pride ourselves on our clinical ability to analyze and solve complex problems. However, there are many examples where our underlying human nature impedes us from adequately understanding or addressing problems. Humans suffer from Psychological Myopia, the tendency to think short-sightedly: we focus more on things that are closer to us in time and experience, and on information that is immediately related to our judgment, while ignoring other, less prominent, pieces of information. The rest of this article presents some examples of this behavior and how it impacts our field, whether we care to admit it or not.

Present Bias

In his book, Thaler uses the term “present bias” to describe how people discount the future and give greater weight or value to decisions that are closer in time. For instance, given a choice, people are more likely to spend money on a nice vacation now than to save it for retirement many years in the future. Computer architects also display present bias in a myriad of ways, discounting both the future and the past. In industry, generally due to time-to-market restrictions, we tend to think extremely short term, favoring evolutionary design changes over riskier revolutionary ideas that have a longer timeline for returns. However, a similar near-term bias exists in academia: it is much more difficult to publish truly risky, revolutionary research due to implicit filters in what gets funded and what makes it past a program committee.

Discounting the Past

Present bias also makes us blind to the past. We assume that the way the world is, is the way the world is supposed to be. Consider the current discussion taking place in our community and in Silicon Valley about increasing gender diversity. One of the premises expounded upon (link) is that engineering/computer science does not appeal to young women, so they choose other careers; hence the lack of women in our field. What these articles miss is that this was not always the case. According to the article “When Women Stopped Coding”, the percentage of women in Computer Science 30 years ago was nearly twice what it is now.

With respect to research, most of us are hyper-aware of papers from the last few years and less aware of what happened 5, 10, or 30 years in the past. I had a professor in grad school who used to joke that all of architecture is reinvented every 5 years. Virtualization, for instance, was being addressed by IBM in the 1960s. Power and cooling were critical research topics in the 1980s; the adoption of CMOS provided a respite for a decade or so before they once again became a primary constraint (link) in the late 1990s. Both virtualization and power burst onto the architecture community's agenda seemingly out of nowhere, even though there was a clear historical basis and trend for both. Without knowing what problems and solutions were investigated in the distant past, we are bound either to miss opportunities to see problems ahead of time or to reinvent solutions.

As an aside, the lack of historical insight is not limited to our field. Thaler notes that Behavioral Economics, the area in which he won a Nobel Prize, was first described over 200 years ago by the father of modern economics, Adam Smith. However, Smith’s observations were either forgotten or not given significant weight until Thaler and others began quantifying behavior in the 1970s.

Myopic Judgment

Another aspect of Psychological Myopia is the focus on details that are immediately related to our area of expertise. In industry, engineers (myself included) sometimes attack a problem with the tools they have readily available rather than taking a step back and examining the issue at large. Hardware engineers design and implement solutions in RTL, while software engineers attempt to solve the problem at either the OS or the application level. This results in band-aid fixes that do not fully address the issue, are painful to maintain, and generally do not scale through the product lifecycle. Sometimes this is necessitated by time constraints, but not always. The same myopic view clouds us as researchers. I have participated in program committees where we could not adequately evaluate a paper because it was outside our areas of expertise. Sometimes the PC chair(s) corral outside experts who can assess the paper's value and relevance; at other times, the PC's response is to push the paper to another conference. This results in self-selection of topics and ideas, and isolates us from other branches of research.

Theory-Induced Blindness

Self-selection leads to a narrowing of the field, which has its own set of issues. Without new insights and fresh viewpoints from other fields and researchers, we end up with what Daniel Kahneman, another Nobel laureate in economics, refers to as theory-induced blindness: “Once you have accepted a theory, it is extraordinarily difficult to notice its flaws”. Unfortunately, we are now experiencing the effects of theory-induced blindness about speculative execution with the discovery of the Meltdown and Spectre vulnerabilities.

Architects have always been aware of the impact speculative execution has on the performance of microarchitectural structures, from branch predictors to caches. I recall heated discussions 20+ years ago on whether speculative data in caches improves or degrades cache performance. However, we did not adequately consider speculative execution's impact on security. There was a smattering of security papers in architecture conferences that covered the basic building blocks necessary to expose these vulnerabilities, such as cache side-channel attacks, but they were either too old or too intermittent to significantly influence the community at large. Once we accepted speculative execution as the natural mechanism to improve performance, we stopped considering the flaws it exposes. We believed existing hardware and OS protocols protected the processor. However, within the course of 6 months, four different groups independently developed techniques to exploit the obvious vulnerabilities resulting from speculative execution (link). What is remarkable is not that these groups were able to find the vulnerabilities, but that the vulnerabilities stayed hidden for over two decades. As Anders Fogh, one of the people who drove this research, notes, given the complexity of processors, people will find other vulnerabilities (link).
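To make the mechanism concrete, below is a minimal sketch of the pattern behind Spectre variant 1 (bounds-check bypass). The names (victim_function, array1, array2) are hypothetical and chosen for exposition, not taken from any published proof of concept, and the sketch omits the timing measurement an attacker would use to read the cache state afterward.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative Spectre v1 (bounds-check bypass) gadget.
 * All names are hypothetical, for exposition only. */

uint8_t array1[16];          /* in-bounds data the code may legally read */
uint8_t array2[256 * 4096];  /* probe array: one page per possible byte  */
size_t  array1_size = 16;

void victim_function(size_t x) {
    /* Architecturally, this bounds check makes the access safe.
     * Microarchitecturally, the branch predictor may speculate past
     * the check when x has been in-bounds on prior calls. */
    if (x < array1_size) {
        /* During misspeculation, x can index out of bounds; the byte
         * at array1[x] then selects which line of array2 is pulled
         * into the cache. The load is squashed architecturally, but
         * the cache state persists, and a side channel (e.g. timing
         * accesses to each page of array2) can recover the byte. */
        volatile uint8_t tmp = array2[array1[x] * 4096];
        (void)tmp;
    }
}

int main(void) {
    /* Train the predictor with in-bounds calls, then supply an
     * out-of-bounds index; a real attack would time array2 next. */
    for (size_t i = 0; i < 64; i++)
        victim_function(i % array1_size);
    victim_function(100);  /* speculatively reads past array1 */
    return 0;
}
```

The unsettling part is how ordinary this code looks: the bounds check is architecturally correct, and the leak lives entirely in microarchitectural state that our abstractions taught us to treat as invisible.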

Call to Action

As companies scramble to fix these security holes, the questions that must be asked are why we did not consider these and other issues earlier, and what we can do to change how we approach architecture so that we are not caught by surprise once again. In retrospect, the vulnerabilities exposed by Spectre and Meltdown are obvious, yet hundreds of highly skilled engineers, professors, and students were blind to them for decades. I expect a plethora of new research on security and vulnerabilities at future conferences, but what about the next problem, the one we should be evaluating now but do not have the insight to see? Should we consider more interdisciplinary conferences such as ASPLOS, or give fuller weight to ideas that are more speculative and risky (no pun intended)? Should we re-examine historical papers from decades past in the hopes of seeing problems and solutions in a new light? How do we push ourselves out of our comfort zone(s) and examine the expanding role of architecture as it intersects other disciplines? How do we put processes in place so that we can move past our inherent psychological myopia?

About the author: Dr. Srilatha (Bobbie) Manne has worked in the computer industry for over two decades in both industrial labs and product teams. She is currently a Principal Hardware Architect at Cavium, working on ARM server processors.

Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.