As computer architects, we rely on a number of principles for building robust computer systems. Some of these principles have been around for so long that we often take them for granted. In this blog post, through the lens of pressing problems of today, we'll revisit one such principle and discuss whether it may have applications beyond computer system design, in areas like news and transportation.
But first, a quiz to get you thinking: Do you know the first computer that had clocking? And do you know what necessitated clocking? (Answers at the end of the post).
While you think about that, let’s talk about fake news, and a possible remedy.
You can barely navigate the web without being inundated by information from troll farms or authoritative-sounding but biased sources that intentionally obfuscate to misinform. Today's demand for immediate information has put a strain on the media to break news on a second-by-second basis. Outlets' desire to outmatch one another often leads to news being published without proper verification. In other words, the asynchronous generation, distribution, and consumption of news is like having an out-of-order processor that not only performs speculative execution, but also commits instructions speculatively!
The solution may seem simple: all journalists should fact check before they publish. If only it were that easy. Unfortunately, the incentives are not set up to reward fact checking. Quite the opposite: breaking exciting news faster and luring more users with clickbait titles for advertisers' benefit is what's rewarded. A mechanism is needed to level the playing field so that everyone has enough time to fact check before publishing.
One possible solution: disseminate news on social media only every N hours. This would give organizations time to verify the flood of information, potentially improving the signal-to-noise ratio of reporting. Undoubtedly, this raises concerns about timeliness. However, the right accuracy-versus-timeliness tradeoff depends on a story's content: weather and traffic can be served closer to real time, while more consequential stories can be released on an appropriate periodic schedule.
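To make the idea concrete, here is a minimal sketch of such a clocked release queue. Everything here (the ClockedNewsFeed name, the default period, the verified flag) is hypothetical, not a real system: stories accumulate in a buffer, and readers only see them on the next "clock edge", which guarantees a fact-checking window before anything goes live.

```python
import time

class ClockedNewsFeed:
    """Toy model of periodic news release: stories accumulate in a
    buffer, and readers only see them on the next clock edge, giving
    fact checkers a guaranteed window before anything is published."""

    def __init__(self, period_hours=6):
        self.period = period_hours * 3600   # clock period in seconds
        self.next_edge = time.time() + self.period
        self.pending = []                   # submitted, not yet released

    def submit(self, story, verified=False):
        """A newsroom files a story; it stays invisible until a clock edge."""
        self.pending.append({"story": story, "verified": verified})

    def tick(self, now=None):
        """Call periodically. On a clock edge, release verified stories
        and hold unverified ones for the next edge."""
        now = time.time() if now is None else now
        if now < self.next_edge:
            return []                       # between edges: nothing changes
        self.next_edge = now + self.period
        released = [p["story"] for p in self.pending if p["verified"]]
        self.pending = [p for p in self.pending if not p["verified"]]
        return released
```

The content-dependent tradeoff mentioned above could be captured by giving each story category its own period (weather near real time, consequential stories on a longer clock) rather than one global edge.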
Can you identify the computer architecture principle used here? (Answers at the end of the post).
Next, let us consider a different problem that will undoubtedly grow in significance in the next few years: autonomous vehicles.
In general, autonomous vehicles do more good than harm: they can reduce accidents substantially and are more comfortable. An underappreciated problem with autonomous vehicles, however, is that they introduce systemic failures. For instance, malware, a worm, or even a bad software update may affect a large number of vehicles all at once. In contrast, today, when a vehicle breaks down, it is usually a random, isolated event, and odds are that when the check engine light turns on there's a shoulder to pull over to. What computer architecture principles can help us solve the systemic failure problem?
A significant problem is that autonomous cars have to account for an effectively infinite number of possibilities in the environment in order to provide a safe and comfortable ride. This is nearly impossible. But what if we could control the environment? What if we flipped the locus of safety from the vehicle to the road? What if we could architect roadways the way we architect processor pipelines?
Like the stages of a processor pipeline, each vehicle could occupy a well-defined slot of roadway and take actions, such as changing lanes, on a clock signal. This slots-and-clocks approach would not only make vehicle coordination simpler to reason about, but could also open up the notion of roadway virtualization, likely increasing capacity and improving safety. For example, consider a road without a shoulder: with virtualization, we could reserve special slots that function as "virtual shoulders", increasing the safety of a road that may not have been designed with one in the first place. The sketch below illustrates the idea.
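As a toy illustration, here is a minimal discrete-time simulation of the scheme. The slot count, the reserved index, and the function names are all made up for illustration: a lane is a fixed array of slots, every vehicle advances one slot per clock tick, and a reserved slot acts as a virtual shoulder that is never occupied.

```python
NUM_SLOTS = 10
VIRTUAL_SHOULDER = {4}              # slot reserved as a "virtual shoulder"

def tick(lane):
    """One clock edge: every vehicle tries to advance one slot.
    Reserved shoulder slots are skipped over, and a blocked vehicle
    stalls in place, much like a bubble in a processor pipeline."""
    new_lane = [None] * NUM_SLOTS
    for i in range(NUM_SLOTS - 1, -1, -1):   # front vehicles move first
        vehicle = lane[i]
        if vehicle is None:
            continue
        nxt = i + 1
        while nxt in VIRTUAL_SHOULDER:       # never occupy the shoulder
            nxt += 1
        if nxt >= NUM_SLOTS:
            continue                         # vehicle exits this segment
        if new_lane[nxt] is None:
            new_lane[nxt] = vehicle          # advance on the clock edge
        else:
            new_lane[i] = vehicle            # stall: slot ahead is taken
    return new_lane

lane = [None] * NUM_SLOTS
lane[0], lane[1] = "car_A", "car_B"
for t in range(6):
    lane = tick(lane)
    print(f"tick {t}: {lane}")
```

Because all movement happens on the clock edge, coordination reduces to checking one slot ahead, and an emergency stop is just a transfer into the reserved shoulder slot.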
Can you identify the closest computer architecture analogue to the ideas previously discussed?
Perhaps the most fundamental of computer architecture principles is clocking. The notion of dividing time into discrete intervals and processing events at set points in time has played an invisible but important role in designing digital computer systems that gracefully handle conditions unknown at design time. Even the earliest computers, like the ENIAC, had a notion of a clock. What originally necessitated the use of clocking all those years ago? Simply put, engineers needed a way to equalize the delays across different circuits. This masterful engineering trick, though suboptimal, provides a way to control physical uncertainties. Perhaps applying computer architecture principles such as clocking outside the digital realm can help us take a step closer to handling the serious challenges we face today.
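Here is a minimal sketch of that delay-equalization idea, with every name and number invented for illustration: three "circuits" finish their work at unpredictable times, but their outputs only become visible together on a common clock edge, so downstream logic never observes a half-updated state.

```python
import random

CLOCK_PERIOD = 10.0                  # must exceed the worst-case delay

def combinational_stage(inputs):
    """Each 'circuit' computes with a different, uncertain delay."""
    return [(x + 1, random.uniform(1.0, 9.0)) for x in inputs]

def clock_edge(results):
    """Latch all outputs at once: the clock equalizes unequal delays."""
    assert all(delay < CLOCK_PERIOD for _, delay in results), \
        "timing violation: a circuit was slower than the clock period"
    return [value for value, _ in results]

latched = [0, 10, 20]
for cycle in range(3):
    latched = clock_edge(combinational_stage(latched))
    print(f"cycle {cycle}: {latched}")
```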
About the Authors: Simha Sethumadhavan is an associate professor at Columbia University. His website is http://www.cs.columbia.edu/~simha, and his twitter is https://www.twitter.com/thesimha. Miguel Arroyo is a third year PhD student at Columbia University. His research interests are in computer security, computer architecture, and cyber physical systems. His website is https://miguel.arroyo.me.
Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.