I recently went through old files in preparation for the Turing Award Lecture on June 4, and discovered a paper that was rejected by IEEE Computer yet was a stepping stone to Reduced Instruction Set Computers: “The Case for a New Control Store for the Next Generation of VLSI Systems.”
Before RISC, most computers used microprogramming for control, which was essentially an interpreter of the instruction set written in a low-level assembly language and kept in a fast memory inside the processor. (Some younger architects may be unaware that the SIGMICRO conference was originally the Annual Workshop for Microprogramming.) A microinstruction might be 50 to 100 bits wide, with each bit corresponding to one control wire, and a microprogram might contain 1K to 4K microinstructions.
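For readers who have never seen a microengine, here is a minimal sketch in C of how such an interpreter works; the field names, widths, and dispatch convention are all hypothetical, not from any real machine:

```c
#include <stdint.h>

/* A hypothetical microinstruction: each field drives a set of control
   wires in the datapath. Real microinstructions were 50-100 bits wide. */
typedef struct {
    uint64_t alu_op     : 4;   /* select the ALU function              */
    uint64_t reg_src    : 5;   /* register-file read port              */
    uint64_t reg_dst    : 5;   /* register-file write port             */
    uint64_t mem_read   : 1;   /* assert the memory-read control wire  */
    uint64_t mem_write  : 1;   /* assert the memory-write control wire */
    uint64_t next_uaddr : 12;  /* address of the next microinstruction */
} MicroInstr;

/* The control store: a few K microinstructions in fast on-chip memory. */
enum { CONTROL_STORE_SIZE = 4096 };
static MicroInstr control_store[CONTROL_STORE_SIZE];

/* The microengine interprets one macro-instruction: the opcode indexes
   a dispatch table to find the microcode routine that implements it,
   then each microinstruction asserts its control wires for one cycle. */
static void execute_macro_instruction(const uint16_t dispatch_table[256],
                                      uint8_t opcode) {
    uint16_t uaddr = dispatch_table[opcode];  /* routine entry point */
    while (uaddr != 0) {     /* microaddress 0 ends the routine, by convention */
        MicroInstr ui = control_store[uaddr];
        /* ...decode the fields and drive the datapath for one cycle... */
        uaddr = ui.next_uaddr;
    }
}
```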
My PhD dissertation was about formally verifying microprograms, and included a new higher-level language and compiler to make it easier to write microcode. I was invited to spend a sabbatical from Berkeley in the Fall of 1979 at Digital Equipment Corporation to help with the microcode bugs in the newly announced VAX minicomputers. I came away impressed by just how difficult it was to debug the microcode of an instruction set as complicated as the VAX’s.
When I returned, I wrote the paper, which argued that if microprocessor manufacturers followed the minicomputer companies’ trend toward more complex instruction sets with much bigger microprograms, there would have to be a way to repair the microprocessors in the field. My proposal was to keep the microcode for the small number of frequently used instructions in ROM, and to have a small amount of RAM act as a cache for the rest of the microcode, which would reside in main memory along with the program.
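A minimal sketch of that mechanism in C, with the sizes, names, and ROM/cache split all illustrative rather than taken from the paper:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t MicroInstr;   /* a microinstruction, packed into one word */

enum { UROM_SIZE = 1024, UCACHE_LINES = 256 };

/* ROM holds the microcode for the small number of frequently used
   instructions; it is fixed at fabrication and never needs repair. */
static const MicroInstr urom[UROM_SIZE];

/* A small on-chip RAM acts as a cache for the rest of the microcode,
   which lives in main memory alongside the program and can therefore
   be patched in the field like any other software. */
typedef struct { bool valid; uint32_t tag; MicroInstr data; } UCacheLine;
static UCacheLine ucache[UCACHE_LINES];

/* Stand-in for reading microcode out of main memory (a real design
   would issue a memory access here). */
static MicroInstr microcode_in_main_memory(uint32_t uaddr) {
    (void)uaddr;
    return 0;
}

static MicroInstr fetch_microinstruction(uint32_t uaddr) {
    if (uaddr < UROM_SIZE)                 /* frequent instructions: hit in ROM */
        return urom[uaddr];

    uint32_t line = uaddr % UCACHE_LINES;  /* the rest: check the RAM cache */
    if (ucache[line].valid && ucache[line].tag == uaddr)
        return ucache[line].data;

    MicroInstr ui = microcode_in_main_memory(uaddr);  /* miss: fill from DRAM */
    ucache[line].valid = true;
    ucache[line].tag   = uaddr;
    ucache[line].data  = ui;
    return ui;
}
```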
The paper was rejected, which is hard to swallow when you’re an assistant professor. My recollection is that the reviewers said it was a stupid way to build microprocessors. So I was faced with two conflicting thoughts:
- More complex instruction sets would likely lead to microcode bugs that would either slow development or need to be repaired in the field.
- Slow development and repairable microprocessors are bad ideas.
Two months later I taught an advanced graduate course that investigated appropriate architectures for microprocessors. Not only did we end up not following the conventional wisdom of more complicated instruction sets to close the “semantic gap” between high-level languages and hardware, we investigated going in the other direction: make the instruction set as simple (but not as wide) as microinstructions, so that you don’t need an interpreter and can instead repurpose that fast memory inside the processor as an instruction cache. C compilers would generate these simple microcode-like instructions, which is similar to what I did in my dissertation, but using a standard programming language instead of an ad hoc one invented for microprogramming.
After the class was over and we had promising early results, I wrote “The Case for the Reduced Instruction Set Computer” with my student Dave Ditzel. I sent a draft to some DEC friends from my sabbatical for comment, which inspired Doug Clark and Bill Strecker to write a rebuttal. The companion papers appeared in September 1980—six months after I wrote “The Case for a New Control Store for the Next Generation of VLSI Systems”—in Computer Architecture News, the forerunner of this blog. These opposing papers launched the RISC-CISC “war” that drove the instruction set debates for the next several years.
DEC’s later solution was to subset the full VAX instruction set to reduce the size and complexity of the microprogram in the MicroVAX, trapping to software to execute the unimplemented instructions. As the 80x86 instruction set grew over the decades, AMD and Intel eventually included a mechanism to repair the microcode using some on-chip RAM.
In retrospect, I’d like to thank the reviewers for rejecting the control store paper, as that led to a much better set of ideas. Quoting Mick Jagger of the Rolling Stones:
“You can’t always get what you want, but if you try sometimes, you just might find you get what you need.”
About the Author: David Patterson is a Professor of the Graduate School in Computer Science at UC Berkeley and a Distinguished Engineer at Google.
Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.