Groundhog Day: A Personal Perspective on Multi-core Computing

In the 1993 comedy “Groundhog Day”, Bill Murray finds himself reliving the same (eponymous) day again and again until he mends his ways and becomes a better person.

Nearly twenty years ago, when I entered graduate school, parallel computing was the hot topic, albeit for loop-y, Fortran-based scientific computing. There was a renaissance in supercomputer architectures, spanning both commercial and research projects, and at the same time an explosion in work on parallel programming models. By the time I graduated, in the post-Cold War era, supercomputing companies were going out of business and I was looking for something else to do.

(Being application-oriented, I decided to move into biotech, where I worked on applying 3D object-recognition techniques to drug design. It was fascinating work, bridging A.I.-ish pattern recognition and real “wet” science.)

Within a few years, a grad-school buddy and I decided that we could make parallel computing work in consumer electronics, and we co-founded a fabless chip company building parallel DSPs for cameras and printers. The idea was to replace fixed-function ASICs with programmable hardware. We built a prototype, designed a programming stack, and were deep into customer engagements when the rest of the tech industry slumped on the heels of the dot-com bust. That ended that.

After joining Intel, I spent some time working on memory-hierarchy design issues until parallelism began to emerge once again, this time in the guise of dual-core, quad-core, and, now, tera-scale systems.

This time, however, things are different. While Moore’s Law scaling continues, power scaling has slowed, meaning that the power/performance ratio is a first-order design consideration. Given this reality, going multi-core in the mainstream makes sense for the long haul.

But, there’s a catch: If we want Groundhog Day to end, we have to make programming these devices accessible to the average programmer.

This is an exciting time for language and tool developers because of our role in translating multi-core and tera-scale performance into opportunities for application developers. Because we “live” at the nexus of rapidly evolving hardware and software technologies, we have to infuse our efforts in the relatively obscure arts of parallel programming and compiler design with a healthy dose of application and microarchitecture expertise.

In coming blog entries, I look forward to discussing how we’re working to make this happen.
