All the major CPU manufacturers have thrown their lot in with multi-core designs. The (multi-billion dollar) question now is how to program these devices. I can tell you with some confidence that we don't yet know what the answer will be in 10 years. I can't imagine that any single company can reliably solve this problem…and I think the Open Source community is essential to finding the answer. The main reason lies in the relatively unexplored territory of how multi-core programming models interact. If I'm preaching to the choir (though not in a Cathedral…see below), feel free to skip the rest of this. However, if you're still unconvinced, read on. Admittedly, much of this argument is not new, but I think the challenges of multi-core programming create a greater imperative.

In today's parallel programming landscape, we have a variety of approaches that work now, but they all have shortcomings and limitations. In most cases, this isn't so much an intrinsic problem with these languages or tools as a shortcoming in their implementation. Or rather, a shortcoming in our vision: for the most part, as we invented these models, they weren't envisioned or implemented to work together. Getting them to work together isn't trivial, but it is doable in most cases. (For example, we'll often find that the underlying threading runtimes weren't designed to play well with others, but this can be fixed.)

The real problem is that, of these many choices, some will need to be mutated and many combinations will need to be tried. These models can and will be combined in thousands of interesting ways, with many different semantic implications. Each of these efforts will be risky, each more likely to fail than succeed on the way to perfecting the model(s) and language(s) that will ultimately be used for large-scale parallel programming.

Though we do take risks at big companies, they are fairly risk-averse for the most part. Moreover, we tend to leverage our existing development investments as much as possible. This means that a fatally flawed bet (product) is not likely to be readily tossed out, as sound technical "natural selection" would require. The experimental substrate for this evolutionary churn must be real applications, but here again we run into the risks that any (sensible) large software company must be aware of. When developing a new major version of a product, it is highly unlikely that the code base is completely rewritten or even significantly turned over. Estimates vary, but let's assume that major version revisions change (often much) less than 30% of the source base. Given this, how likely is it that a major, risk-averse software developer would rewrite substantial portions (>50%) of an important application to use a combination of parallel programming models? Especially when the initial value of parallel programming (increased performance, versus longer-term feature differentiation) is of limited value to the typical application? How about several such models that have never been used together? This is the great challenge facing us, and it is a daunting one.

For example, in the research labs we are developing a fairly wide range of multi-core-related programming technologies around data parallelism, implicit parallelism, functional programming languages, transactional memory, and speculative multithreading. We have barely begun to think about how these different models interact (we're starting with the Pillar project). So what is the answer?
I have a strong intuition that the answer lies in the open source community, with its iconoclastic brilliance, unabashed bravado, fearless experimentation, enormous energy and (growing) size, and commitment to quality software development. The open source community may well be the only place where parallel programming constructs, models, libraries, and compilers can be deconstructed and recombined at the scale and pace required in the coming years (see The Cathedral and the Bazaar). For recent evidence of this, look at the amazing pace of innovation in web application frameworks (Ruby on Rails is a favorite example). Does this mean we're abandoning differentiation in our bread-and-butter products? Hardly. There are plenty of other components of a platform on which companies can differentiate and compete. As a chip company, we ultimately live and die by leading with our architecture and manufacturing technologies. Programming tools are critical to delivering that value to programmers, but their value is limited to the extent that access to them is limited.
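As a small, purely illustrative sketch (not from the original discussion) of the composition problem described above, consider what happens when two parallel runtimes that know nothing about each other are nested. In the hypothetical C++ example below, a handful of std::thread workers each open their own OpenMP parallel region; because each runtime schedules as if it owned the whole machine, the result can be far more threads than cores (oversubscription). The worker count and output are assumptions made for the example only.

```cpp
// Minimal sketch of two parallel models whose runtimes compose poorly.
// Build with something like: g++ -std=c++17 -fopenmp -pthread example.cpp
#include <omp.h>
#include <cstdio>
#include <thread>
#include <vector>

void worker(int id) {
    // Each explicitly created thread spawns its own OpenMP team by default,
    // so the total thread count multiplies rather than being coordinated.
    #pragma omp parallel
    {
        #pragma omp single
        std::printf("worker %d: OpenMP team of %d threads\n",
                    id, omp_get_num_threads());
    }
}

int main() {
    const int kWorkers = 4;  // hypothetical worker count for illustration
    std::vector<std::thread> threads;
    for (int i = 0; i < kWorkers; ++i)
        threads.emplace_back(worker, i);
    for (auto& t : threads)
        t.join();
    return 0;
}
```

Fixing this kind of interaction typically means teaching the runtimes to share a common thread pool or resource manager, which is exactly the sort of plumbing that, in my view, gets worked out fastest in the open.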