About a year ago, Intel and Microsoft each invested $10M to jointly fund Universal Parallel Computing Research Centers at UC Berkeley and the University of Illinois, with the goal of making parallel programming mainstream in future client software. I’ve had the pleasure of attending updates where each reported on its first year’s efforts. Clay Breshears’ blog here gives a good overview of the UIUC Summit content.

One point made by both was, as the UIUC whitepaper puts it: “Hardware must be used for programmability.” For example, a UCB talk made a plea for better performance counters, and a UIUC talk proposed extensive micro-architectural changes to give programmers simpler, more easily understood parallel memory semantics. Unfortunately, even the simplest hardware modifications to increase programmability face big challenges in product cost and legacy compatibility. Useful as they are, performance counters are a tough sell to hard-nosed product managers. Many are intimately connected to the ‘guts’ of the processor, so they are very intrusive to the design and present a big challenge to validation. That means a significant investment in design and validation for something that lacks the direct end-user benefit of performance and other new enhancements. Of course, their use in silicon debug helps; getting the product out the door is of unquestionable value. But for that purpose not every counter has to work flawlessly, and model-specific instrumentation is fine. The basic problem is that the customer for programmability features is not the end user but the programmer, and as one product planner facetiously commented to me: “Programmers aren’t a big market segment.” Extensive enabling of ISVs can be expensive, but it is still more cost-effective than burdening all of the hundreds of millions of processors shipped with the cost of features that enhance programmability. Even so, the tuning and debug support such features make possible is well recognized.
We’ve continually added to and improved the performance counters since they first appeared in the original Pentium® processor. Architectural performance monitoring, with its commitment to consistency across micro-architectures, appeared with the Intel® Core™ Solo and Intel Core Duo processors. Programmability has never been more of a concern than with today’s transition to multi-core and the need to make parallel programming mainstream. Programs that transparently scale to increasing numbers of cores are critical if multi-core is to give ISVs the same performance progression we enjoyed from scaling clock frequency. Lowering the barrier to writing concurrent programs can help make more existing, as well as emerging, high-performance applications available sooner. So there is clear motivation to continue improving counters and debug features that address parallelism. But the best way to accelerate the addition of programmability features is dual-use hardware that helps at both development time and run time. Some examples might be: instrumentation (performance counters) that software also needs to provide outstanding quality of service (QoS) for multimedia, replay mechanisms used both for debugging and for resiliency, or ISA extensions that let simple programming models run faster. Which ones will really deliver value? That’s the $10M question.
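To make the idea of transparent scaling concrete, here is a minimal sketch in Python of a program that queries the available core count at run time and partitions its work accordingly, so the same code speeds up as cores are added without modification. The function names and the sum-of-squares workload are illustrative assumptions of mine, not anything from the centers' research.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    # Compute the sum of squares over a half-open range [lo, hi).
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=None):
    # Query the core count at run time, so the same program uses
    # 2 cores today and 16 cores tomorrow without any change.
    workers = workers or os.cpu_count() or 1
    step = (n + workers - 1) // workers
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Each worker process handles one chunk; results are combined here.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(1_000_000))
```

The key design point is that the decomposition is driven by `os.cpu_count()` rather than a hard-coded thread count, which is the kind of forward scalability the paragraph above argues ISVs need.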