Welcome to my first blog post! I’m delighted to be part of the Research@Intel blog. As an Intel researcher, my job involves developing new programming systems for future Intel architectures. I work on a range of technologies spanning programming languages, optimizing compilers, high-performance runtime systems, and hardware architecture. I hope to share with you my thoughts and my research on these technologies.

Recently, multi-core architectures have made my job a lot more challenging and exciting. That’s because multi-core processors will totally change the way software developers build applications. We’ve hit a power wall. Even though Moore’s Law continues to hold, and the number of transistors on a die continues to double roughly every 18 months, power limitations prevent processor architects from using those extra transistors to increase the performance of a single thread (e.g., through wider out-of-order execution, deeper pipelines, or higher clock frequencies). To increase a processor’s raw computing throughput while managing its power consumption, architects have instead turned to using those extra transistors for additional cores on each die. We’ve entered a multi-core world.

Multi-core architectures are an inflection point in mainstream software development: they change the way software developers will think and act. Programmers can no longer expect to create single-threaded applications that execute faster with each successive processor generation. To harvest the performance potential of multi-core processors, a programmer has to develop parallel programs: programs that express units of work that can execute in parallel. Instead of writing single-threaded code that automatically runs faster on the next latest-and-greatest processor, you need to write parallel code that can take advantage of the growing number of cores each successive processor generation provides.
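To make "units of work that can execute in parallel" concrete, here is a minimal sketch in Python (my choice of language and APIs here, not anything specific to the technologies discussed in this post): a computation is split into independent chunks that a pool of workers can process concurrently, with the partial results combined at the end.

```python
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    """One independent unit of work: a sum of squares over a chunk."""
    return sum(x * x for x in chunk)

data = list(range(1_000_000))
# Split the data into four independent chunks -- the decomposition step.
chunks = [data[i:i + 250_000] for i in range(0, len(data), 250_000)]

# Each chunk is a separate task; the executor schedules the tasks
# across a pool of worker threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(work, chunks))

# Combine the partial results -- the reduction step.
total = sum(partials)
```

This is only an illustration of decomposing work into parallel tasks; note that in CPython, CPU-bound threads are serialized by the global interpreter lock, so achieving real speedup on this kind of numeric loop would require process-based workers (e.g., `ProcessPoolExecutor`) or a runtime without that limitation.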
The advent of multi-core processors brings parallel processing into the volume computing market, and parallel programming with it into the mainstream. This is a huge shift from the way mainstream programmers develop software today. Most programmers simply don’t think of their programs in terms of parallel tasks. Parallel programming brings a new level of complexity to the challenges of programming. It’s hard to extract parallelism, to coordinate parallel tasks, and to get parallel speedup. Mainstream languages today weren’t really designed with parallelism in mind; parallelism was bolted on mostly as an afterthought via threading APIs, which are like the "gotos" of parallel programming. Parallel programming is also notoriously tricky and has its own class of bugs, such as livelocks, deadlocks, data races, and lost wakeups. These bugs are particularly insidious because they depend on timing and often manifest only in the field. Besides making your program unreliable, these bugs run the risk of becoming security holes. On top of all this, the requirements of modular software engineering practices compound the challenges of parallel programming. Today’s languages don’t help the programmer compose parallel applications out of existing thread-safe modules in a manner that gives parallel speedup but avoids new bugs. Because of this shift to parallel programming, we need to rethink the whole stack with scalability and parallelism in mind: programming languages should have high-level parallelism constructs that support safe and scalable composition of software modules; tools such as development environments, debuggers, and performance tuners must better support parallel programming; compilers must be redesigned to support concurrency; runtime systems must be scalable and provide the primitives required by parallel languages; and operating systems must be made scalable and provide interfaces to manage parallel hardware resources.
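The data races mentioned above are worth a concrete sketch, since they are the archetypal timing-dependent bug. In this minimal Python example (my own illustration, using raw threading APIs of the kind the post critiques), several threads increment a shared counter. The read-modify-write in `counter += 1` is not atomic: without the lock, two threads can read the same value and one update is silently lost, and whether that happens depends entirely on scheduling.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without the lock, this read-modify-write is a data race:
        # two threads can read the same old value, and an update is lost.
        # The lock serializes the critical section and makes it correct.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# counter == 400_000 only because the lock protects every update.
```

Deleting the `with lock:` line turns this into a program that usually works in testing and occasionally loses counts in the field, which is exactly why these bugs are so insidious and why higher-level language constructs are attractive.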
I’m excited by this inflection point because it’s an opportunity for innovative research. At our labs in Intel, we’ve been rethinking the whole stack with multi-core in mind, and we’ve been working on making parallel programming reliable and scalable for mainstream software developers. In my next post, I’ll talk about some of the promising technologies we are working on.