I can’t help but feel the excitement and optimism that accompany the launch of a bold new venture involving nearly 90 talented researchers focused on parallel computing. We’ve got great partners in Microsoft, Berkeley, and Illinois, an exciting technical focus, and a commitment to create fundamental breakthroughs in parallel computing: applications, software, and hardware. The awareness that we’re attacking a critical problem for the entire computing industry only fuels the adrenaline!

Why are we doing this? It’s because we’re back at the beginning of Moore’s Law, once again. Well, not precisely, but at least metaphorically. When Gordon Moore penned his now-famous tome [Moore65], the advanced state of integrated circuits had reached the incredible level of 50 transistors per chip, and he predicted that by 1975 integrated circuits could have 65,000 devices, a target we know the industry far exceeded. Today, the industry delivers billions of transistors per chip. Yet, in that moment, farsighted technical leaders understood that complex semiconductor chips (whole processors and even systems) would be possible to manufacture at low cost and with high reliability. To achieve that goal, the industry has delivered a whole series of breakthrough technologies, taking us from hand-drawn devices to high-level synthesis, from simple testing to complex validation and formal verification, and from simple to extraordinarily complex processes (layer counts, lithography, exotic materials).

Now, focus on that long-term opportunity hasn’t distracted the industry from near-term needs, and we have a broad array of internal investments and programs that support programming of multi-core systems in the near term. The UPCRC program is focused on the long-term opportunity.

In a real sense, software applications have been the direct beneficiaries of Moore’s Law, as the advent of giga-ops and gigabytes enabled software to deliver dramatically increased functionality and capability. And because those capability increases were delivered within largely the same programming model, many applications needed only modest performance tuning from one processor generation to the next.

If you take a ten-year view, we expect to have computing systems with hundreds, even thousands, of cores on a single chip. We know it’s possible from a hardware point of view; the unanswered question is how easy it will be to harness large core-count systems, and as a result where the “mainstream” of computing will land in terms of parallelism. The compelling fundamental energetics of parallelism have been well known in VLSI for two decades: increased parallelism at a given level of performance allows a direct increase in energy efficiency (a rough back-of-the-envelope sketch follows below). Consequently, we would all like parallel systems to be easy to program and thereby capture “mainstream” applications. However, software must now face the challenge of scaling with Moore’s Law if large numbers of cores are to be used effectively for a broad range of applications.

One way to think about the UPCRC centers is that we’d like to find ways to tap the bounty of parallelism to enable new (and old) applications. To achieve this end, we have chartered the centers to span the stack: applications, languages/tools/runtimes, operating systems, and hardware architecture. We expect to see reinvention of layers, “out of the stack” thinking, and new abstractions, and of course we are hoping for fundamental breakthroughs and compelling new application spaces, all of which create and enable the use of large-scale parallelism.
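To make the energetics point concrete, here is a minimal back-of-the-envelope sketch (not part of the original announcement). It assumes the classic first-order CMOS model in which dynamic power scales as P ≈ C·V²·f, that supply voltage can be scaled down roughly in proportion to clock frequency, and that the workload parallelizes perfectly across cores; the numbers are purely illustrative.

```python
# Back-of-the-envelope: why parallelism buys energy efficiency at fixed performance.
# First-order CMOS dynamic power model: P ~ C * V^2 * f.
# Assumptions (illustrative only): V scales roughly with f, perfect parallelization.

def relative_power(cores, baseline_freq=1.0):
    """Total dynamic power, relative to one core at baseline_freq,
    when the same throughput is spread evenly over `cores` cores."""
    freq_per_core = baseline_freq / cores            # each core runs slower
    voltage = freq_per_core                          # V ~ f (first-order assumption)
    per_core_power = voltage ** 2 * freq_per_core    # capacitance C dropped as a constant factor
    return cores * per_core_power

for n in (1, 2, 4, 8):
    print(f"{n} core(s): {relative_power(n):.3f}x the single-core power "
          f"for the same throughput")
# Prints 1.000x, 0.250x, 0.062x, 0.016x: the same work done in parallel,
# at lower frequency and voltage, costs far less power.
```

Of course, real chips cannot scale voltage down indefinitely and real applications rarely parallelize perfectly, which is exactly why the software side of this challenge matters so much.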
The UPCRC centers are close technical partnerships, and we expect regular and intense collaboration between industry (Intel and Microsoft) and university researchers (Illinois and Berkeley). Sharing problems, perspectives, ideas, solutions, and results is essential, and such intense collaboration has always brought out the best invention.

We (Intel and Microsoft) have taken the initiative to create and fund the UPCRC program, but the challenge faces the entire industry and the research community. We are pleased that one exciting outcome of running the UPCRC competition has been broad and growing interest and activity in parallel computing research at many universities in the US and around the world. The fruit of that interest can only be more rapid progress in the field.

We are looking forward to working with the professors and students at both Illinois and Berkeley on cracking these tough challenges and inventing the future!

-Andrew

Andrew is the VP of Research for Intel and leads the long-range, exploratory research arm of the company, called Intel Research.