Intel and DARPA Collaborate on $49M Research Effort in Extreme Computing

DARPA, the legendary research arm of the US Department of Defense, recently announced that it had funded Intel, along with three other organizations, to develop ubiquitous high performance computing (UHPC) prototypes for completion by 2018. Intel’s UHPC effort is unique in that DARPA and Intel are co-equal investors in the $49M research effort. The project will focus on new circuit topologies, new chip and system architectures, and new programming techniques to reduce the amount of energy required per computation by two to three orders of magnitude. In other words, these systems would use 100x to 1000x less energy per computation than our most efficient computing systems consume today. Such a dramatic reduction in power consumption will allow these extreme-scale systems to take full advantage of the increasing transistor budgets afforded by the steady advance of Moore’s Law. First postulated by Gordon Moore, Intel’s co-founder, the law observes that the number of transistors per chip roughly doubles every two years. If we fail to reduce the amount of energy per computation, we won’t be able to use all the transistors we can build, or won’t be able to operate all of them at anywhere close to their maximum speeds.

While the improvements in both energy efficiency and programmer productivity are intended to meet the DoD’s rapidly growing performance demands from its vast sensor networks, simulation environments, and complex information systems, extreme-scale computing has broad application across the entire computing continuum. Let me give you two examples. First, consider the exascale supercomputers the HPC community now expects late this decade. If we simply scaled one of today’s petaFLOPS supercomputers, which is capable of 10^15 floating point operations per second, to exascale (10^18 FLOPS) levels, we’d need a battery of nuclear power stations to supply its six gigawatts (6 GW) of electrical power. With a useful limit of about 20 megawatts (20 MW) of power in an HPC datacenter, we need roughly a 300x improvement in total system energy efficiency to build a practical and deployable exascale supercomputer. Second, consider an end-of-the-decade smartphone with extreme sensing capabilities requiring 100 gigaFLOPS of computing power. Without the anticipated breakthroughs in extreme-scale technology, such a phone would need a very, very big battery delivering 600 watts of instantaneous power. Think motorcycle battery and you’ve got the idea. Even if we could somehow deliver that much power, it would be very unpleasant to hold the phone, not to mention the battery, in your hand. If we succeed in developing the extreme-scale technology anticipated by the UHPC program, 100 GFLOPS would consume a mere two watts of power or even less. We’re talking thin batteries and low case temperatures for one wicked-smart phone.
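As a sanity check on the arithmetic in those two examples, here is a minimal back-of-envelope sketch in Python. All figures come straight from the text above; the function name is just for illustration. Sustained power divided by throughput gives energy per operation, and both the exascale and smartphone examples land on the same roughly 300x gap.

```python
def joules_per_op(watts, flops):
    """Energy per floating point operation: sustained power / throughput."""
    return watts / flops

# Exascale example: naively scaled machine needs 6 GW at 10^18 FLOPS,
# while a practical datacenter budget is about 20 MW for the same throughput.
naive_exa = joules_per_op(6e9, 1e18)     # joules per FLOP, naive scaling
budget_exa = joules_per_op(20e6, 1e18)   # joules per FLOP, 20 MW budget

# Smartphone example: 100 gigaFLOPS at 600 W today vs. ~2 W with UHPC tech.
naive_phone = joules_per_op(600, 100e9)
target_phone = joules_per_op(2, 100e9)

print(f"exascale:  {naive_exa*1e12:.0f} pJ/FLOP -> {budget_exa*1e12:.0f} pJ/FLOP "
      f"({naive_exa/budget_exa:.0f}x improvement needed)")
print(f"smartphone: {naive_phone*1e12:.0f} pJ/FLOP -> {target_phone*1e12:.0f} pJ/FLOP "
      f"({naive_phone/target_phone:.0f}x improvement needed)")
```

Both cases work out to going from about 6 nanojoules per operation down to roughly 20 picojoules, which is why the same ~300x figure shows up at both ends of the computing continuum.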

Intel’s UHPC Principal Investigator, which is DARPA-speak for the lead researcher, is Shekhar Borkar, an Intel Fellow and currently the head of the Academic Programs and Research unit at Intel Labs. Shekhar and his team are taking a fresh look at everything involved in designing extreme-scale systems to be sure we can achieve DARPA’s goals of being both energy efficient in the extreme and highly programmable. The challenges are great, but so is the talent Shekhar has assembled to attack them. Besides our own world-class circuits and systems researchers, Intel’s partners include top computer science and engineering faculty at the University of Delaware, the University of Illinois at Urbana-Champaign, and the University of California, San Diego, as well as top industrial researchers at Reservoir Labs and ET International. Our UHPC program will also engage several leading manufacturers of compute-intensive systems, including SGI, Lockheed Martin, and Cray, to review our work and help guide us to commercially practical, extreme-scale solutions. Other collaborators include Micron Technology on advanced memory chip design and Sandia National Laboratories on future applications development.

Extreme-scale computing represents both an enormous challenge and an enormous opportunity to rethink the way we’ve been building computing systems since the advent of the microprocessor. We’ve had a relatively easy ride getting to where we are based on our ability to scale transistor size, but the road ahead is going to be much more difficult given the power constraints imposed by virtually every application from the smallest embedded devices to the largest supercomputers. The word frugal doesn’t begin to capture how efficient we’ll have to be in order to meet DARPA’s objectives for UHPC.

3 Responses to Intel and DARPA Collaborate on $49M Research Effort in Extreme Computing

  1. Kenneth says:

    This is absolutely brilliant and inspiring at the same time! Down with Moore’s Law? I can only wish to be a part of the team that tackles this challenge. Good luck? Nah… Good transistors!

  2. Rajiv says:

    This is Intel at its best, taking on enormous challenges with the promise of bringing about paradigm shifts in computing. Hats off to Shekhar Borkar and the entire Intel team for leading this program.

  3. J. A. says:

    In order to meet DARPA’s objectives for UHPC, consider first the conceptual evolution from binary to higher n-ary order computing; binary (256 possibilities per byte), ternary (6,561 possibilities per byte), quaternary (65,536 possibilities per byte) and so forth. Possibilities mean a direct and exponential increase in the computing velocity and capacity.