Andrew Chien on UPCRC is a Major Commitment to Long-range Parallel Computing Research

I can’t help but feel the excitement and optimism that accompany the launching of a bold new venture, one that will involve nearly 90 talented researchers focused on parallel computing. We’ve got great partners in Microsoft, Berkeley, and Illinois, an exciting technical focus, and a commitment to create fundamental breakthroughs in parallel computing: applications, software, and hardware. The awareness that we’re attacking a critical problem for the entire computing industry only fuels the adrenaline! Why are we doing this?

It’s because we’re back at the beginning of Moore’s Law, once again. Well, not precisely, but at least metaphorically. When Gordon Moore penned his now-famous paper [Moore65], the state of the art in integrated circuits had reached the then-incredible level of about 50 transistors per chip, and he predicted that by 1975 integrated circuits could hold 65,000 devices, a target the industry has since far exceeded. Today, the industry delivers billions of transistors per chip. Yet, in that moment, farsighted technical leaders understood that complex semiconductor chips (whole processors and even systems) could be manufactured at low cost and with high reliability. To achieve that goal, the industry has delivered a whole series of breakthrough technologies: from hand-drawn devices to high-level synthesis, from simple testing to complex validation and formal verification, and from simple processes to extraordinarily complex ones (layer counts, lithography, exotic materials). Focus on that long-term opportunity hasn’t distracted the industry from near-term needs: we have a broad array of internal investments and programs that support programming of multi-core systems today. The UPCRC program is focused on the long-term opportunity.

In a real sense, software applications have been the direct beneficiaries of Moore’s Law, as the advent of giga-ops and gigabytes has enabled software to deliver dramatically increased functionality and capability. And because those capability increases were delivered in largely the same programming model, many applications needed only modest effort for performance tuning from one processor generation to the next. If you take a ten-year view, we expect to have computing systems with hundreds, even thousands, of cores on a single chip. We know it’s possible from a hardware point of view; the unanswered question is how easy it will be to harness large core-count systems, and as a result, where the “mainstream” of computing will sit in terms of parallelism. The compelling fundamental energetics of parallelism have been well known in VLSI for two decades: at a given level of performance, increased parallelism allows a direct increase in energy efficiency. Consequently, we would all like parallel systems to be easy to program and thereby capture “mainstream” applications. However, software must now face the challenge of scaling with Moore’s Law if large numbers of cores are to be used effectively across a broad range of applications.
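To make those energetics concrete, consider the standard first-order model of CMOS switching power (a back-of-envelope sketch, not a measured result): P_dyn ≈ α · C · V² · f, where α is switching activity, C is switched capacitance, V is supply voltage, and f is clock frequency. Because supply voltage can typically be lowered along with frequency, halving a core’s clock cuts its power by substantially more than half; two such cores can then deliver roughly the original throughput at a fraction of the original power. That is the basic efficiency argument for parallelism.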

One way to think about the UPCRC centers is that we’d like to find ways to tap the bounty of parallelism to enable new (and old) applications. To that end, we have chartered the centers to span the stack: applications, languages/tools/runtimes, operating systems, and hardware architecture. We expect to see reinvention of layers, “out of the stack” thinking, and new abstractions, and of course we are hoping for fundamental breakthroughs and compelling new application spaces. All of these, in turn, create and enable the use of large-scale parallelism.

The UPCRC centers are close technical partnerships, and we expect regular and intense collaboration between industry (Intel and Microsoft) and university researchers (Illinois and Berkeley). Sharing of problems, perspectives, ideas, solutions, and results is essential, and such intense collaboration has always brought out the best inventions. We (Intel and Microsoft) have taken the initiative to create and fund the UPCRC program, but the challenge faces the entire industry and the research community. We are pleased that one exciting outcome of running the UPCRC competition has been broad and growing interest and activity in parallel computing research at many universities in the US and around the world. The fruit of such interest can only be more rapid progress in the field.

We are looking forward to working with the professors and students at both Illinois and Berkeley on cracking these tough challenges and inventing the future!

-Andrew

Andrew is the VP of Research for Intel, and leads the long-range, exploratory research arm of the company, called Intel Research.

4 Responses to Andrew Chien on UPCRC is a Major Commitment to Long-range Parallel Computing Research

  1. Charlie Bess says:

    I see lots of involvement from hardware folks, but it seems to me the shifts on the application side will be even more extreme: not just in how the software is developed, but also in new avenues for the use of software based on opportunity, not just need. Is the team looking at this as well?

  2. For software development, wouldn’t using functional programming languages be a perfect fit with parallel and multi-core computing architectures?
    Richard Gabriel discusses “The Design of Parallel Programming Languages” in his 1991 paper at http://www.dreamsongs.com/10ideas.html
    Clifford Walinsky and Deb Banerjee also discuss “A functional programming language compiler for massively parallel computers”, cited at http://portal.acm.org/citation.cfm?id=91556.91610
    In his 1977 ACM Turing Award lecture, “Can Programming Be Liberated from the von Neumann Style?”, John Backus summarizes: “Only when these models and their applicative languages have proved their superiority over conventional languages will we have the economic basis to develop the new kind of computer that can best implement them. Only then, perhaps, will we be able to fully utilize large-scale integrated circuits in a computer design not limited by the von Neumann bottleneck.” The paper can be found at http://www.stanford.edu/class/cs242/readings/backus.pdf

  3. Uday says:

    Yup, I think it’s a valid argument. Imperative approaches wouldn’t give an easy or intuitive way to program supercomputers, except for those who work on them 24×7.
    The most advanced implementation of a functional language would be Haskell (each new FP language is more or less a superset of the previous one, largely because they all build on the same underlying lambda calculus).
    I came into the world of Haskell as a seasoned C++ programmer, and the turn is a drastic, radical one. It’s interesting, though: the approaches are orthogonal.
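    As a rough illustration of that fit (a minimal sketch, assuming GHC with the parallel package’s Control.Parallel.Strategies; compile with -threaded and run with +RTS -N):

        import Control.Parallel.Strategies (parMap, rdeepseq)

        -- A deliberately expensive pure function, used only to give the cores work.
        fib :: Integer -> Integer
        fib n | n < 2     = n
              | otherwise = fib (n - 1) + fib (n - 2)

        -- Because fib is pure, parMap can evaluate the list elements on separate
        -- cores with no locks and no explicit synchronization in the source.
        main :: IO ()
        main = print (sum (parMap rdeepseq fib [25 .. 32]))

    The point is not the toy example but that the parallelism is expressed with a single combinator while the meaning of the program is unchanged.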

  4. Ken Moore says:

    SISAL (Streams and Iteration in a Single Assignment Language) was implemented during the early 1990s as a compiler outputting C. I used it for 14 months prior to my retirement in 1994, on a two-processor mini-Cray, on test problems with potential parallelism. The change of mindset that it required was much less of a challenge than explicit control of synchronization by message passing would have been.
    SISAL is not strictly functional throughout, because of the iteration it provides, with loop variables having special status. In practice, I found it easy to keep this potential “contamination” under control.