Introducing two “Universal Parallel Computing Research Centers”

Today, it’s a pleasure for me to report that Intel and Microsoft are joining forces to accelerate the mainstream adoption of highly parallel computing technology. Together, the two companies are pioneering the concept of industry-funded “Universal Parallel Computing Research Centers” (UPCRCs) at both the University of California at Berkeley and the University of Illinois at Urbana-Champaign. The two schools were selected in an open competition judged by experts at both companies.

[Photo: the UPCRC directors]

It should be no surprise that Intel and Microsoft share the common goal of energizing the academic community around what the president of Stanford University, Professor John Hennessy, has called the greatest challenge to computer science in 25 years. These two centers are expected to create long-term, high-impact breakthroughs in parallel programming languages, tools, and supporting architectural features that will enable entirely new classes of consumer and enterprise applications. Each center will receive $20 million over five years from Intel and Microsoft. An additional $8 million will come from UIUC, and UC Berkeley has applied for $7 million in funds from a state-supported program that matches industry grants. That is serious money by anyone’s measure.

Speaking for Intel, I am tremendously excited by this new approach to funding academic research. We can no longer rely on the government to support the long-term research we need from the universities. Not only do we need them to generate new ideas; we also need to eventually hire the students who know the technology and can bring it to life in our products. The transition to mainstream parallel computing will be a historic one for information technology. With the help of these two centers, it will open new opportunities in entertainment, social interaction, and collaboration.

One example that has captured much of my interest over the last six months is the shift from a 2D to a 3D Internet. We believe that today’s nascent virtual worlds, from Club Penguin to Second Life, will soon evolve into an essential new medium for human interaction and collaboration. The computational requirements for making the 3D Internet truly immersive and personal, however, are beyond anything we can do today. Innovations such as those from these UPCRCs will augment our own efforts toward realizing these future information environments and provide a ready market for our high performance products.

Parallel computing has been in Intel’s blood for more than two decades. In 1985 we shipped the first microprocessor-based parallel supercomputer to Yale University, with 128 Intel 80286/80287 processors. If memory serves, the peak floating-point performance of Yale’s machine was about five million floating-point operations per second, or five megaFLOPS – the equivalent of a typical desktop computer circa 1995. On December 4, 1996, the dream of a parallel machine capable of a trillion floating-point operations per second (one teraFLOPS) was realized by the ASCI Red system, built by Intel for the DOE’s Sandia National Laboratories.

In 2004 we decided that it was time to explore teraFLOPS capability at the single-chip level by integrating many IA-compatible cores on one die. Within our Corporate Technology Group, we committed a substantial percentage of our resources to launch our Tera-scale Computing Research Program, which we announced publicly in 2006. Tera-scale is a holistic hardware/software program to enable mainstream many-core microprocessors and systems. With an 80-core Teraflops Research Processor up and running in the lab, and our first highly parallel product architecture (Larrabee) on track for first silicon later this year, we are well on our way to delivering tera-scale hardware. However, we must do more to make sure average programmers can make full use of Larrabee’s amazing capabilities. That’s why the UPCRC funding is essential: it will help ordinary programmers write efficient parallel programs for Larrabee and for our mainstream multi-core processors.

Our experience as a long-time developer and supporter of current parallel programming standards, such as OpenMP, and as a leading provider of parallel software development tools, such as our Threading Building Blocks, helps us understand how much work remains to be done. Despite years of work in the high performance computing community, developing parallel software still requires PhD-level programming know-how. While we are making good progress in the lab with software technologies such as transactional memory and our data-parallel Ct API, we realized we needed to harness innovation across industry and academia to bring parallel computing to the masses. That’s why the investment in the centers made so much sense.
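To make concrete what those standards and tools look like to a programmer today, here is a minimal, illustrative sketch (mine, not code from this announcement) of the same dot-product reduction written serially, with an OpenMP pragma, and with a current Threading Building Blocks parallel_reduce. It assumes a C++11 compiler with OpenMP support and the TBB library installed (for example, g++ -std=c++11 -fopenmp dot.cpp -ltbb); the file name and function names are hypothetical.

```cpp
// Illustrative sketch: one dot product, three ways (serial, OpenMP, TBB).
#include <cstdio>
#include <functional>
#include <vector>
#include <tbb/blocked_range.h>
#include <tbb/parallel_reduce.h>

// Serial baseline: a single thread walks the whole range.
double dot_serial(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) sum += a[i] * b[i];
    return sum;
}

// OpenMP version: the pragma asks the runtime to split the loop iterations
// across cores and to combine the per-thread partial sums.
double dot_openmp(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < static_cast<long>(a.size()); ++i) sum += a[i] * b[i];
    return sum;
}

// TBB version: parallel_reduce recursively splits the index range into
// chunks, reduces each chunk on a worker thread, and joins the partials.
double dot_tbb(const std::vector<double>& a, const std::vector<double>& b) {
    return tbb::parallel_reduce(
        tbb::blocked_range<std::size_t>(0, a.size()), 0.0,
        [&](const tbb::blocked_range<std::size_t>& r, double local) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                local += a[i] * b[i];
            return local;
        },
        std::plus<double>());
}

int main() {
    std::vector<double> a(1 << 20, 1.5), b(1 << 20, 2.0);
    std::printf("serial=%f openmp=%f tbb=%f\n",
                dot_serial(a, b), dot_openmp(a, b), dot_tbb(a, b));
    return 0;
}
```

The point of the comparison is that the OpenMP and TBB versions express what to compute and leave the runtime to decide how to split the work across cores, which is exactly the kind of burden on ordinary programmers that the UPCRC research aims to lower further.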

Berkeley has a 20-year tradition of genuinely integrated systems projects in which many faculty members tackle a common goal. Each faculty member on this project is a recognized expert in his or her discipline. The UPCRC research at Berkeley will be led by David Patterson. David is known for his ability to identify critical questions for the computer science community and to gather interdisciplinary groups of faculty and graduate students to answer them. He currently heads Berkeley’s Par Lab, which is focused on parallel computing.

[Ed. Note: See Cheryl's blog for a video of David and his team]

Likewise, the University of Illinois has been a leading institution in parallel computing research for more than four decades and has helped define the landscape of parallel processing and multiprocessor design. The UPCRC effort at Illinois will be led by Profs. Marc Snir and Wen-Mei Hwu. Marc is the director of the Illinois Informatics Institute; before joining UIUC, he initiated and led the IBM Blue Gene project. Wen-Mei’s team created the first HP-PD compiler, which was used by Intel in the early Itanium design process.

Intel and Microsoft will work together with David, Marc, and Wen-Mei to direct two five-year research efforts under the UPCRC banner. Intel views these close academic collaborations as critical to enabling the shift to desktops and laptops based on many-core chips. We have already launched a variety of multi-core software products and, just last week, the Intel Academic Community. We have created a sizable research program in tera-scale computing and funded numerous individual parallel computing research projects. These UPCRCs dramatically increase our investment in academic research.

Making parallel computing pervasive will one day be seen as one of the greatest accomplishments of the 21st century. But enough speculation — let’s get to work.

4 Responses to Introducing two “Universal Parallel Computing Research Centers”

  1. HPC communities have struggled with threads and message-passing programming for years. With the advent of hardware architectures beyond multi-core processors, such as the Cell, GPUs, and FPGAs, the software side is becoming more chaotic than ever. With the current state of the art, it is almost certain that one will program and optimize for only one of these architectures.
    It would be truly useful to develop a unified programming model that can be implemented efficiently on all of this hardware. Parallel programming tools and libraries could then fall into place quickly.
    It is interesting to see that David’s and Wen-Mei’s groups are playing complementary roles. While David’s group will take more of a top-down approach, deriving the right models from a wide range of applications, Wen-Mei’s group will take more of a bottom-up approach, developing parallel program analysis techniques.

  2. The future is in hardware? We must start from software. The current software model is over. Best practices, or any kind of “extreme” approach, cannot solve the problem; they are a waste of time and money. I have a fundamental invention in software: the Universal Language of the Informational Space, based on a new universal informational entity called the Informational Individual. From now on, forever. This is a new conceptual approach to software, a new kind of thinking in a new context and perspective! It is a matter of time. I am in Romania, not in the US. The new model is very simple. The hardware model should follow this model.

  3. Louis Savain says:

    The threaded (algorithmic) software model has been around for over 150 years (Babbage). Isn’t it time for a change? Single threading is just as bad as multithreading; they are both algorithmic. What is needed is a non-algorithmic, synchronous reactive model that is inherently parallel and deterministic. This is true whether the processor is single-core or multicore. In fact, the programmer should not have to care. Wake up, Intel! You people are asleep at the wheel, in my opinion. You might wake up on the wrong side of the next computer revolution if you’re not careful.
    A word to the wise is enough!