What will people do with their computers in five, ten, or twenty years? How will computers need to change to support these future usage models? And finally, how the heck are we going to program these things?

These are the questions that drive us in the Applications Research Lab (and others involved in Tera-scale Computing Research at Intel). We work with the smartest people we can find, both inside and outside Intel, to understand the future. Then we work with the smartest architects we can find to design computers that can satisfy these future workloads. Basically, we get paid to imagine the future and then do what it takes to make it happen.

Over time, we’ve assembled a great collection of these future-looking workloads. Since we want to help make Intel’s products the best they can be, we have gathered them into an internal benchmark suite that we share with product groups across Intel. But that isn’t enough. The transition to many-core processors is not just an “Intel thing”; the entire industry needs to change. From platforms to programming environments to user interfaces, every facet of computing will be affected by the many-core future. So we’ve made it a top priority to release a subset of our workloads outside Intel, to help computer scientists and engineers everywhere (even our competitors) make the transition to a many-core world.

To be most effective, however, this can’t be an Intel agenda. It needs to be broader, including academic researchers and, over time, workloads from across the industry. To this end, we’ve helped a team at Princeton University create a new benchmark suite: the Princeton Application Repository for Shared-Memory Computers (PARSEC). You can download the applications and learn more about them from the following technical report:

• The PARSEC Benchmark Suite: Characterization and Architectural Implications

I won’t list the applications here, but about half of them come from Intel or from groups working with Intel. The rest come from groups working with the Princeton team, so the suite has a broad scope. And these are not HPC benchmarks. These workloads are representative of where we believe computing will move in the future, and they include body tracking, ray tracing, gaming physics, and more. I’m pretty excited about the release of this benchmark suite and can hardly wait to see what people do with it.