Imagine leaving the office carrying a briefcase full of work. As you enter your car, the updated family calendar appears on the dashboard display. With free time anticipated for the next few hours before you pick up the kids, you stop by the shopping mall along the route home. At the mall entrance, you pause in front of a digital sign that recognizes you and displays products of interest. A virtual assistant engages you in a brief conversation and then directs you to the stores carrying the clothes and shoes you need. It also sends a 20%-discount coupon to your smartphone, another of the many bargains you have received since allowing the mall’s networked-sensor system to gather information and learn about your interests. Driving home, the vehicle taps into a vehicle-to-vehicle (V2V) sensor network, processed through the cloud, that provides up-to-date routing to avoid traffic and weather and road hazards so you can pick up the kids and get home safely. When you arrive home, you are greeted by the family’s robot maid, Zia, who begins preparing a stir-fry dinner based on knowledge of what the family members had for lunch, what’s currently in the refrigerator, and which ingredients and dinners they have enjoyed in the past. As the robot pulls ingredients from the fridge, the grocery shopping list is updated automatically. A side panel shows an up-to-date calendar and real-time information about the location of family members, aligning dinner time with their arrival. You check on the progress of dinner. Because Zia still can’t cut shiitake mushrooms proficiently, you cut them instead and explain the technique. Zia records multi-sensory input of your actions for later analysis and learning. Fiction? Today, maybe. In the near future, not at all. Some of the technologies needed to support this scenario are in a nascent stage, while others still need to be explored. How soon to reality?
No one knows for sure, but one thing we do know: they all require a tremendous amount of computing resources and open the possibility of new markets and applications for our products. But how do we get there from where we are today? Well, that brings me to Intel’s announcement of two new Intel Science and Technology Centers (ISTCs), which will perform research and explore areas that provide a foundation for making these scenarios a reality someday. The two centers will focus on cloud computing and embedded computing, and both will be co-located at CMU. This is part of Intel’s strategy of funding high-impact research centers at universities; in fact, Intel has committed $100M in funding over the next five years to support this endeavor. With two ISTCs already announced (the Visual Computing Center at Stanford and the Secure Computing Center at UC Berkeley), we want to welcome their two sister centers to the fold.

Three unique features designed to increase the probability of successful collaboration are part of the fabric of each ISTC: (a) an open, collaborative research model that encourages all researchers under the ISTC umbrella to release their results into the public domain, creating an open IP model; (b) a multidisciplinary approach, meaning the complete platform is explored (both hardware and software) across multiple engineering disciplines, creating an integrated approach to research; and (c) the “hands-on” involvement of Intel. Each “hub” school will provide an academic principal investigator (PI) to work alongside a counterpart PI from Intel. Additionally, Intel Labs will provide up to three researchers co-located with the center, in addition to the PI, to ensure close (“hands-on”) collaboration with Intel, as well as a natural technology-transfer conduit when the ISTC ends and the embedded resident researchers return to the labs on an Intel campus.
“Seeding” the Clouds

The cloud computing center focuses on enabling new paradigms to make the cloud computing of the future more efficient and effective. For instance, broadcasting, texting, and tweeting do not require the same amount of computing power as video compression and streaming. Yet today we provide the same amount of computing power to handle both, making homogeneous computing centers energy inefficient. We expect the amount of data handled by the cloud centers of the future only to grow larger, and large amounts of data require the exploration of Big Data analytics as we seek to efficiently process and stream varied data content. As these data and processing centers grow, more automation will be required to facilitate IT support. With this in mind, the ISTC for Cloud Computing will have four main thrust areas:

Specialization: Contrary to the common practice of striving for homogeneous cloud deployments, clouds should embrace heterogeneity, purposely including mixes of different platforms specialized for different classes of applications. This pillar explores the use of specialization as a primary means to order-of-magnitude improvements in efficiency (e.g., energy), including new platform designs based on emerging technologies like non-volatile memory and specialized cores.

Automation: Automation is key to driving down the operational costs (human administration, downtime-induced losses, and energy usage) of cloud computing. The scale, diversity, and unpredictability of cloud workloads increase both the need for, and the challenge of, automation. This pillar addresses the cloud’s particular automation challenges, focusing on order-of-magnitude efficiency gains from smart resource allocation and scheduling (including automated selection among specialized platforms) and greatly improved problem-diagnosis capabilities.

Big Data: Cloud activities of the future will be dominated by analytics over large and growing data corpuses. This pillar addresses the critical need for cloud computing to extend beyond traditional big-data usage (primarily search) to efficiently and effectively support Big Data analytics, including the continuous ingest, integration, and exploitation of live data feeds (e.g., video or Twitter).

To the Edge: Future cloud computing will extend beyond centralized (back-end) resources to encompass billions of clients and edge devices. The sensors, actuators, and “context” provided by such devices will be among the most valuable content and resources in the cloud. This pillar explores new frameworks for edge/cloud cooperation that (i) can efficiently and effectively exploit this “physical world” content in the cloud, and (ii) enable cloud-assisted client computations, i.e., applications whose execution spans client devices, edge-local cloud resources, and core cloud resources.

The center brings together top academic minds from CMU and three other top-tier US schools (UC Berkeley, Georgia Tech, and Princeton). The academic PI is Professor Greg Ganger (CMU), while his counterpart from Intel is Principal Research Scientist Phil Gibbons. Along with them, there will be 21 academic researchers and three Intel embedded researchers.

Embedded Computing

The ISTC-EC brings together thought leaders from seven universities (CMU, Georgia Tech, UC Berkeley, University of Illinois at Urbana-Champaign, Penn State, University of Pennsylvania, and Cornell) to drive research and transform experiences in the retail, automotive, and home environments of the future. The popularity of real-time, intelligent, personalized technology is growing, and the demand for specialized embedded computing systems will grow correspondingly to support a broad range of new applications, many of them yet to be envisioned.
The ISTC for Embedded Computing will have four main thrust areas:

Collaborative Perception: Perception in embedded applications has unique challenges, as it must be performed online and in real time in the face of limited power, memory, and computational resources. The Collaborative Perception theme seeks to explore new ways to do this robustly.

Real-time Knowledge Discovery: Machine learning in embedded applications carries with it a host of unique challenges: low-power environments, multiple specialized sensing modalities, complex tradeoffs between pushing computation to the cloud and first processing data locally, and efficiently incorporating vast quantities of local and external data into local computations.

Robotics: Robotic toys and vacuum cleaners are starting to inhabit our living spaces, and robotic vehicles have raced across the desert. These successes appear to foreshadow an explosion of robotic applications in our daily lives. However, without advances in robot manipulation and navigation in human environments, many promising applications will not be possible. We are interested in robots that will someday work alongside humans, at home or in the workplace.

Embedded System Architecture: The Embedded System Architecture theme aims to realize large-scale algorithms, such as real-time learning and collaborative perception, efficiently, given the unique power, memory, and computational resource constraints of embedded systems, the particular context (physical location, proximity), and the domain-specific requirements.

The center will have two PIs, Priya Narasimhan (associate professor at CMU) and Mei Chen (Intel Labs research scientist), driving research and collaboration across the various institutions. Along with them, there will be ten leading researchers from the universities listed above, plus three Intel embedded researchers and two additional embedded researchers from ECG.

The future is bright. Let’s keep moving forward.