Big data sets require big compute power. Machine translation, speech recognition, and video processing are a few examples. Because these data sets are so large, they usually call for parallel processing and parallel storage systems, often built on clustered resources accessed over the web. This is the basic model referred to as cloud computing.

Researchers at Intel's lab in Pittsburgh have two initiatives that support this cloud computing paradigm: OpenCirrus and Tashi. OpenCirrus is a joint initiative started by Intel, HP, and Yahoo, along with academic partners. In a back room of the lab, server racks with nearly 1,000 cores were running the Tashi operating environment, software that manages the 400 terabytes of resources in the cluster. Individuals and groups can submit proposals to use these large clusters to run experiments and develop applications.
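The split-process-merge pattern described above is the core of data-parallel cluster computing. A minimal local sketch of the idea, using Python's standard `multiprocessing` module for a parallel word count (all names here are illustrative; real cluster frameworks like those running on OpenCirrus distribute the work across many machines rather than local processes):

```python
# Data-parallel processing sketch: split input into chunks, process
# chunks in parallel (map), then merge the partial results (reduce).
from collections import Counter
from multiprocessing import Pool


def count_words(chunk):
    """Map step: count word occurrences within one chunk of text."""
    return Counter(chunk.split())


def parallel_word_count(chunks, workers=4):
    """Fan the map step out across worker processes, then merge."""
    with Pool(workers) as pool:
        partials = pool.map(count_words, chunks)
    total = Counter()
    for partial in partials:  # Reduce step: merge partial counts
        total += partial
    return total


if __name__ == "__main__":
    text_chunks = [
        "big data needs big compute",
        "big clusters process big data",
    ]
    print(parallel_word_count(text_chunks))
```

At cluster scale the same pattern applies, but the chunks live in a distributed store and the workers are whole machines, which is exactly the kind of environment a system like Tashi is built to manage.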