The promise of personalized medicine relies heavily on high performance computing (HPC). Speed and power influence the genome sequencing process and ultimately patient treatment plans. With the SC14 Conference coming up next month, we caught…
Today Intel delivered a keynote address to more than 1,000 attendees at the Open Compute Project European Summit in Paris. The keynote, delivered by Intel GM Billy Cox, covered Intel’s strategy to accelerate the digital services economy by delivering disruptive technology innovation founded on industry standards. The foundation of that strategy is an expansion of silicon innovation: augmenting Intel’s traditional Xeon, Xeon Phi, and Atom solutions with new standard SKUs and custom solutions based on specific workload requirements. Intel is also expanding its data center SoC product line with the planned introduction of a Xeon-based SoC in early 2015, which is sampling now. This will be Intel’s third-generation 64-bit SoC solution.
To further highlight this disruptive innovation, Cox described how Intel is working closely with industry leaders Facebook and Microsoft on separate collaborative engineering efforts to deliver innovative and more efficient solutions for the data center. Cox detailed how Intel and Facebook engineers worked together on Facebook’s delivery of the new Honey Badger storage server for their photo storage tier featuring the Intel® Atom™ processor C2000, a 64-bit system-on-chip. The high capacity, high density storage server offers up to 180TB in a 2U form factor and is expected to be deployed in 1H’15. Cox also detailed how Microsoft has completed the 2nd generation Open Cloud Server (OCSv2) specification. Intel and Microsoft have jointly developed a board to go into OCSv2 that features a dual-processor design, built on the Intel Xeon E5-2600 v3 series processor that enables 28 cores of compute power per blade.
Collaboration with Open Compute reflects Intel’s decades-long history of collaborating with industry organizations to accelerate computing innovation. As one of the five founding board members of the Open Compute Project, we are deeply committed to enabling broad industry innovation by openly sharing specifications and best practices for high-efficiency data center infrastructure. Intel is involved in many OCP working group initiatives spanning rack, compute, storage, network, C&I, and management, all of which are strategically aligned with our vision of accelerating rack-scale optimization for cloud computing.
At the summit, Intel and industry partners are demonstrating production hardware based on our Open Compute specifications. We look forward to working with the community to help push datacenter innovation forward.
This past weekend, more than 130 makers and developers got together in Brooklyn, New York, for an Intel® IoT Roadshow. As announced at IDF this fall, Intel is bringing ten IoT Roadshow…
A hackathon is an event where people get together and make things from scratch over a solid block of time. A codefest is a subtype of hackathon, focused exclusively on software development. A…
As I discuss the path to cloud with customers, one topic that is likely to come up is OpenStack. It’s easy to understand the inherent value in OpenStack as an open source orchestration solution, but this value is balanced by ever-present questions about OpenStack’s readiness for the complex environments found in telco and enterprise. Will OpenStack emerge as a leading presence in these environments, and in what timeframe? What have lead adopters experienced with early implementations and POCs? Are there pitfalls to avoid, and how can we use those learnings to drive the next wave of adoption?
This was most recently a theme at the Intel Developer Forum, where I caught up with Intel’s Jonathan Donaldson and Das Kamhout on Intel’s strategy for orchestration and its effort to take key learnings from the world’s most sophisticated data centers and apply them to broad implementations. Intel is certainly not new to the OpenStack arena, having been involved in the community from its earliest days and, more recently, having delivered Service Assurance Administrator, a key tool that gives OpenStack environments better insight into underlying infrastructure attributes. Intel has even helped lead the charge on enterprise implementation by integrating OpenStack into its own internal cloud environment.
These lingering questions on broad enterprise and telco adoption, however, make the upcoming OpenStack Summit a must-attend event for me this month. With an agenda loaded with talks from enterprise and telco experts at companies like BMW, Telefonica, and Workday on their experiences with OpenStack, I’m expecting to get much closer to the art of the possible in OpenStack deployment, as well as to learn how OpenStack providers are progressing with enterprise-friendly offerings. If you’re attending the Summit, please check out Intel’s lineup of sessions and technology demonstrations, and connect with the Intel executives on site to discuss our engagements in the OpenStack community and our work with partners and end customers to drive broad use of OpenStack in enterprise and telco environments. If you don’t have the Summit in your travel plans, never fear: Intel will help bring the conference to you! I’ll be hosting two days of livecast interviews from the floor of the Summit. We’ll also be publishing a daily recap of the event on the DataStack with video highlights, the best comments from the Twitterverse, and much more. Please send input on the topics you want to hear about from OpenStack so that our updates match the topics you care about. #OpenStack
This week, we are taking Meshcentral in a completely new direction with the introduction of user-to-user text messaging. This allows users to establish relationships and send…
Intel® RealSense™ Technology allows us to recognize and understand inputs from our hands, face, and speech, as well as the surrounding environment. Intel RealSense Technology can be fully leveraged…
For an enterprise attempting to maximize energy efficiency, the data center has long been one of the greatest sticking points. A growing emphasis on cloud and mobile means growing data centers, and by nature, they demand a gargantuan level of energy in order to function. And according to a recent survey on global electricity usage, data centers are sucking more energy than ever before.
George Leopold, senior editor at EnterpriseTech, recently dissected Mark P. Mills’ study, “The Cloud Begins With Coal: Big Data, Big Networks, Big Infrastructure, And Big Power.” The study should be taken with a grain of salt, since its funding stemmed from the National Mining Association and the American Coalition for Clean Coal Electricity, but it contains some stark statistics that shouldn’t be dismissed lightly.
“The average data center in the U.S., for example, is now well past 12 years old — geriatric class tech by ICT standards. Unlike other industrial-classes of electric demand, newer data facilities see higher, not lower, power densities. A single refrigerator-sized rack of servers in a data center already requires more power than an entire home, with the average power per rack rising 40% in the past five years to over 5 kW, and the latest state-of-the-art systems hitting 26 kW per rack on track to doubling.”
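As a quick sanity check on the quoted figures, the 40% rise and the 5 kW and 26 kW densities imply the following back-of-the-envelope numbers. This is a sketch using only the study's own values; the derived growth rate is our own calculation, not a figure from the study:

```python
# Back-of-the-envelope check on the rack power-density figures
# quoted above (input values are from the cited study).

# A 40% rise in average power per rack over five years implies
# roughly a 7% compound annual growth rate.
annual_growth = 1.40 ** (1 / 5) - 1
print(f"Implied annual growth: {annual_growth:.1%}")  # ~7.0%

# The gap between the average rack (~5 kW) and state-of-the-art
# racks (26 kW) is more than a factor of five.
average_kw, state_of_the_art_kw = 5.0, 26.0
print(f"State of the art vs. average: {state_of_the_art_kw / average_kw:.1f}x")
```

At that compound rate, density doubles roughly every ten years even before the jump to state-of-the-art systems is considered.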
More Power With Less Energy
As Leopold points out in his article, providers are developing solutions to circumvent growing demand while still cutting carbon footprint. IT leaders can rethink energy usage by concentrating on air distribution and attempting assorted cooling methods. This ranges from containment cooling to hot huts (a method pioneered by Google). And thorium-based nuclear reactors are gaining traction in China, but don’t necessarily solve waste issues.
If the average data center in the U.S. is more than 12 years old, IT leaders need to start looking at the tech powering their data centers and rethink the demand on the horizon. Perhaps the best way to go about this is to think about the foundation of the data center at hand.
Analysis From the Ground Up
Intel IT has three primary areas of concern when choosing a new data center site: environmental conditions, fiber and communications infrastructure, and power infrastructure. These three criteria bear the greatest weight on the eventual success — or failure — of a data center. So when you think about your data center site in the context of the given criteria, ask yourself: Was the initial strategy wise? How does the threat proximity compare to the resource proximity? What does the surrounding infrastructure look like and how does that affect the data center? If you could go the greenfield route and build an entirely new site, what would you retain and what would you change?
Every data center manager in every enterprise has likely considered the almost counterintuitive concept that more power can come with less energy. But doing more with less has been the mantra since the beginning of IT. It’s a challenge inherent to the profession. Here at Intel, we’ll continue to provide invaluable resources to managers looking to get the most out of their data center.
Graphics driver 184.108.40.20658 has been posted to Download Center. Here are the direct links to the drivers:
32 bit: https://downloadcenter.intel.com/Detail_Desc.aspx?%20agr=Y&DwnldID=24346
64 bit: …
Across the globe, power grids are being modernized and made smarter by a host of new technologies such as sensors, metering solutions, and energy management systems, creating a variety of data sets that deliver deeper insights into the infrastructure’s operations and performance. …
Health IT is a hot topic in the Empire State. New York was the first state to host an open health data site and is now in the process of building the Statewide Health Information Network of New York. The SHIN-NY will enable providers to access patient records from anywhere in the state.
To learn more, we caught up with Howard A. Zucker, MD, JD, who was 22 when he got his MD from George Washington University School of Medicine and became one of America’s youngest doctors. Today, Zucker is the Acting Commissioner of Health for New York State, a post he assumed in May 2014. Like his predecessor Nirav R. Shah, MD, MPH, Zucker is a technology enthusiast, who sees EHRs, mobile apps and telehealth as key components to improving our health care system. Here, he shares his thoughts.
What’s your vision for patient care in New York in the next five years?
Zucker: Patient care will be a more seamless experience for many reasons. Technology will allow for further connectivity. Patients will have access to their health information through patient portals. Providers will share information on the SHIN-NY. All of this will make patient care more fluid, so that no matter where you go – a hospital, your doctor’s office or the local pharmacy – providers will be able to know your health history and deliver better quality, more individualized care. And we will do this while safeguarding patient privacy.
I also see a larger proportion of patient care taking place in the home. Doctors will take advantage of technologies like Skype and telemedicine to deliver that care. This will happen as patients take more ownership of their health. Devices like Fitbit amass data about health, helping people take steps to improve it. It’s a technology still in its infancy, but it’s going to play a major role in long-term care.
How will technology shape health care in New York and beyond?
Zucker: Technology in health and medicine is rapidly expanding – it’s already started. Genomics and proteomics will one day lead to customized medicine and treatments tailored to the individual. Mobile technology will provide patient data to change behaviors. Patients and doctors alike will use this type of technology. As a result, patients will truly begin to “own” their health.
Personally, I’d like to see greater use of technology for long-term care. Many people I know are dealing with aging parents and scrambling to figure out what to do. I think technology will enable more people to age in place in ways that have yet to unfold.
What hurdles do you see in New York and how can you get around those?
Zucker: Interoperability remains an ongoing concern. If computers can’t talk to each other, then this seamless experience will be extremely challenging.
We also need doctors to embrace and adopt EHRs. Many of them are still using paper records. But it’s challenging to set up an EHR when you have patients waiting to be seen and so many other clinical care obligations. Somehow, we need to find a way to make the adoption and implementation process less burdensome. Financial incentives alone won’t work.
How will mobility play into providing better patient care in New York?
Zucker: The human body is constantly giving us information, but only recently have we begun to figure out ways to receive that data using mobile technology. Once we’ve mastered this, we’re going to significantly improve patient care.
We already have technology that collects data from phones, and we have sensors that monitor heart rate, activity levels and sleep patterns. More advanced tools will track blood glucose levels, blood oxygen and stress levels.
How will New York use all this patient-generated health data?
Zucker: We have numerous plans for all this data, but the most important will be using it to better prevent, diagnose and treat disease. Someday soon, the data will help us find early biomarkers of disease, so that we can predict illness well in advance of the onset of symptoms. We will be able to use the data to make more informed decisions on patient care.
Stu Goldstein is a Market Development Manager in the Communications and Storage Infrastructure Group at Intel.
When purchasing a new laptop for my sons as they went off to college, a big part of the brand decision revolved around support and my peace of mind. Sure enough, one of my sons blew his motherboard when he plugged into an outlet while spending a summer working in China, and the other trashed his display when playing jump rope with his power cord, pulling his PC off the bed. In both cases I came away feeling good about the support received from the brand that I trusted.
So, knowing a bit more about how I think, it should not be a big surprise that I see today’s EMC Hybrid Cloud announcement as important. Enterprises moving to converged, software-defined storage infrastructures should have choices. EMC is offering the enterprise the opportunity to evolve without abandoning a successfully engineered infrastructure, including the support that will inevitably be needed. The creation of products that maximize existing investments while providing the necessary path to a secure hybrid cloud is proof of EMC’s commitment to choice. Providing agility moving forward without short-circuiting security and governance can be difficult; EMC’s announcement today recognizes the challenge. Offering a VMware edition is not surprising; neither is the good news about supporting a Microsoft edition. However, a commitment to “Fully Engineered OpenStack Solutions” is a big deal. Intel is a big contributor to open source, including OpenStack, so it is great to see this focus from EMC.
EMC has proven over the last several years that it can apply much of the underlying technology that Intel® Xeon® processors, combined with Intel® Ethernet Converged Network Adapters, have to offer. When Intel provided solutions that increased memory bandwidth by 60% and doubled I/O bandwidth generation over generation, EMC immediately asked, “What’s next?” Using these performance features, coupled with Intel virtualization advances, the VMAX³ and VNX solutions prove EMC is capable of moving Any Data, Anytime, Anywhere while keeping VMs isolated to allow for secure shared tenancy. Now EMC is intent on proving it is serious about expanding the meaning of Anywhere. (The XtremIO scale-out products are a great example of Anytime, using Intel architecture advancements to maintain a consistent 99th-percentile latency of less than 1 ms, the steady performance customers need to take full advantage of this all-flash array.) EMC is in a unique position to offer enterprise customers of its products the ability to extend the benefits derived from highly optimized deduplication, compression, flash, memory, I/O, and virtualization technology into the public cloud.
Getting back to support: it is a broad term that comes into laser focus when you need it. It has to come from a trusted source, no matter whether your storage scales up or out, is open, sort of open, or proprietary. It costs something whether you rely on open source distros, OEMs, or smart people hired to build and support homegrown solutions. EMC’s Hybrid Cloud announcement is a recognition that adding IaaS needs backing that covers you inside and out, or, said another way, from the inside into the outside. I look forward to seeing what IT managers do with EMC’s choices and the innovation this initiative brings to the cloud.
Intel® RealSense™ technology makes it possible for our digital worlds to interact with our physical, organic worlds in meaningful ways. Many of the projects that developers are creating step across…
The selfie, animated emoticons, Facebook timelines, Pinterest boards and Tumblrs are just the tip of the self-expression explosion, which is making content creators out of us all. But for those feeling creative but perhaps a little camera shy, Pocket Avatars …
Are your employees’ devices optimized for effective collaboration? The tech-savvy workforce is presenting enormous opportunities in the enterprise. Employees are increasingly aware of new technologies and how they can integrate them into their work. One of the biggest changes in enterprise technology is the advent of collaboration tools to support the demands of this emerging workforce. Tech-savvy workers demand flexibility and are challenging enterprise IT leaders to adopt solutions that take full advantage of their technical abilities.
Collaboration & Flexibility
Many companies have identified the benefits of flexible and remote working policies, including an increase in productivity and morale, and a decrease in overhead. In order to empower employees with the tools needed to be successful in these progressive working conditions, it’s incumbent on IT leaders to build device and software strategies that support employees.
Two of the most popular collaboration software solutions are Microsoft Lync and Skype. Skype is a familiar, robust video conferencing platform that provides employees with valuable face-to-face interactions without having to book a conference room. The software also offers file sharing and group chat functionality to support communication and collaboration, and its adoption among consumers makes it an ideal platform to communicate with clients. Microsoft Lync is a powerful enterprise-level collaboration tool that facilitates multi-party video conferencing, PowerPoint slide sharing, real-time polling, call recording, file sharing, and more.
Not All Devices Are Created Equal
Both of these solutions offer flexible ways for employees to collaborate and communicate from anywhere. Although both Skype and Microsoft Lync are available as standalone apps on many platforms, some features may not be available on all devices. In a recent comparison, Principled Technologies found that popular tablets such as the iPad Air and Samsung Galaxy Note 10.1 were unable to use key collaboration features like group video chat or multiple file transfers. The Intel-powered Microsoft Surface Pro 3, however, offered users a full-featured experience with both apps. Additionally, the Surface Pro 3 delivered perceptibly higher video quality than the competition during both Skype and Microsoft Lync meetings. For IT leaders looking to support collaboration in the enterprise, the message is clear: don’t let hardware be a roadblock to employee success. Give employees tools that work.
Click here to read the full Principled Technologies device comparison.
In what’s become known as the smart grid, big data analytics has been a hot topic for a while now. As a result of the significant evolution in big data technology, we’ve developed a number of different use cases at Intel. All have been driven by specific business needs, and more importantly, by the need to analyze the right data at the right time.
From mining smart meter data to enable better customer billing and engagement, to analyzing synchrophasor data from transmission lines to reduce losses in real time, big data analytics has become a vital part of overall business decision-making. For data collected over a period of time — say, a number of months — we often see analytics used as follows:
Descriptive Analytics: Taking data and analyzing it to see what happened. Use cases can range from calculating how much energy was consumed in order to generate a bill, to identifying the performance of a transformer over time.
Diagnostic Analytics: Analyzing and identifying why something happened. A prime use case would be identifying if there were any data patterns from a failed transformer. Do these patterns indicate degradation over time?
Predictive Analytics: Isolating a pattern and using it to determine what to watch out for. If you know what a failing transformer looks like, you can proactively check the data for all existing transformers of the same model. Are any of them showing signs of degradation?
Prescriptive Analytics: Developing strategies based on previously identified patterns. This moves the IT asset management strategy from a calendar-based approach to a more predictive one. Many would say that this is required if automated demand and response is to become a reality in the smart grid.
Pre-emptive Analytics: Focusing on the “what ifs”. This level of analysis is crucial when considering overall grid reliability and stability. With access to real-time data in the grid, you can run a real-time simulation to see the actual effects of an asset failure on all other assets.
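To make the progression concrete, here is a minimal sketch of how the descriptive, predictive, and prescriptive levels might look on the transformer example above. The readings, the least-squares trend test, and the slope threshold are all hypothetical illustrations, not a production asset-management model:

```python
# Hypothetical sketch: analytics levels applied to transformer
# temperature readings (all values and thresholds are invented).
from statistics import mean

def describe(readings):
    """Descriptive: what happened (average operating temperature)."""
    return mean(readings)

def degradation_slope(readings):
    """Diagnostic/predictive: is temperature trending upward?
    Simple least-squares slope per sample interval."""
    n = len(readings)
    x_bar, y_bar = (n - 1) / 2, mean(readings)
    num = sum((x - x_bar) * (y - y_bar) for x, y in enumerate(readings))
    den = sum((x - x_bar) ** 2 for x in range(n))
    return num / den

def flag_for_inspection(readings, slope_limit=0.5):
    """Prescriptive: shift from calendar-based to condition-based
    maintenance by flagging units whose trend exceeds a limit."""
    return degradation_slope(readings) > slope_limit

healthy = [61, 60, 62, 61, 60, 62]
degrading = [60, 62, 64, 67, 69, 73]
print(flag_for_inspection(healthy))    # False
print(flag_for_inspection(degrading))  # True
```

The prescriptive step is exactly the shift described above: maintenance is triggered by the observed trend rather than by the calendar.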
The dream is that all the data within the grid — regardless of type (structured or unstructured) — is available to be analyzed in real time in all sorts of ways to gain wonderful insight. Think of the technology shown in the movie “Minority Report.” The film shows amazing systems that allow people to manipulate vast volumes of data in 3D. With elaborate hand movements, the users are able to look for undiscovered linkages in the data. This kind of analysis could be easily monetized. While we still have a long way to go before this dream becomes a reality, today’s big data deployments give hope for that future.
With this large range of solution requirements in businesses today, Intel® Xeon® processors offer an ideal package of capabilities for computing and deploying different solutions. Technologies such as in-memory databases (like SAP HANA), Hadoop implementations from Cloudera, various real-time-optimized solutions, and full High Performance Computing (HPC) clusters all deliver solutions aimed at different use cases.
Deciding on the appropriate solution usually comes down to considering not only the use case, but also how much “real time” you require in your analytics. The time and latency parameters for a given analytics solution can vary widely. For example, the requirement to run a real-time synchrophasor analysis in milliseconds is vastly different from the need to generate a customized customer bill at the end of the month (think batch job).
What big data use cases have you implemented? Let us know.
Find Kevin on LinkedIn
Start a conversation with Kevin on Twitter
Other content from Kevin over at GridInsights
I spent a bit more than nine years of my Intel career working in Intel’s Russian offices in Moscow and Nizhny Novgorod. When I joined the company, more than 11 years ago now, I remember that my cube was near the …
The underpinning for most high-performing clouds is a virtualized infrastructure that pools resources for greater physical server consolidation and processor utilization. With the efficiencies associated with pooled resources, some organizations have considered their virtualized environment “cloud computing.” These organizations are selling themselves short. The full promise of cloud—efficiency, cost savings, and agility—can be realized only by automating and orchestrating how these pooled, virtualized resources are utilized.
Virtualization has been in data centers for several years as a successful IT strategy for consolidating servers by deploying more applications on fewer physical systems. The benefits include lower operational costs, reduced heat (from fewer servers), a smaller carbon footprint (less energy required for cooling), faster disaster recovery (virtual provisioning enables faster recovery), and more hardware flexibility.
Source: Why Build a Private Cloud? Virtualization vs. Cloud Computing. Intel (2014).
Cloud takes efficiency to the next level
A fully functioning cloud environment does much more. According to the National Institute of Standards and Technology (NIST), a fully functioning cloud has five essential characteristics:
- On-demand self-service. A consumer can unilaterally provision computing capabilities.
- Broad network access. Capabilities are available over the network and accessed through standard mechanisms (e.g., mobile phones, tablets, laptops, and workstations).
- Resource pooling. The provider’s computing resources are pooled to serve multiple consumers.
- Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward, commensurate with demand.
- Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (for example, storage, processing, bandwidth, and active user accounts).
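Of the five characteristics, measured service is the most directly mechanical: meter pooled resource usage per consumer, then report or charge on it. The sketch below illustrates the idea; the resource names, rates, and event format are invented for illustration and do not reflect any particular cloud provider's API:

```python
# Hypothetical sketch of NIST's "measured service" characteristic:
# per-tenant metering of pooled resources (names and rates invented).
from collections import defaultdict

RATES = {"cpu_hours": 0.05, "storage_gb": 0.02, "bandwidth_gb": 0.01}

def meter(usage_events):
    """Aggregate raw (tenant, resource, amount) events into totals."""
    totals = defaultdict(lambda: defaultdict(float))
    for tenant, resource, amount in usage_events:
        totals[tenant][resource] += amount
    return totals

def bill(totals):
    """Turn metered totals into a per-tenant charge."""
    return {tenant: round(sum(RATES[r] * amt for r, amt in res.items()), 2)
            for tenant, res in totals.items()}

events = [
    ("acme", "cpu_hours", 120.0),
    ("acme", "storage_gb", 500.0),
    ("globex", "cpu_hours", 40.0),
    ("globex", "bandwidth_gb", 300.0),
]
print(bill(meter(events)))  # per-tenant charges, e.g. {'acme': 16.0, ...}
```

The same metering data that drives billing also provides the transparency NIST calls for: both provider and consumer can see exactly what was used.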
Different but complementary strategies
A Forbes* article describes how these highly complementary strategies work together. Virtualization abstracts compute resources—typically as virtual machines (VMs)—with associated storage and networking connectivity. The cloud determines how those virtualized resources are allocated, delivered, and presented. While virtualization is not required to create a cloud environment, it does enable rapid scaling of resources, which is why the majority of high-performing clouds are built on virtualized infrastructures.
In other words, virtualization pools infrastructure resources and acts as a building block to enhance the agility and business potential of the cloud environment. It is the first step in building a long-term cloud computing strategy that could ultimately include integration with public cloud services—a hybrid deployment model—enabling even greater flexibility and scalability.
With a virtualized data center as its foundation, an on-premises, or private, cloud can make IT operations more efficient as well as increase business agility. IT can offer cloud services across the organization, serving as a broker with providers and avoiding some of the risks associated with shadow IT. Infrastructure as a service (IaaS) and the higher-level platform as a service (PaaS) delivery models are two of the services that can help businesses derive maximum value from the cloud.
Virtualization and cloud computing go hand in hand, with virtualization as a critical first step toward fully achieving the value of a private cloud investment and laying the groundwork for a more elastic hybrid model. Delivery of IaaS and PaaS creates exceptional flexibility and agility—offering enormous potential for the organization, with IT as a purveyor of possibility.
How did you leverage virtualization to evolve your cloud environment? Comment below to join the discussion.
#ITCenter #Virtualization #Cloud