Recent Blog Posts

Blueprint: SDN’s Impact on Data Center Power/Cooling Costs

This article originally appeared on Converge Digest on Monday, October 13, 2014


The growing interest in software-defined networking (SDN) is understandable. Compared to traditional static networking approaches, the inherent flexibility of SDN complements highly virtualized systems and environments that can expand or contract in an efficient, business-oriented way. That said, flexibility is not the main driver behind SDN adoption. Early adopters and industry watchers cite cost as a primary motivation.




SDN certainly offers great potential for simplifying network configuration and management, and raising the overall level of automation. However, SDN will also introduce profound changes to the data center. Reconfiguring networks on the fly introduces fluid conditions within the data center.



How will the more dynamic infrastructures impact critical data center resources – power and cooling?


In the past, 20 to 40 percent of data center resources were typically idle at any given time and yet still drawing power and dissipating heat. As energy costs have risen over the years, data centers have had to pay more attention to this waste and look for ways to keep the utility bills within budget. For example, many data centers have bumped up the thermostat to save on cooling costs.



These types of easy fixes, however, quickly fall short in the data centers associated with highly dynamic infrastructures. As network configurations change, so do the workloads on the servers, and network optimization must therefore take into consideration the data center impact.



Modern energy management solutions equip data center managers to solve this problem. They make it possible to see the big picture for energy use in the data center, even in environments that are continuously changing.  Holistic in nature, the best-in-class solutions automate the real-time gathering of power levels throughout the data center as well as server inlet temperatures for fine-grained visibility of both energy and temperature. This information is provided by today’s data center equipment, and the energy management solutions make it possible to turn this information into cost-effective management practices.



The energy management solutions can also give IT intuitive, graphical views of both real-time and historical data. The visual maps make it easy to identify and understand the thermal zones and energy usage patterns for a row or group of racks within one or multiple data center sites.



Collecting and analyzing this information makes it possible to evolve very proactive practices for data center and infrastructure management. For example, hot spots can be identified early, before they damage equipment or disrupt services. Logged data can be used to optimize rack configurations and server provisioning in response to network changes or for capacity planning.



Some of the same solutions that automate monitoring can also introduce control features. Server power capping can be introduced to ensure that any workload shifts do not result in harmful power spikes. Power thresholds make it possible to identify and adjust conditions to extend the life of the infrastructure.



To control server performance and quality of service, advanced energy management solutions also make it possible to balance power and server processor operating frequencies. The combination of power capping and frequency adjustments gives data center managers the ability to intelligently control and automate the allocation of server assets within a dynamic environment.
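The combination of power capping and frequency adjustment can be pictured as a simple control loop: throttle the processor down when measured power exceeds the cap, and step it back up when there is headroom. The toy sketch below illustrates the idea only; real solutions act through platform interfaces such as Intel Node Manager, and the frequency steps, wattages, and 10 percent headroom band here are all invented.

```python
# Illustrative toy only: a naive power-cap controller that steps processor
# frequency down when a server exceeds its cap and back up when it has
# comfortable headroom. All numbers are invented for the example.

FREQ_STEPS_MHZ = [1200, 1600, 2000, 2400]  # allowed operating frequencies


def adjust_frequency(power_w, cap_w, current_mhz):
    """Return the next frequency step given measured power vs. the cap."""
    i = FREQ_STEPS_MHZ.index(current_mhz)
    if power_w > cap_w and i > 0:
        return FREQ_STEPS_MHZ[i - 1]   # over cap: throttle down one step
    if power_w < 0.9 * cap_w and i < len(FREQ_STEPS_MHZ) - 1:
        return FREQ_STEPS_MHZ[i + 1]   # comfortable headroom: step back up
    return current_mhz                 # within band: hold steady


print(adjust_frequency(power_w=310, cap_w=300, current_mhz=2400))  # 2000
print(adjust_frequency(power_w=250, cap_w=300, current_mhz=2000))  # 2400
```

The headroom band keeps the controller from oscillating between steps when power hovers near the cap.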



Early deployments are validating the potential for SDN, but data center managers should take time to consider the indirect and direct impacts of this or any disruptive technology so that expectations can be set accordingly. SDN is just one trend that puts more pressure on IT to be able to do more with less.



Management expects to see costs go down; users expect to see 100% uptime for the services they need to do their jobs. More than ever, IT needs the right tools to oversee the resources they are being asked to deploy and configure more rapidly. They need to know the impacts of any change on the resource allocations within the data center.



IT teams planning for SDN must also consider the increasing regulations and availability restrictions relating to energy in various locations and regions. Some utility companies are already unable to meet the service levels required by some data centers, regardless of price. Over-provisioning can no longer be considered a practical safety net for new deployments.



Regular evaluations of the energy situation in the data center should be a standard practice for technology planning. Holistic energy management solutions give data center managers many affordable tools for those efforts. Today’s challenge is to accurately assess technology trends before any pilot testing begins, and leverage an energy management solution that can minimize the pain points of any new technology project such as SDN.

Read more >

Bringing Conflict-Free Technology to the Enterprise

In January 2014, Intel accomplished its goal to manufacture microprocessors that are DRC conflict-free for tantalum, tin, tungsten, and gold.


The journey towards reimagining the supply chain is long and arduous; it’s a large-scale, long-term commitment that demands precise strategy. For us, it was an extensive five-year plan of collecting and analyzing data, building an overarching business goal, educating and empowering supply chain partners, and implementing changes guaranteed to add business value for years to come. But we committed ourselves to these efforts because of the global impact and responsibility involved. As a result, the rewards have outweighed the work by leaps and bounds.


Cutting Ties with Conflict Minerals


The Democratic Republic of Congo (DRC) is the epicenter of one of the most brutal wars of our time; since 1998, 5.4 million lives have been lost to the ongoing conflict, 50 percent of them children five years old or younger. The economy of the DRC relies heavily on the mining sector, while the rest of the world relies heavily on the DRC’s diamonds, cobalt ore, and copper. The stark reality is that the war in the Eastern Congo has been fueled by the smuggling of coltan and cassiterite (ores of tantalum and tin, respectively). This means that most of the electronic devices we interact with on a daily basis are likely powered by conflict minerals.


One of the main reasons most are dissuaded from pursuing an initiative of this scope is that the supply chain represents one of the most decentralized units in the business. Demanding accountability from a complex system is a sizeable endeavor. Intel represents one of the first enterprise tech companies to pursue conflict-free materials, but the movement is starting to gain traction in the greater tech community as customers demand more corporate transparency.


Getting the Enterprise Behind Fair Tech


For Bas van Abel, CEO of Fairphone, there’s already sizeable consumer demand for fair technology; what remains is to prove that a viable market for it exists. Fairphone is a smartphone featuring an open design built with conflict-free minerals. The company also boasts fair wages and labor practices for the supply chain workforce. When van Abel crowdfunded the first prototype, his goal was to pre-sell 5,000 phones; within three weeks, he had sold 10,000. It’s only a matter of time before the awareness gains a foothold and the general public starts demanding conflict-free minerals.



We chose to bring the conflict-free initiative to our supply chain because funding armed groups in the DRC was no longer an option. Our hope is that other enterprises will follow suit in analyzing their own supply chains. If you want to learn more about how we embraced innovation by examining our own corporate responsibility and redefining how we build our products, you can read the full brief here.


To continue the conversation, please follow us at @IntelITCenter or use #ITCenter.

Read more >

SC14: Life Sciences Research Not Just for Workstations Anymore

As SC14 approaches, we have invited industry experts to share their views on high performance computing and life sciences. Below is a guest post from Ari E. Berman, Ph.D., Director of Government Services and Principal Investigator at BioTeam, Inc. Ari will be sharing his thoughts on high performance infrastructure and high speed data transfer during SC14 at the Intel booth (#1315) on Wednesday, Nov. 19, at 2 p.m. in the Intel Community Hub and at 3 p.m. in the Intel Theater.

There is a ton of hype these days about Big Data, both in what the term actually means, and what the implications are for reaching the point of discovery in all that data.


The biggest issue right now is the computational infrastructure needed to get to that mythical Big Data discovery place everyone talks about. Personally, I hate the term Big Data. The term “big” is very subjective and in the eye of the beholder. It might mean 3PB (petabytes) of data to one person, or 10GB (gigabytes) to someone else.


From my perspective, the thing that everyone is really talking about with Big Data is the ability to take the sum total of data that’s out there for any particular subject, pool it together, and perform a meta-analysis on it to more accurately create a model that can lead to some cool discovery that could change the way we understand some topic. Those meta-analyses are truly difficult and, when you’re talking about petascale data, require serious amounts of computational infrastructure that is tuned and optimized (also known as converged) for your data workflows. Without properly converged infrastructure, most people will spend all of their time just figuring out how to store and process the data, without ever reaching any conclusions.


Which brings us to life sciences. Until recently, life sciences and biomedical research could really be done using Excel and simple computational algorithms. Laboratory instrumentation really didn’t create that much data at a time, and it could be managed with simple, desktop-class computers and everyday computational methods. Sure, the occasional group was able to create enough data to require some mathematical modeling, advanced statistical analysis, or even some HPC, and molecular simulations have always required a lot of computational power. But in the last decade or so, the pace of advancement of laboratory equipment has left a large swath of biomedical research scientists overwhelmed by the amount of data being produced.


The decreased cost and increased speed of laboratory equipment, such as next-generation sequencers (NGS) and high-throughput high-resolution imaging systems, has forced researchers to become very computationally savvy very quickly. It now takes rather sophisticated HPC resources, parallel storage systems, and ultra-high speed networks to process the analytics workflows in life sciences. And, to complicate matters, these newer laboratory techniques are paving the way towards the realization of personalized medicine, which carries the same computational burden combined with the tight and highly subjective federal restrictions surrounding the privacy of personal health information (PHI).  Overcoming these challenges has been difficult, but very innovative organizations have begun to do just that.


I thought it might be useful to very briefly discuss the three major trends we see having a positive effect on life sciences research:


1. Science DMZs: There is a rather new movement towards the implementation of specialized research-only networks that prioritize fast and efficient data flow over security (while still maintaining security), also known as the Science DMZ model. These implementations are making it easier for scientists to get around tight enterprise networking restrictions without violating the security policies of their organizations, so that they can move their data effectively without upsetting their compliance officers.

2. Hybrid Compute/Storage Models: There is a huge push to move towards cloud-based infrastructure, but organizations are realizing that too much persistent cloud infrastructure can be more costly in the long term than local compute. The answer is the implementation of small local compute infrastructures to handle the really hard problems and the persistent services, hybridized with public cloud infrastructures that are orchestrated to be automatically brought up when needed and torn down when not, all managed by a software layer that sits in front of the backend systems. This model looks promising as the most cost-effective and flexible method that balances local hardware life-cycle issues and support personnel with the dynamic needs of scientists.

3. Commodity HPC/Storage: The biggest trend in life sciences research is the push towards the use of low-cost, commodity, white-box infrastructures for research needs. Life sciences has not (for the most part) reached the sophistication level that requires true capability supercomputing; thus, well-engineered capacity systems built from white-box vendors provide very effective computational and storage platforms for scientists to use for their research. This approach carries a higher support burden for the organization because many of the systems don’t come pre-built or supported overall, and thus require in-house expertise that can be hard to find and expensive to retain. But the cost balance of the support vs. the lifecycle management is worth it to most organizations.
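The hybrid compute model in point 2 can be reduced to a simple sizing rule: keep persistent work on local nodes and spin up cloud instances only for the overflow, tearing them down when demand falls. The sketch below is a toy illustration of that rule only; the capacities are invented, and a real orchestration layer would of course track job state, instance lifecycle, and cost.

```python
# Toy sketch of hybrid burst sizing: local capacity absorbs the baseline,
# and on-demand cloud nodes cover only the overflow. All numbers invented.

LOCAL_CAPACITY = 8        # jobs the local cluster can run at once
JOBS_PER_CLOUD_NODE = 4   # capacity of each on-demand cloud instance


def cloud_nodes_needed(queued_jobs):
    """How many cloud instances to run for jobs local capacity can't absorb."""
    overflow = max(0, queued_jobs - LOCAL_CAPACITY)
    # Round up: a partially filled node is still a whole node.
    return -(-overflow // JOBS_PER_CLOUD_NODE)


print(cloud_nodes_needed(6))   # 0 -> local cluster absorbs everything
print(cloud_nodes_needed(21))  # 4 -> 13 overflow jobs need 4 cloud nodes
```

The appeal of the model is visible even in this toy: cloud spend scales with overflow demand rather than with peak capacity.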


Biomedical scientific research is the latest in a string of scientific disciplines that require very creative solutions to their data generation problems. We are at the stage now where most researchers spend a lot of their time just trying to figure out what to do with their data in the first place, rather than getting answers. However, I feel that the field is at an inflection point where discovery will start pouring out as the availability of very powerful commodity systems and reference architectures come to bear on the market. The key for life sciences HPC is the balance between effectiveness and affordability due to a significant lack of funding in the space right now, which is likely to get worse before it gets better. But, scientists are resourceful and persistent; they will usually find a way to discover because they are driven to improve the quality of life for humankind and to make personalized medicine a reality in the 21st century.


What questions about HPC do you have?

Read more >

Empowering Field Workers Through Mobility

The unique — often adverse — working conditions facing utility field workers require unique mobility solutions. Not only do workers in these roles spend the majority of their time on the road, their work often takes them to places where the weather and terrain are less than hospitable. Despite all of the challenges facing this large mobile workforce, new tablets and other mobile devices are increasing productivity and reducing downtime for workers.


Field workers need a device that supports them whether they’re on the road or in the office. A recent RCR Wireless guest blogger summed up the needs of utility field workers by comparing them to the “front lines” of an organization:



Field workers are at the front lines of customer service … and therefore need to be empowered to better serve customers. They require mobile applications that offer easier access to information that resides in corporate data centers.


Previously, this “easy access” to data centers was limited to service center offices and some mobile applications and devices. Now, however, advances in tablet technology enable workers to take a mobile office with them everywhere they go.

Tough Tablets for Tough Jobs

With numerous tablets and mobile PCs on the market, it’s difficult to determine which mobile solution provides the best experience for the unique working conditions of field workers. In order to move through their work, these users need a device that combines durability, connectivity, security, and speed.


Principled Technologies recently reviewed an Apple iPad Air, a Samsung Galaxy Tab Pro 12.2, and a Motion Computing R12 to determine which tablet yields the most benefits for utility field workers. After comparing the devices’ performance in common scenarios field workers face on a daily basis, one tablet emerged as a clear favorite for this workforce.


While the iPad and Galaxy feature thin profiles and sleek frames, the Intel-powered Motion Computing R12 meets the U.S. military’s MIL-STD-810G standard for impact resistance as well as the international IP54 rating for dust and water resistance. The device also hit the mark with its biometric security features and hot-swappable 8-hour battery.


Communication between utility workers and dispatching offices is often the key to a successful work day. Among the three tablets, the Motion Computing R12 was the only device able to handle a Skype call and open and edit an Excel document simultaneously. This kind of multi-tasking ability works seamlessly on this tablet because it runs Microsoft Windows 8.1 natively on a fast Intel processor and also boasts 8 GB of RAM (compared to 1 GB in the iPad and 3 GB in the Galaxy).


At the end of the day, having the right device can lead to more work orders completed and better working conditions for field workers. Empowering field workers with the right tools can remove many of the technical hurdles they face, increasing productivity and reducing inefficiency.


To learn more about the Motion Computing R12, click here.

Read more >

Major win for global economy and technology consumers at APEC

Intel statement regarding Information Technology Agreement expansion: We applaud the work that the United States Trade Representative (USTR) has done at the Asia-Pacific Economic Cooperation (APEC) to support America’s technology industry. The breakthrough bilateral agreement between the United States and … Read more >

The post Major win for global economy and technology consumers at APEC appeared first on Policy@Intel.

Read more >

Smart Cities, Self-Driving Cars and Smart Factories: The Internet of Things “Mash-Up” Post

The Internet of Things (IoT) is fostering thousands of clean tech jobs, advanced driver assistance systems, and next-gen smart manufacturing, and Intel is helping engineers deliver innovative solutions that will lead to a better quality of life now and for … Read more >

The post Smart Cities, Self-Driving Cars and Smart Factories: The Internet of Things “Mash-Up” Post appeared first on IoT@Intel.

Read more >

Get There Faster: IoT and ECU Consolidation Drive Auto Innovation

Cars today have more computing capabilities than ever before, using dozens of interconnected electronic control units (ECUs) for enhanced vehicle management, premium entertainment, and remote telematics. ECUs provide outstanding results for consumers, but can negatively impact both space and energy … Read more >

The post Get There Faster: IoT and ECU Consolidation Drive Auto Innovation appeared first on IoT@Intel.

Read more >

Optimizing the Smart Factory with IoT and Big Data: Part 2 [Video]

In the competitive industrial and manufacturing arena, the strongest and smartest companies are seeking intelligent ways to collect, analyze, and make decisions about big data by implementing scalable Internet of Things solutions. In collaboration with National Instruments, Intel engineers are … Read more >

The post Optimizing the Smart Factory with IoT and Big Data: Part 2 [Video] appeared first on IoT@Intel.

Read more >