Recent Blog Posts

OpenStack® Kilo Release is Shaping Up to Be a Milestone for Enhanced Platform Awareness

By: Adrian Hoban

 

The performance needs of virtualized applications in the telecom network are distinctly different from those in the cloud or in the data center.  These NFV applications are implemented on a slice of a virtual server and yet need to match the performance that is delivered by a discrete appliance where the application is tightly tuned to the platform.

 

The Enhanced Platform Awareness initiative that I am a part of is a continuous program to enable fine-tuning of the platform for virtualized network functions. This is done by exposing the processor and platform capabilities through the management and orchestration layers. When a virtual network function is instantiated by an Enhanced Platform Awareness enabled orchestrator, the application requirements can be more efficiently matched with the platform capabilities.

 

Enhanced Platform Awareness is composed of several open source technologies that the orchestration layer can treat as “tuning knobs,” adjusted to meaningfully improve a range of packet-processing and application performance parameters.

 

These technologies have been developed and standardized through a two-year collaborative effort in the open source community.  We have worked with the ETSI NFV Performance Portability Working Group to refine these concepts.

 

At the same time, we have been working with developers to integrate the code into OpenStack®. Some of the features are available in the OpenStack Juno release, but I anticipate a more complete implementation will be a part of the Kilo release that is due in late April 2015.

 

How Enhanced Platform Awareness Helps NFV to Scale

In cloud environments, virtual application performance can often be increased with a scale-out strategy, such as increasing the number of VMs available to the application. For virtualized telecom networks, however, scaling out alone may not achieve the desired improvement in network performance.

 

Scaling out an NFV workload does not guarantee improvement in all of the traffic characteristics that matter, such as latency and jitter, and these are essential to the predictable service and application performance that network operators require. Using Enhanced Platform Awareness, we aim to address both the performance and the predictability requirements with technologies such as:

 

  • Single Root I/O Virtualization (SR-IOV): SR-IOV divides a PCIe physical function into multiple virtual functions (VFs), each of which can have its own bandwidth allocation. When a virtual machine is assigned its own VF, it gains a high-performance, low-latency data path to the NIC.
  • Non-Uniform Memory Access (NUMA): With a NUMA design, the memory allocation process for an application prioritizes the highest-performing memory, which is local to a processor core. In the case of Enhanced Platform Awareness, OpenStack® will be able to configure VMs to use CPU cores from the same processor socket and to choose the optimal socket based on the locality of the NIC device that provides the data connectivity for the VM.
  • CPU Pinning: With CPU pinning, a process or thread is given an affinity to one or more cores. In a 1:1 pinning configuration between virtual CPUs and physical CPUs, predictability is introduced into the system because the host and guest schedulers can no longer move workloads around, which facilitates other efficiencies such as improved cache hit rates.
  • Huge page support: Provides page table entry sizes of up to 1 GB, which reduces I/O translation look-aside buffer (IOTLB) misses and improves networking performance, particularly for small packets. (A sketch of how these features are requested from OpenStack follows this list.)
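From the orchestrator's point of view, most of these capabilities are requested through Nova flavor extra specs. The snippet below is a minimal sketch rather than anything from the whitepaper: the flavor name, sizes, credentials, and endpoint are placeholder assumptions, while the extra-spec keys are the ones Nova uses for huge pages, CPU pinning, and guest NUMA topology (some of them only from the Kilo release onward).

```python
# Illustrative only: request EPA capabilities via Nova flavor extra specs.
from novaclient import client as nova_client

# Hypothetical credentials and endpoint for a Juno/Kilo-era deployment.
nova = nova_client.Client("2", "admin", "secret", "demo",
                          "http://controller:5000/v2.0")

flavor = nova.flavors.create(name="vnf.epa", ram=8192, vcpus=4, disk=20)
flavor.set_keys({
    "hw:mem_page_size": "1048576",  # back guest memory with 1-GB huge pages (value in KiB)
    "hw:cpu_policy": "dedicated",   # pin each vCPU to a dedicated host core
    "hw:numa_nodes": "1",           # confine the guest to a single NUMA node
})
# SR-IOV is requested separately, by booting the instance with a Neutron
# port created with vnic_type=direct.
```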

 

A more detailed explanation of these technologies and how they work together can be found in a recently posted paper that I co-authored, titled A Path to Line-Rate-Capable NFV Deployments with Intel® Architecture and the OpenStack® Juno Release.

 

 

Virtual BNG/BRAS Example

The whitepaper also has a detailed example of a simulation we conducted to demonstrate the impact of these technologies.

 

We created a VNF with the Intel® Data Plane Performance Demonstrator (DPPD) as a tool to benchmark platform performance under simulated traffic loads and to show the impact of adding Enhanced Platform Awareness technologies. The DPPD was developed to emulate many of the functions of a virtual broadband network gateway / broadband remote access server.

 

We used the Juno release of OpenStack® for the test, patched with huge page support. A number of manual steps were applied to simulate capabilities that should be available in the Kilo release, such as CPU pinning and I/O-aware NUMA scheduling.
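For reference, the CPU pinning portion of those manual steps can be approximated on a Juno host with libvirt. The sketch below is only a hypothetical illustration of that kind of step; the domain name and core numbers are made up, and the Kilo release is expected to handle this automatically through flavor extra specs.

```python
# Hypothetical manual pinning step: pin each guest vCPU of the DPPD VM to a
# dedicated host core using libvirt's virsh.
import subprocess

DOMAIN = "instance-00000042"                 # assumed libvirt domain name
PINNING = {0: "2", 1: "3", 2: "4", 3: "5"}   # vCPU -> host core

for vcpu, host_core in PINNING.items():
    subprocess.run(["virsh", "vcpupin", DOMAIN, str(vcpu), host_core], check=True)
```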

 

The results shown in the figure below are the relative gains in data throughput, expressed as a percentage of 10 Gbps, achieved through the use of these EPA technologies. Latency and packet delay variation are important characteristics for BNGs. Another study of this sample BNG includes results related to these metrics: Network Function Virtualization: Quality of Service in Broadband Remote Access Servers with Linux* and Intel® Architecture.

 


Cumulative performance impact on Intel® Data Plane Performance Demonstrators (Intel® DPPD) from platform optimizations

 

 

The order in which the features were applied impacts the incremental gains so it is important to consider the results as a whole rather than infer relative value from the incremental increases. There are also a number of other procedures that you should read more about in the whitepaper.

 

Two years of hard work by the open source community have brought us to the verge of a very important and fundamental step forward for delivering carrier-class NFV performance. Be sure to check back here for more of my blogs on this topic, and you can also follow the progress of Kilo at the OpenStack Kilo Release Schedule website.

Read more >

Bring Your Own Device in EMEA – Part 2 – Finding the Balance

In my second blog focusing on Bring Your Own Device (BYOD) in EMEA I’ll be taking a look at the positives and negatives of introducing a BYOD culture into a healthcare organisation. All too often we hear of blanket bans on clinicians and administrators using their personal devices at work, but with the right security protocols in place and enhanced training there is a huge opportunity for BYOD to help solve many of the challenges facing healthcare.

 

Much of the negativity surrounding BYOD stems from the impact that data breaches in EMEA have on both patients (privacy) and healthcare organisations (business/financial). While I’d agree that the headline numbers outlined in my first blog are alarming, they do need to be considered in the context of the size of the wider national healthcare systems.

 

A great example I’ve seen of an organisation seeking to operate a more efficient health service through the implementation of BYOD is the Madrid Community Health Department in Spain. Intel and security expert Stack Overflow assessed several mobile operating systems with a view to supporting BYOD for physicians in hospitals within their organisation. I highly recommend you read more about how Madrid Community Health Department is managing mobile with Microsoft Windows-based tablets.

 

 

The Upside of BYOD

There’s no doubt that BYOD is a fantastic enabler in modern healthcare systems. But why? We’ll look at some best practice tips in a later blog, but suffice it to say that much of the list below should be underpinned by a robust but flexible BYOD policy, an enhanced level of staff training, and a holistic, multi-layered approach to security.

 

1) Reduces Cost of IT

Perhaps the most obvious benefit to healthcare organisations is a reduction in the cost of purchasing IT equipment. Not only that, it’s likely that employees will take greater care of their own devices than they would of a corporate device, thus reducing wastage and replacement costs.

 

2) Upgrade and Update

Product refresh rates are likely to be more rapid for personal devices, enabling employees to take advantage of the latest technologies such as enhanced encryption and improved processing power. And with personal devices we also expect individuals to update software/apps more regularly, ensuring that the latest security updates are installed.

 

3) Knowledge & Understanding

Training employees on new devices or software can be costly and a significant drain on time, not to mention the difficulty of scheduling that time with busy clinicians and healthcare administrators. I believe that allowing employees to use the personal, everyday device they are already familiar with reduces the need for device-level training. There may still be a requirement for app-level training, but that depends very much on the intuitiveness of the apps and services being used.

 

4) More Mobile Workforce

The holy grail of a modern healthcare organisation – a truly mobile workforce. My points above all lead to clinicians and administrators being equipped with the latest mobile technology to be able to work anytime and anywhere to deliver a fantastic patient experience.

 

 

The Downside of BYOD

As I’ve mentioned previously, much of the comment around BYOD is negative and very much driven by headline news of medical records lost or stolen, the ensuing privacy ramifications and significant fines for healthcare organisations following a data breach.

 

It would be remiss of me to ignore the flip side of the BYOD story, but I would hasten to add that much of the risk associated with the list below can be mitigated with a multi-layered approach: one that combines multiple technical safeguards with administrative safeguards such as policy, training, audit and compliance, and with physical safeguards such as locks and the secure use, transport and storage of devices.


1)  Encourages a laissez-faire approach to security

We’ve all heard the phrase ‘familiarity breeds contempt’ and there’s a good argument to apply this to BYOD in healthcare. It’s all too easy for employees to use some of the same workarounds used in their personal life when it comes to handling sensitive health data on their personal device. The most obvious example is sharing via the multitude of wireless options available today.


2) Unauthorised sharing of information

Data held at rest on a personal device is at high risk of loss or theft, and consequently also at high risk of unauthorised access or breach. Consumers are increasingly adopting cloud services to store personal information, including photos and documents.

 

When a clinician or healthcare administrator is in a pressured working situation with their focus primarily on the care of the patient there is a temptation to use a workaround – the most obvious being the use of a familiar and personal cloud-based file sharing service to transmit data. In most cases this is a breach of BYOD and wider data protection policies, and increases risk to the confidentiality of sensitive healthcare data.


3) Loss of Devices

The loss of a personal mobile device can be distressing for the owner, but it’s likely that they’ll simply upgrade or purchase a new model. Loss of personal data is quickly forgotten, but loss of healthcare data on a personal device can have far-reaching and costly consequences, both for patients whose privacy is compromised and for the healthcare organisation that employs the healthcare worker. An effective BYOD policy should explicitly address employees’ responsibilities for securing the devices they use, using them responsibly, and reporting any loss or theft in a timely manner.


4) Integration / Compatibility

I speak regularly with healthcare organisations and I know that IT managers see BYOD as a mixed blessing. On the one hand the cost-savings can be tremendous but on the other they are often left with having to integrate multiple devices and OS into the corporate IT environment. What I often see is a fragmented BYOD policy which excludes certain devices and OS, leaving some employees disgruntled and feeling left out. A side-effect of this is that it can lead to sharing of devices which can compromise audit and compliance controls and also brings us back to point 2 above.

 

These are just some of the positives and negatives around implementing BYOD in a healthcare setting. I firmly sit on the positive side of the fence when it comes to BYOD, and here at Intel Security we have solutions to help you overcome the challenges in your organisation, such as Multi-Factor Authentication (MFA) and Solid State Drives (SSDs) with built-in encryption, which complement the administrative and physical safeguards you use in your holistic approach to managing risk.

 

Don’t forget to check out the great example from the Madrid Community Health Department to see how our work is having a positive impact on healthcare in Spain. We’d love to hear your own views on BYOD so do leave us a comment below or if you have a question I’d be happy to answer it.

 

 

David Houlding, MSc, CISSP, CIPP is a Healthcare Privacy and Security lead at Intel and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@davidhoulding)

Check out his previous posts

Read more >

Tackling Information Overload in Industrial IoT Environments

Feeling inundated by too much industrial IoT data? Well, you’re not alone. According to an Economist Intelligence Unit report, most manufacturers are experiencing  information overload due to the increasing volume of data generated by automated processes. Senior factory executives in … Read more >

The post Tackling Information Overload in Industrial IoT Environments appeared first on IoT@Intel.

Read more >

Ready, Set, Action. Enhanced Platform Awareness in OpenStack for Line Rate NFV

By: Frank Schapfel

 

One of the challenges in deploying Network Functions Virtualization (NFV) is creating the right software management of the virtualized network. There are differences between managing an IT Cloud and a Telco Cloud. IT Cloud providers take advantage of centralized, standardized servers in large-scale data centers, and IT Cloud architects aim to maximize server utilization (efficiency) and automate operations management. Telco Cloud application workloads, in contrast, come with real-time constraints, government regulatory constraints, and network setup and teardown constraints. New tools are needed to build a Telco Cloud to these requirements.

 

OpenStack is the open source community that has been developing IT Cloud orchestration management since 2010. The Telco service provider community of end users, telecom equipment manufacturers (TEMs), and software vendors has rallied around adapting OpenStack cloud orchestration for the Telco Cloud, and over the last few releases of OpenStack the industry has been shaping and delivering Telco Cloud-ready solutions. For now, let’s just focus on the real-time constraints. For the IT Cloud, the data center is viewed as a large pool of compute resources that needs to operate at maximum utilization, even to the point of over-subscribing server resources; waiting a few milliseconds is imperceptible to the end user. A network, on the other hand, is real-time sensitive and therefore cannot tolerate over-subscription of resources.

 

To make OpenStack more Telco Cloud friendly, Intel contributed the concept of “Enhanced Platform Awareness” to OpenStack. Enhanced Platform Awareness in OpenStack offers fine-grained matching of virtualized network resources to server platform capabilities. Having a fine-grained view of the server platform allows the orchestrator to assign the Telco Cloud application workload to the best virtual resource. The orchestrator needs NUMA (Non-Uniform Memory Access) awareness so that it can understand how the server resources are partitioned and how CPUs, I/O devices, and memory are attached to sockets. For instance, when workloads need line-rate bandwidth, high-speed memory access is critical, and huge page support in the latest Intel® Xeon® processor E5-2600 v3 family helps meet that need.
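To make the idea of fine-grained matching concrete, the sketch below shows the kind of placement decision an EPA-aware orchestrator has to make: choose a NUMA node that has the relevant NIC attached locally, enough free 1-GB huge pages, and enough free cores for pinning. This is an illustrative toy, not OpenStack scheduler code; the data structures and names are assumptions.

```python
# Toy illustration of NUMA-aware placement for a line-rate workload.
from typing import Dict, List, Optional

def pick_numa_node(nodes: List[Dict], needed_1g_pages: int,
                   needed_cores: int, nic: str) -> Optional[int]:
    """Return the id of a NUMA node with the NIC attached locally, enough
    free 1-GB huge pages, and enough free cores for 1:1 pinning."""
    for node in nodes:
        if (nic in node["local_nics"]
                and node["free_1g_pages"] >= needed_1g_pages
                and len(node["free_cores"]) >= needed_cores):
            return node["id"]
    return None  # no suitable node on this host: try the next candidate host

numa_nodes = [
    {"id": 0, "local_nics": ["eth2"], "free_1g_pages": 8, "free_cores": [2, 3, 4, 5]},
    {"id": 1, "local_nics": ["eth3"], "free_1g_pages": 2, "free_cores": [10, 11]},
]
print(pick_numa_node(numa_nodes, needed_1g_pages=4, needed_cores=4, nic="eth2"))  # -> 0
```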

 

Now in action at the Oracle Industry Connect event in Washington, DC, Oracle and Intel are demonstrating this collaboration using Enhanced Platform Awareness in OpenStack. The Oracle Communications Network Service Orchestration solution uses OpenStack Enhanced Platform Awareness to achieve carrier-grade performance for the Telco Cloud. Virtualized network functions are assigned resources based on their need for huge page access and NUMA awareness; other cloud workloads, which are not network functions, are not assigned specific server resources.

 

The good news: the Enhanced Platform Awareness contributions are already upstreamed in the OpenStack repository and will be in the OpenStack Kilo release later this year. At Oracle Industry Connect this week there is a keynote, panel discussions, and demos that go even further “under the hood.” And if you want even more detail, there is a new Intel white paper: A Path to Line-Rate-Capable NFV Deployments with Intel® Architecture and the OpenStack® Juno Release.

 

Adapting OpenStack for Telco Cloud is happening now. And Enhanced Platform Awareness is finding its way into a real, carrier-grade orchestration solution.

Read more >

How can you afford to NOT use SSDs?

“Intel SSDs are too expensive!”

“The performance of an SSD won’t be noticed by my users.”

“Intel SSDs will wear out too fast!”

“I don’t have time to learn about deploying SSDs!”

 

I’ve heard statements like this for years, and do I ever have a story to share – the story of Intel’s adoption of Intel® Solid-State Drives (Intel® SSDs).

 

Before I tell you more, I would like to introduce myself.  I am currently a Client SSD Solutions Architect in Intel’s Non-Volatile Memory Solutions Group (the SSD group).   Prior to joining this group last year, I was in Information Technology (IT) at Intel for 26 years.  The last seven years in IT were spent in a client research and pathfinding role where I investigated new technologies and how they could be applied inside of Intel to improve employee productivity.

 

I can still remember the day in late 2007 when I first plugged in an Intel SSD into my laptop.  I giggled.  A lot.  And that’s what sparked my passion for SSDs.  I completed many lab tests, research efforts and pilot deployments in my role, which led to the mainstream adoption of Intel SSDs within Intel.  That’s the short version.  More detail is documented in a series of white papers published through our IT@Intel Program.  If you’d like to read more about our SSD adoption journey, here are the papers:

 

 

I’ve answered many technical and business-related questions about SSDs over the years. Questions, and assumptions, like the four at the top of this blog, and perhaps a hundred others. But the question I’ve been asked more than any other is, “how can you afford to deploy SSDs when they cost so much compared to hard drives?” I won’t go into the detail in this introductory blog, but I will give you a hint, point you to our Total Cost of Ownership estimator, and ask, “how can you afford to NOT use SSDs?”
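As a hint of what the estimator captures, here is a deliberately simplified back-of-the-envelope sketch. It is not Intel’s TCO estimator, and every number in it is a placeholder assumption; the point is only that purchase price is one term among several.

```python
# Toy TCO comparison: purchase price plus expected replacement cost,
# minus the value of user time saved. All figures are placeholders.
def drive_tco(purchase_price, annual_failure_rate, replacement_cost,
              hours_saved_per_year, labor_rate, years=4):
    expected_replacements = annual_failure_rate * years * replacement_cost
    productivity_value = hours_saved_per_year * years * labor_rate
    return purchase_price + expected_replacements - productivity_value

hdd = drive_tco(purchase_price=60,  annual_failure_rate=0.05, replacement_cost=300,
                hours_saved_per_year=0.0, labor_rate=50)
ssd = drive_tco(purchase_price=180, annual_failure_rate=0.01, replacement_cost=300,
                hours_saved_per_year=0.5, labor_rate=50)
print(f"HDD TCO: ${hdd:.0f}   SSD TCO: ${ssd:.0f}")  # sticker price is not the whole story
```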

 

I plan to cover a variety of client SSD topics in future blogs.  I have a lot of info that I would like to share about the adoption of SSDs within Intel, and about the technology and products in general.  If you are interested in a specific topic, please make a suggestion and I will use your input to guide future blogs.

 

Thanks for your time!

 

Doug
intel.com/ssd

Read more >

Keeping it Simple in Healthcare Cybersecurity

“Any fool can make something complicated. It takes a genius to make it simple.” – Woody Guthrie, musician

 

The proliferation of electronic systems and devices in healthcare is a good example of the tendency of systems to increase in complexity over time, and that complexity has taken its toll on our ability to adequately secure data. In 2014, the number of people in California alone whose electronic protected health information (ePHI) was exposed by a breach increased 600 percent. The national cost of recovering from a breach averaged $5.4 million, not including the harm from loss of consumer trust.

 

With so much at risk, security is no longer just an IT issue; it is a significant business and operational concern. The growing complexity of healthcare IT demands a simpler approach that will enable organizations to address security realistically. As Harvard surgeon Atul Gawande explained in his 2009 book The Checklist Manifesto, a checklist can help people simplify the steps in a complex procedure, like the one he described for reducing central line infections at Johns Hopkins University. That simple, five-step checklist for central line insertion, including the enforcement and monitoring of hand washing, helped prevent 43 infections and 8 ICU deaths, saving the hospital $2 million. Enforcing and monitoring hand washing significantly increased compliance with basic hygiene and was important in reducing infection rates.

 

Use checklists

If healthcare organizations used a checklist of basic security hygiene, similar to the one Gawande wrote about, many breaches of privacy could be avoided. But, just as hand washing, though cheap and effective at preventing infection, often goes unenforced, healthcare organizations often neglect the bedrock of a good security posture: encryption, identity and access management platforms, risk analyses, and breach remediation and response plans.

 

While organizations understand that these activities are important, many lack operational follow-through. For example, less than 60 percent of providers have completed a risk assessment on their newest connected and integrated technologies, and only 30 percent are confident that their business associates can detect patient data loss or theft or perform a risk assessment. Barely 75 percent of providers use any form of encryption, despite the fact that it confers immunity from the requirement to report ePHI breaches. And according to Dell’s 2014 Global Technology Adoption Index, only one in four organizations surveyed actually has a plan in place for all types of security breaches. Many healthcare organizations are just as vulnerable as Community Health Systems was in early 2014, or insurer Anthem was at the beginning of 2015.

 

In the face of multiple incentives to encrypt data and manage authorizations and data access, why do so many organizations ignore these most basic of measures?

 

The answer is complexity. In a 2010 survey, IBM’s Institute for Business Value identified “the rapid escalation of complexity” as a top challenge for CEOs, and most of those polled did not feel adequately prepared to confront it. To better manage the chaos, healthcare CIOs can look to their own clinical departments for examples of significant quality improvements achieved by establishing a checklist of behaviors and making people accountable for sticking to the list. The Royal Australian College of General Practitioners (RACGP), for instance, has adopted a 12-point framework to help physician practices assess their security and comply with security best practices. These guidelines are tightly integrated into areas such as process development, risk analysis, governance and building a culture of security.

 

Simplified playbook

Dell security experts have also written recently on the importance of a simplified playbook approach to security, focusing on four areas: (1) preventing, (2) detecting, (3) containing, and (4) eradicating breaches. By implementing a framework based on these four simple principles, healthcare organizations can not only address the technical and hardware components of security, but also address the “human element” that is responsible for many breaches, including human error and malicious insiders. Within these four strategic areas of focus, healthcare organizations can incorporate checklists of the core tactics that will support those areas. For instance, many of the activities in this process will take place to prevent a breach in the first place, and should limit employee negligence. Thus, to prevent a breach, a checklist similar to the following should be implemented, depending on the organization’s unique needs:

 

1. Automatically encrypt all protected data from the point of creation, and as it moves, including movement into the cloud (a minimal code sketch follows this list).

2. Implement an effective identity and access management solution. Include clear direction on access rights, password maintenance and management, remote access controls, and auditing and appropriate software configuration.

3. Regularly assess security risks, using a framework such as NIST, and include threat analysis, reporting schedule and data breach recording procedures.  Ensure risk remediation efforts have a high priority.

4. Ensure the education of staff on security “hand washing” behaviors, including password, internet and email usage practices.

5. Monitor to detect threats in real-time.
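To make item 1 slightly more concrete, here is a minimal sketch of encrypting a record before it leaves the device, assuming the open source "cryptography" Python package. The key handling, file name, and payload are placeholders, not a recommended architecture; real deployments need centralized key management and auditing.

```python
# Minimal illustration of checklist item 1: encrypt protected data at the
# point of creation so only ciphertext ever moves to disk or the cloud.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, issued and escrowed by a key manager
cipher = Fernet(key)

record = b'{"patient_id": "12345", "diagnosis": "..."}'  # hypothetical ePHI payload
token = cipher.encrypt(record)     # authenticated encryption of the record

with open("record.enc", "wb") as f:  # placeholder destination
    f.write(token)

assert cipher.decrypt(token) == record  # authorized access requires the managed key
```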

 

Similar checklists can also be created for the other three areas mentioned above. Healthcare organizations can simplify even further by vertically integrating security add-ons and centralizing and hardening security into the IT infrastructure. This includes embedding security in firewalls, servers and data centers; integrating secure messaging with next generation firewalls; and encrypting data automatically as it scales and moves into the cloud.

 

We can improve healthcare cybersecurity by focusing on a checklist of simple practices that have the greatest impact. And simplicity, as Leonardo da Vinci once stated, “is the ultimate sophistication.”

 

What questions about cybersecurity do you have?

 

Join Dell and Intel at HIMSS booth #955 on April 14 at 11 am CT for an interactive tweet-up discussing relevant topics in healthcare security. Register for this exclusive event here.


Frank Negro is Global Practice Leader, Strategy and Planning, Healthcare and Life Sciences Solutions at Dell Services

Read more >

Synchrophasors: An Opportunity Wrapped in an Enigma

The Western Interconnection’s shiny new synchrophasor grid – 450+ shoebox-sized phasor measurement units (PMUs), each giving system operators 30 to 60 snapshots of voltage, current, angle and frequency per second – has radically transformed system operators’ ability to understand real-time … Read more >

The post Synchrophasors: An Opportunity Wrapped in an Enigma appeared first on Energy.

Read more >

Optimize Your Infrastructure with the Intel Xeon D

In case you missed it, we just celebrated the launch of the Intel Xeon processor D product family. And if you did miss it, I’m here to give you all the highlights of an exciting leap in enterprise infrastructure optimization, from the data center to the network edge.

 

The Xeon D family is Intel’s 3rd generation 64-bit SoC and the first based on Intel Xeon processor technology. The Xeon D weaves the performance of Xeon processors into a dense, lower-power system-on-a-chip (SoC). It suits a unique variety of use cases, ranging from dynamic web serving and dedicated web hosting, to warm storage and network routing.

 

Secure, Scalable Storage

 

The Xeon D’s low energy consumption and extremely high performance make it a cost-effective, scalable solution for organizations looking to take their data centers to the next level. By dramatically reducing heat and electricity usage, this product family offers an unrivaled low-powered solution for enterprise server environments.

 

Server systems powered by the new Intel Xeon D processors offer fault-tolerant, stable storage platforms that lend themselves well to the scalability and speed clients demand. Large enterprises looking for low-power, high-density server processors for their data stacks should keep an eye on the Xeon D family, as these processors offer solid performance per watt and unparalleled security baked right into the hardware.

 

Cloud Service Providers Take Note

 

1&1, Europe’s leading web hosting service, recently analyzed Intel’s new Xeon D processor family for different cloud workloads such as storage or dedicated hosting. The best-in-class service utilizes these new processors to offer both savings and stability to their customers. According to 1&1’s Hans Nijholt, the technology has a serious advantage for enterprise storage companies as well as SMB customers looking to pass on savings to customers:

 

“The [Xeon D’s] energy consumption is extremely low and it gives us very high performance. Xeon D has a 4x improvement in memory and lets us get a much higher density in our data center, combined with the best price/performance ratio you can offer.”

 

If you’re looking to bypass existing physical limitations, sometimes it’s simply a matter of taking a step back, examining your environment, and understanding that you have options outside expansion. The Xeon D is ready to change your business — are you ready for the transformation?

 

We’ll be revealing more about the Xeon D at World Hosting Days; join us as we continue to unveil the exciting capabilities of our latest addition to the Xeon family!

 

If you’re interested in learning more about what I’ve discussed in this blog, tune in to the festivities and highlights from CeBIT 2015.

 

To continue this conversation, connect with me on LinkedIn or use #ITCenter.

Read more >

Intel Was Doing Software-Defined Infrastructure Before It Was Cool

Following Intel’s lead – decoupling software from hardware and automating IT and business processes — can help IT departments do more with less.

 

When I think back to all the strategic decisions that Intel IT has made over the last two decades, I can think of one that set the stage for all the rest: our move in 1999 from RISC-based computing systems to industry-standard Intel® architecture and Linux for our silicon design workloads. That transition, which took place over a 5-year period, helped us more than double our performance while eliminating approximately $1.4 billion in IT costs.

 

While this may seem like old news, it really was the first step in developing a software-defined infrastructure (SDI) – before it was known as such – at Intel. We solidified our compute platform with the right mix of software on the best hardware to get our products out on time.

 

Today, SDI has become a data center buzzword and is considered one of the critical next steps for the IT industry as a whole.

Figure: Intel IT storage capacity growth


Why is SDI (compute, storage, and network) so important?

 

SDI is the only thing that is going to enable enterprise data centers to meet spending constraints, maximize infrastructure utilization, and keep up with demand that increases dramatically every year.

 

Here at Intel, compute demand is growing at around 30 percent year-over-year. And as you can see from the graphic, our storage demand is also growing at a phenomenal rate.

 

But our budget remains flat or has even decreased in some cases.

 

Somehow, we have to deliver ever-increasing services without increasing cost.


What’s the key?

 

Success lies in decoupling hardware and software.

 

As I mentioned, Intel decoupled hardware and software in our compute environment nearly 16 years ago, replacing costly proprietary solutions that tightly coupled hardware and software with industry-standard x86 servers and the open source Linux operating system. We deployed powerful, performance-optimized Intel® Xeon® processor-based servers for delivering throughput computing. We followed this by adding performance-centric higher-clock, higher-density Intel Xeon processor-based servers to accelerate silicon design TTM (time to market) while significantly reducing EDA  (Electronic Design Automation) application license cost — all of which resulted in software-defined compute capabilities that were powerful but affordable.

 

Technology has been continuously evolving, enabling us to bring a similar level of performance, availability, scalability, and functionality with open source, software-based solutions on x86-based hardware to our storage and network environments.

 

As we describe in a new white paper, Intel IT is continuously progressing and transforming Intel’s storage and network environments from proprietary fixed-function solutions to standard, agile, and cost-effective systems.

 

We are currently piloting software-defined storage and identifying quality gaps to improve the capability for end-to-end deployment for business critical use.

 

We transitioned our network from proprietary to commodity hardware, resulting in more than a 50 percent reduction in cost. We are also working with the industry to adopt and certify an open-source-based network software solution that we anticipate will drive down per-port cost by an additional 50 percent. So far, our software-defined network deployment is limited to a narrow virtualized environment within our Office and Enterprise private cloud.


But that’s not enough…

 

Although decoupling hardware and software is a key aspect of building SDI, we must do more. Our SDI vision, which began many years ago, includes automated orchestration of the data center infrastructure resources. We have already automated resource management and federation at the global data center level. Our goal is total automation of IT and business processes, to support on-demand, self-service provisioning, monitoring, and management of the entire compute/network/storage infrastructure. Automation will ensure that when a workload demand occurs, it lands on the right-sized compute and storage so that the application can perform at the needed level of quality of service without wasting resources.
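As a simple illustration of the right-sizing step that automation has to get right, the sketch below picks the smallest instance template that satisfies a workload's demand. It is a toy, not Intel IT's orchestration code, and the template names and sizes are assumptions.

```python
# Toy right-sizing: choose the smallest template that meets the demand,
# so quality of service is met without over-allocating resources.
FLAVORS = [  # ordered smallest to largest
    {"name": "small",  "vcpus": 2,  "ram_gb": 8,   "storage_gb": 100},
    {"name": "medium", "vcpus": 8,  "ram_gb": 32,  "storage_gb": 500},
    {"name": "large",  "vcpus": 16, "ram_gb": 128, "storage_gb": 2000},
]

def right_size(demand):
    for flavor in FLAVORS:
        if all(flavor[k] >= demand[k] for k in ("vcpus", "ram_gb", "storage_gb")):
            return flavor["name"]
    return None  # demand exceeds the largest template: flag for capacity planning

print(right_size({"vcpus": 4, "ram_gb": 16, "storage_gb": 200}))  # -> "medium"
```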


Lower cost, greater relevancy

 

Public clouds have achieved great economy of scale by adopting open-standard-based hardware, operating systems, and resource provisioning and orchestration software through which they can deliver cost-effective capabilities to the consumers of IT. If enterprise IT wants to stay relevant, we need to compete at a price point and agility similar to the public cloud. SDI lets IT compete while maintaining a focus on our clients’ business needs.

 

As Intel IT continues its journey toward end-to-end SDI, we will share our innovations and learnings with the rest of the IT industry — and we want to hear about yours, too! Together, we can not only stay relevant to our individual institutions, but also contribute to the maturity of the data center industry.

Read more >