Recent Blog Posts

How to Accelerate the Move to Mainstream NFV Deployment

By John Healy, General Manager, Software Defined Networking Division, Network Platforms Group, Intel

Mobile World Congress is upon us and there is plenty of buzz again about the progress of network functions virtualization (NFV). I’m looking forward to many new NFV demos, product announcements and presentations on how mobile operators are solving problems using the technology.

I’m very bullish on the future of the NFV market. In this last year, the industry has successfully passed through a normative phase in which specifications and use cases were determined, applications were developed, and proofs of concept and demos were conducted.

Now we are moving into the next phase, in which NFV applications move into operation in production networks. I am excited by the progress our partners have achieved in translating trials into deployments and by the benefits they are beginning to achieve and measure.

But at the same time, I realize that as an industry we still have significant work to do to accelerate the technology to a point where carriers can consider full deployment and scaled implementations. I believe there are two significant themes that need to be addressed in the coming year.


Challenge 1 – Technology Maturity

There have been plenty of successful NFV demos over the last 18 months proving the capability of virtualized services and the performance of standards-based computing platforms. Now we need to achieve mass-scale, ruggedized implementations, and for that the various building-block technologies need to be hardened and matured.

Through this work, the many virtual network functions (VNFs) will be “ruggedized” to provide the same service and reliability levels as today’s fixed-function counterparts. This push for “carrier-grade reliability” is the necessary maturing that will occur.

Much of this ruggedization will happen as operators test these VNFs in practical demonstrations: ones that feature the traffic types, patterns and volumes found in production networks. Several announcements at MWC have highlighted the deployments into live networks that mark this new phase. We are actively involved in this critical activity with our partners and their customers.

But there’s also a need for more orchestration functionality to be developed and proven, so that service providers can scale their networks through the automated composition and deployment of network functions and services.

Intelligently placing network functions on the computing platforms whose capabilities best match them is how network services orchestration (NSO) achieves the best performance. Exciting demos of NSO in practice in a multi-vendor environment are on show at MWC.
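To make the idea concrete, here is a minimal sketch (in Python, with invented host names, VNF names, and capability labels) of capability-aware placement: each VNF lands on a host that exposes the features it needs and has vCPUs to spare, which is the essence of what an orchestrator does at far larger scale.

```python
def place_vnfs(vnfs, hosts):
    """Greedy capability-aware placement: each VNF goes to the first host
    that offers every capability it needs and has enough vCPUs left.
    Returns a {vnf_name: host_name} mapping, or raises if placement fails."""
    placement = {}
    free = {h["name"]: h["vcpus"] for h in hosts}
    for vnf in sorted(vnfs, key=lambda v: -v["vcpus"]):  # biggest first
        for host in hosts:
            if (set(vnf["needs"]) <= set(host["caps"])
                    and free[host["name"]] >= vnf["vcpus"]):
                placement[vnf["name"]] = host["name"]
                free[host["name"]] -= vnf["vcpus"]
                break
        else:
            raise RuntimeError(f"no host can run {vnf['name']}")
    return placement

hosts = [
    {"name": "node-a", "vcpus": 16, "caps": {"dpdk", "sriov", "aes-ni"}},
    {"name": "node-b", "vcpus": 8,  "caps": {"aes-ni"}},
]
vnfs = [
    {"name": "vRouter",   "vcpus": 8, "needs": {"dpdk", "sriov"}},
    {"name": "vFirewall", "vcpus": 4, "needs": {"aes-ni"}},
]
print(place_vnfs(vnfs, hosts))  # both VNFs fit on node-a
```

A real NSO would also weigh NUMA topology, NIC bandwidth and affinity/anti-affinity policy; the greedy loop is only the skeleton of that decision.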

Many of our ecosystem partners are tackling the orchestration of lower-level functions such as inventory, security, faults, application settings and other infrastructure elements following the ETSI management and orchestration (MANO) model.  Others have focused on service orchestration based on models of network resources and policy definition schemes.

The open source community is also a key enabler of the maturing phase, including projects such as the Nova and Neutron software developments that are building orchestration functionality into OpenStack. The Open Platform for NFV (OPNFV) project is focused on hardening the NFV infrastructure and improving infrastructure management, which should improve the performance predictability of NFV services.

All of these initiatives are important and must be tested through implementation into carrier networks and stressed so that operators can be confident that services will perform predictably.

I’ve seen this performance evolution take place at Intel as we tackled the challenge of consolidating multiple processing workloads on our general purpose Intel Architecture CPUs while growing performance for packet processing to enable replacement of fixed function packet processors.

In the mid-2000s, packet processing performance on Intel processors was not where we wanted it to be, so we made modifications to the microarchitecture and, at the same time, developed a series of acceleration libraries and algorithms that became the Data Plane Development Kit (DPDK).

After several product generations, we can now provide wire-speed packet processing performance delivering 160Gbps of layer-three forwarding on a single core. This is made possible through our innovations and through deep collaborations with our partners, a concept we have extended to the world of NFV and from which many of the announcements at MWC have originated.


Challenge 2 – Interoperability

Interoperability on a grand scale is what will make widespread NFV possible. That means specification, standardization and interoperability are major requirements for this phase of NFV.

The open source dimension of NFV creates the community-driven, community-supported approach that speeds innovation, but it needs to be married to the world of specification definition and standardization, which has traditionally moved at a much slower pace. That pace is too slow for the new world NFV enables.

This is a significant opportunity and challenge for the industry – we need to collectively find the bridge between both worlds. This is new territory for many of the parties involved and many of the projects are just starting on the path.


Intel’s Four Phase Approach to NFV

Intel is leading efforts to accelerate the maturity of the NFV market and we have outlined four key ways to do that.

First, we’re very active in developing and promoting open source components and standards. We are doing this by contributing engineering and management talent and our own technology to open source efforts. The goal is to ensure that standards evolve in an open and interoperable way.

Next, we have developed the Open Network Platform to integrate open source and Intel technologies into a set of server and networking reference designs that VNF developers can use to shorten their time to market.

Working with the industry is important, which is why we have developed Intel Network Builders, a very active ecosystem of ISVs, hardware vendors, operating system vendors and VNF developers. Network Builders gives these companies opportunities to work together and with Intel, and gives operators and others in the industry a place to find solutions and keep a pulse on the industry.

And lastly, we are working closely with service providers to support them in converting proofs of concept (POCs) into full deployments in their networks. It was at last year’s MWC that Telefonica announced its virtual CPE implementation, which Intel contributed to; this year there are several more, and we have many other similar projects underway.

While these engineering challenges are significant, they are the growing pains that NFV must pass through to become a mature and tested solution. The key will be to keep openness and interoperability at the forefront and to keep the testing and development programs active so that they can scale to meet the needs of today’s carriers. If MWC is an indicator of the future, it is definitely very bright.

Read more >

Enabling Real-Time Apps: Supporting Open Source Software: Intel Open Network Platform Server Release 1.3

By Dana Nehama, Sr. Product Marketing Manager, Network Platforms Group (NPG), Intel

It’s a busy time for the Intel Open Network Platform Server team and our Intel Network Builders partners. This week at Mobile World Congress in Barcelona, there are no fewer than six SDN/NFV demos that are based on Intel ONP Server and were developed by our Intel Network Builders ecosystem partners. Back home, we are releasing Intel ONP Server release 1.3 with updates to the open source software as well as the addition of real-time Linux kernel support and 40GbE NIC support.

The Intel ONP Server is a reference architecture that brings together the hardware and open source software building blocks used in SDN/NFV. It helps drive development of optimized SDN/NFV products in the telecom, cloud and enterprise IT markets.

The MWC demos illustrate this perfectly as they all involve Intel Network Builders partners showcasing cutting-edge SDN/NFV solutions.

The ONP software stack comprises Intel- and community-developed open source software such as Fedora Linux, DPDK, Open vSwitch, OpenStack, OpenDaylight and others. The key is that we address the integration gap across multiple open source projects and bring it all together into a single software release.

Here’s what’s in release 1.3:

  • OpenStack Juno 2014.2.2 release
  • OpenDaylight Helium.1 release
  • Open vSwitch 2.3.90 release
  • DPDK 1.7.1 release
  • Fedora 21 release
  • Real-Time Linux Kernel
  • Integration with 4×10 Gigabit Intel® Ethernet Controller XL710 (Fortville)
  • Validation with a server platform that incorporates the Intel® Xeon® Processor E5-2600 v3 product family

Developers who generate the software will see the value of this bundle because it all works together.  In addition, the available reference architecture guide is a “cook book” that provides guidelines on how to test ONP servers or build products based on Intel ONP Server software and hardware ingredients.

A first for this release is support for the Real-Time Linux Kernel, which makes ONP Server an option for applications that need deterministic, low-latency response.
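As a rough illustration of why a real-time kernel matters, the sketch below measures the worst-case overshoot of a periodic 1 ms sleep. The absolute numbers are entirely system-dependent, but it is exactly this tail latency that a PREEMPT_RT kernel is designed to bound, and that a packet-processing or baseband workload cares about.

```python
import time

def max_sleep_overshoot(period_s=0.001, iterations=200):
    """Ask for a 1 ms sleep repeatedly and record the worst overshoot
    beyond the requested period. On a stock kernel under load this tail
    can spike into the milliseconds; a real-time kernel keeps the worst
    case far tighter and, crucially, predictable."""
    worst = 0.0
    for _ in range(iterations):
        start = time.perf_counter()
        time.sleep(period_s)
        overshoot = (time.perf_counter() - start) - period_s
        worst = max(worst, overshoot)
    return worst

print(f"worst-case overshoot: {max_sleep_overshoot() * 1e6:.0f} us")
```

Real latency qualification would use a dedicated tool such as cyclictest pinned to isolated cores; this Python loop only shows the shape of the measurement.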

Another important aspect to the new release is the support for the 4x10GbE Intel Ethernet Controller XL710. This adapter delivers high performance with low power consumption. For applications like a vEPC, having the data throughput of the XL710 is a significant advance.

If you are an NFV / SDN developer who wants to get to market quickly, I hope you will take a closer look at the latest release of ONP Server and consider it as a reference for your NFV/SDN development.

If you can’t make it to Barcelona to see the demos, more information is available online.

Read more >

Accelerating Network Transformation via the Ecosystem

By Renu Navale, Director of Intel Network Builders Program, Network Platforms Group, Intel

As a die-hard Carl Sagan fan, I love his quote – “Imagination will often carry us to worlds that never were, but without it we go nowhere.” There was a lot of imagination and strategic vision behind the beginnings of network function virtualization (NFV) and software defined networking (SDN). Now network transformation is an unstoppable force that has encompassed an entire industry ecosystem. The need for services agility, reduction in operational and capital expenses and the rapid growth in the Internet of Things is driving a transformation of network infrastructure. Both telco and cloud service providers aim to accelerate delivery of new services and capabilities for consumers and businesses, improve their operational efficiencies, and use cloud computing to meet their customers’ demand for more connectivity and delivery of real-time data.



With proven server, cloud, and virtualization technologies, Intel is in an excellent position to apply these same technologies to the network infrastructure. Intel is working closely with the industry to drive this transformation by offering building blocks of standardized hardware and software, as well as server reference designs with supporting software, that address the performance, power, and security needs of the industry.  Intel also actively participates in open source and open standards development, invests in building strong ecosystems, and brings a breadth of experience in enterprise and cloud computing innovation.

Execution is an integral facet of any strategy. I consider the Intel Network Builders program part of the required execution for Intel’s NFV and SDN strategy. First, what is the Intel Network Builders program? It is an Intel-led initiative to work with the larger industry ecosystem to accelerate network transformation on Intel architecture, products and technologies. Since the program’s inception, our ecosystem of partner companies has seen tremendous growth. We now have about 130 members, spanning hardware and software vendors, system integrators, and equipment manufacturers. The key value propositions for members are increased visibility and market awareness, technology enablement via POCs and reference architectures using Intel products and ingredients, and increased business opportunities via various tools, workshops, and summits.

The tremendous increase in membership over this past year has resulted in an upgrade of our website and other engagement tools to meet our ecosystem partners’ needs. Most recently, we launched a revamped member portal, where Intel Network Builders members can directly engage with one another, foster new business relationships, learn about upcoming events and webinars, and highlight their solutions to other community members. If you are already an Intel Network Builders ecosystem partner, you are invited to start engaging with us today, and if you are in the industry seeking resources and general news, please check out our site.




It takes a whole village to raise a child. In a similar manner, it will take the whole networking industry ecosystem to accomplish this transformation. That is why Intel Network Builders, as a program to connect and collaborate with the ecosystem, is absolutely essential to delivering on the promise of NFV and SDN. I am in the midst of this amazing transformation. There are moments, as when writing this blog, that I am humbled to be part of this journey and transformation.

I hope to see you in Barcelona!

Read more >

High Performance Packet Processing in the NFV World

Network transformation is taking off like a rocket, with the SDN, NFV, and network virtualization market accounting for nearly $10 billion (USD) in 2015, according to SNS Research.(1) This momentum will take front stage this week at Mobile World Congress (MWC) 2015, including dozens of solutions and demos that spotlight Intel technology.


New Ways to Speed up Packet Processing

Packet processing workloads are continuously evolving and becoming more complex, as seen in evolving SDN/network-overlay standards and signature-based DPI, to name just two examples. Highly flexible software and silicon ingredients are required to deliver cost-effective solutions for these workloads. NFV solutions are all judged on how fast they can move packets on virtualized, general-purpose hardware. This is why the Data Plane Development Kit (DPDK) is seen as a critical capability, delivering packet processing performance improvements in the range of 25 to 50 times (2, 3) on Intel® processors.

Building upon the DPDK, Intel will demonstrate at MWC how equipment manufacturers can boost performance further while making NFV more reliable. One way is to greatly reduce cache thrashing by pinning L3 cache memory to high-priority applications using Intel Cache Allocation Technology. Another is to use a DPDK-based pipeline to process packets instead of distributing the load across multiple cores, which can result in bottlenecks if the flows cannot be uniformly distributed.


Intel Cache Allocation Technology

It’s no secret that virtualization inherently introduces overheads that lead to some level of application performance degradation compared to a non-virtualized environment. Most are aware of the more obvious speed bumps, like virtual machine (VM) enters/exits and memory address translations.

A lesser known performance degrader is caused by various VMs competing for the same cache space, called cache contention. When the hypervisor switches context to a VM that is a cache hog, cache entries for the other VMs get evicted, only to be reloaded when those VMs start up again. This can result in an endless cycle of cache reloads that can cut performance in half, as shown in the figure. (2, 3)



[Figure: Packet forwarding throughput with and without Intel Cache Allocation Technology]



On the left side, the guest VM implementing a three-stage packet processing pipeline (classify, L3 forward, and traffic shaper) has the L3 cache to itself, so it can forward packets at 11 Mpps. The middle pane introduces an aggressor VM that consumes more than half the cache, and the throughput of the guest VM drops to 4 Mpps. The right side implements Intel Cache Allocation Technology, which pins the majority of the cache to the guest VM, thus restoring the packet forwarding throughput to 11 Mpps. (2, 3)
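For readers curious about the mechanics, CAT partitions the last-level cache by giving each class of service a bitmask of cache ways it may fill. The sketch below (plain Python; the 20-way cache and 16-way guest allocation are illustrative numbers matching the scenario above, not a specific CPU) just constructs two non-overlapping way masks. On real hardware these masks are programmed per class of service, for example with the pqos utility from the intel-cmt-cat package.

```python
def way_masks(total_ways, guest_ways):
    """Split an LLC with `total_ways` ways into two contiguous,
    non-overlapping capacity bitmasks: the low `guest_ways` ways are
    reserved for the packet-processing guest VM, the remainder for
    everything else (including any aggressor VM)."""
    assert 0 < guest_ways < total_ways
    guest = (1 << guest_ways) - 1                 # e.g. 0xffff for 16 ways
    others = ((1 << total_ways) - 1) ^ guest      # the remaining high ways
    return guest, others

g, o = way_masks(total_ways=20, guest_ways=16)
print(f"guest mask:  {g:#x}")
print(f"others mask: {o:#x}")
assert g & o == 0   # an aggressor VM can no longer evict the guest's lines
```

Because the two masks share no ways, the aggressor VM in the middle pane of the figure physically cannot evict the guest’s cache lines, which is what restores the 11 Mpps throughput.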


IP Pipeline Using DPDK

There are two common models for processing packets on multi-core platforms:


  • Run-to-completion: A distributor divides incoming traffic flows among multiple processor cores, each processing their assigned flows to completion.
  • Pipeline: All traffic is processed by a pipeline constructed of several processor cores, each performing a different packet processing function in series.
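A toy model makes the trade-off visible. Assuming a skewed traffic mix in which one “elephant” flow dominates (the flow names, packet counts, and three-stage workload are invented for illustration), run-to-completion piles that flow’s entire multi-stage workload onto whichever core the flow hashes to, while a pipeline gives every core the same per-stage load regardless of flow skew:

```python
from collections import Counter

STAGES = 3  # e.g. classify, L3 forward, traffic shape

def busiest_core_work_rtc(packets, n_cores):
    """Run-to-completion: a distributor hashes each packet's flow to a
    core, and that core runs all stages for its packets. Work on the
    busiest core = (its packet count) x (number of stages)."""
    per_core = Counter(hash(p) % n_cores for p in packets)
    return max(per_core.values()) * STAGES

def busiest_core_work_pipeline(packets):
    """Pipeline: each core runs one stage for every packet, so every
    core sees the same load no matter how skewed the flows are."""
    return len(packets)

# 1200 packets, 900 of them in a single elephant flow that cannot be split
flows = ["elephant"] * 900 + [f"mouse-{i}" for i in range(300)]
print("run-to-completion, busiest core:", busiest_core_work_rtc(flows, n_cores=4))
print("pipeline, per-core load:        ", busiest_core_work_pipeline(flows))
```

With the elephant flow pinned to one core, the run-to-completion model’s busiest core does at least 2700 stage-executions while every pipeline core does 1200, which is the bottleneck the text describes.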

At MWC 2015, Intel will have a live demonstration of high-performance NFV running on an industry-standard high-volume server, where copies of packet processing pipelines are implemented in multiple VMs, and the performance of these VMs is governed using state-of-the-art cache monitoring and allocation technologies.

Want to know more? Get more information on Intel in Packet Processing.


Are you at MWC 2015?

Check out the high-performance NFV demo at the Intel Booth and see the new Intel technologies developed to drive even higher levels of performance in SDN and NFV! Visit us at MWC 2015 – App Planet, hall 8.1, stand #8.1E41.





1 Source: PR Newswire, “The SDN, NFV & Network Virtualization Bible: 2015 – 2020 – Opportunities, Challenges, Strategies & Forecasts,” Nov 27, 2014.


2 Performance estimates are based on L2/L3 packet forwarding measurements.


3 Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel® products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and on the performance of Intel products, visit Intel Performance Benchmark Limitations.

Read more >

Fueling the Next Billion Connected Devices: MWC Day 2 Recap

The excitement this week in Barcelona would make you think that Messi is in town for a match against computing and the networks that feed the billions of devices that dot our globe.  Mobile World Congress is in full swing with the who’s who of the tech industry sharing their latest wares and meeting to discuss the next generation of innovation.

I cannot overstate how struck I’ve been by the rate of telco equipment industry innovation at MWC.  It was only two years ago that I attended MWC and learned about the new NFV specifications moving through ETSI, and today I was fortunate to hear from network leaders Openet, Procera Networks, and Amartus on real solutions for telco billing based on NFV-powered service delivery.  This solution is a microcosm of the networking landscape today, as groups of companies work together to deliver application, orchestration and infrastructure solutions that solve point business challenges, in this case innovating billing solutions that historically were designed for voice-only accounts.  With new NFV-based solutions, telco operators will be better able to accurately bill for different types of data consumption along with voice usage and more rapidly deploy solutions to market.  Martin Morgan, VP of Marketing at Openet, stated that initial solutions are already being deployed by select customers, with scale ranging from 50K to 50M subscribers.

Sandra Rivera, Intel’s VP and GM of the Network Platforms Group, called out this type of ecosystem collaboration as the core of Intel’s heritage.  Her group’s Network Builders program has grown from 30 to 125 vendors in the 18 months since its inception and has begun adding telco operators such as Telefonica and Korea Telecom to its member rolls.  Sandra explained that collaboration between providers and operators will help accelerate adoption of NFV solutions in the marketplace: providers can prioritize the use cases that offer the best opportunity for financial reward, and operators can more quickly evaluate solutions coming to market.  She highlighted shepherding this broad collaboration as critical to Intel’s efforts in driving NFV adoption in 2015, and given the momentum behind the effort there’s little reason to expect anything other than continued growth in POC results and deployments in 2015.  To keep track of the latest developments in network ecosystem innovation, visit the Intel Network Builders site.

A blog about MWC would not be complete without mention of mobile device innovation, and one topic that has risen to the surface once again this year is the focus on mobile security.  I was fortunate to chat with executives from the Intel Security group to get the latest on Intel’s security solutions.  Mark Hocking, VP & GM of Safe Identity and Security at Intel Security discussed Intel’s latest innovation, TrueKey.  This cool technology enables a central resource for password management integrating facial recognition, biometrics, encryption technologies, and physical password entry to make the management of passwords manageable and more secure for the user.  I have to admit that as a person who has invented at least 50 ways to describe my dog to form different iterations of the seemingly endless permutations of passwords required to navigate today’s web, I was delighted to learn that soon simply smiling at my PC would provide a baseline of secure engagement with popular sites.  When Mark explained that TrueKey could add levels of security based on my requirements, I felt even better about the technology.

With the growth in wearable devices, the landscape of mobile security is evolving.  Intel’s Chief Consumer Security Evangelist, Gary Davis, caught up with me to share Intel’s strategy for addressing this new area of consumer vulnerability.  With over 780 million wearables expected to be live by 2019, users will increasingly rely on mobile devices such as smartphones and tablets as aggregators of personal data.  Today’s reality is far from pretty in terms of secure use of mobile devices, with fewer than 35% of mobile users utilizing a phone PIN and even fewer employing mobile security or encryption technology for their data.  Intel is working on this challenge, Gary explained, by bringing security technology to mobile devices through integration in silicon as well as by working with device manufacturers to design and deliver security-enabled solutions to market.

Come back tomorrow for my final update from Barcelona, and please listen to my interviews with these execs and more.

Read more >

Mobile World Congress: Blurring the Lines of the Data Center and the Network

The world of mobile has descended on Barcelona with an expected 90K+ executives assembled at MWC 2015.  Many conversations here focus on the latest mobile gadgets and the advent of 5G, the 5th generation mobile network expected to reach final specifications in 2019 in advance of the 2020 Tokyo Olympics.  What 5G will bring to our devices is still not fully understood, but what is known today is that users’ insatiable demand for data-rich experiences continues its ascent.  Today, the average mobile user consumes 2GB of data monthly, a doubling of usage within the last 12 months alone, and this demand is pushing back-end networks, from the network core to the edge, to innovate at an unprecedented pace. With Netflix already representing over a third of downstream internet traffic in the US, 2015 will be the first year in which we stream more content from the Internet than we consume from broadcast television.  Telco providers are facing this scaling user demand as well as new network traffic driven by the Internet of Things, and new competition as the arenas of telecom and cloud services become further blurred. The pressure to innovate the core network to keep pace with demand has never been more acute.

Networking equipment vendors used to gather at the edges of MWC, away from their mobile device and provider customers, but since the industry began buzzing with the concepts of software defined networking (SDN) and network functions virtualization (NFV) a few years ago, innovation in core networking equipment has taken its rightful place at center stage.  The focus is virtualization of the telecommunications network, enabling telco providers to deliver new service capabilities far more nimbly than they can with traditional telecom solutions.  Where 2014’s MWC event was focused on the first proof-of-concept tests of NFV solutions, 2015 is focused on delivery of initial products for implementation across the network.  Today, I spoke to Steve Shaw, Director of Service Provider Marketing at Juniper Networks, about their vMX 3D Universal Edge Routers, and he pointed out that NFV enables telcos to deploy routing technology in places that sophisticated routing historically could not reach.  This embeds greater intelligence across the network and gives the provider more insight into real-time network traffic data, helping to improve service capability.  In talking to Steve, I also heard what would become a continual refrain in all of my conversations today: a demonstrated commitment to broad industry collaboration to bring these solutions to market.  Steve noted the critical importance of both east-west and north-south interoperability and the strategic role that orchestration software plays in connecting solutions from across the industry.

NFV is also driving broad industry innovation within virtual base stations (vBS), the devices that connect mobile users to the network edge.  By virtualizing a base station, providers are better able to address frequency issues and improve network performance and coverage for users while providing infrastructure on efficient, Intel architecture-based platforms.  While vBS solutions were initially targeted at Cloud Radio Access Network (C-RAN) environments, many vendors are looking at in-building and distributed antenna system (DAS) solutions as initial beachhead markets.  Imagine the rich media experience of the 21st-century stadium environment, today often limited by access overload from too many mobile users in a confined physical space.  vBS promises to address this issue, ensuring that coverage can scale more efficiently based on real-time usage demand.  Here, too, broad industry innovation is present.  I spoke today to Eran Bello, VP of Products and Marketing for Israeli startup Asocs, about their vBS solutions, and he highlighted the acute interest in urban deployments for Asocs products and the importance of broad industry collaboration embracing open standards to ensure delivery to market.

And speaking of blurred lines, Ericsson shook up the tech industry today with the announcement of new NFV-fueled platforms to help telcos take their infrastructure hyperscale.  Based on Intel’s Rack Scale Architecture and integrating open orchestration capabilities, Ericsson’s offering will help telco operators use software defined infrastructure to compete with their cloud provider counterparts.  I spoke with Howard Wu, Head of Product Line for Cloud Hardware and Infrastructure at Ericsson, and he said it is the company’s 139-year history of building relationships with customers that will make this venture a success, given that technology innovation takes partnership and trust to result in deployment.

Make sure to check out all my interviews from MWC Day 1 and check back tomorrow for more insights from Barcelona.

Read more >

vRAN: The Future Begins Now

By Caroline Chan, Wireless access segment manager, Network Platform Group, Intel

Slated for field trials this year and production next year, virtualized radio access network (vRAN) technology could be key to delivering better network performance, lower TCO, and additional revenue streams. Demonstrating this innovative RAN for a new era of mobility, Alcatel-Lucent, China Mobile, Intel, and Telefónica* are showcasing four usage models at Mobile World Congress (MWC) 2015. Come see this TD-LTE live demo.


What’s vRAN?


The vRAN moves baseband processing from cell sites to a pool of virtualized servers running in a data center, with the goal of making the RAN more open and flexible, while supporting both new and existing functions. Also called Cloud-RAN (C-RAN), vRAN enables service providers to dynamically scale capacity and more easily deploy value-added mobile services at the network edge to generate incremental revenue and improve the user experience. Following the ETSI network functions virtualization (NFV) model, today’s vRAN for LTE is the launching pad to 5G, where compute + communication is one of the hallmark features.


Key Benefits


Centralized baseband units (BBUs) and network resources enable:

  • Better network performance

    • Network resources that are easier to scale and load balance, and that improve interference management, particularly for heterogeneous networks.


  • Lower TCO

    • Cell sites that cost less due to reduced complexity, and simpler network operations (upgrades/repair), thanks to easily accessible, shared platforms.
  • Differentiation and new revenue streams

    • New service opportunities that could be more RAN-aware (location-based caching) and generate additional revenue.

What you’ll see at MWC’15


Our demonstration is based on an evolved platform with advanced capabilities that address two application domains for vRAN:




  1. Dynamically scale BBU capacity: With mobility on the rise, it becomes increasingly difficult to project demand, leading many service providers to over-provision baseband processing at cell sites. Alternatively, the vRAN increases or decreases BBU capacity by creating or destroying virtual machines (VMs) on the fly.

  2. Reduce outage impact: If a cell site’s baseband processing fails, service will be interrupted until a field service engineer is dispatched on-site to fix the equipment. Avoiding a truck roll, the vRAN implements local failover, through which baseband processing is migrated to another server without dropping calls.
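A minimal sketch of the scaling decision in the first item (the per-VM capacity unit and the 20% headroom are illustrative assumptions, not vRAN parameters):

```python
import math

def bbu_vms_needed(demand_erlangs, vm_capacity=100, headroom=0.2):
    """How many BBU VMs the pool should run: enough for current demand
    plus a safety margin, never fewer than one. A real vRAN orchestrator
    would create or destroy VMs until the pool matches this target."""
    return max(1, math.ceil(demand_erlangs * (1 + headroom) / vm_capacity))

# Quiet night: one VM carries the load; busy hour: scale out to six.
print(bbu_vms_needed(50))    # -> 1
print(bbu_vms_needed(450))   # -> 6
```

The same control loop, fed by failure detection instead of demand, covers the second item: when a VM dies, the pool is below target and a replacement is spawned on another server.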

Content/Application Delivery Optimization


  1. Video Streaming (consumer use case): Traditional cell sites don’t have the capability to run a variety of applications. The vRAN, on the other hand, is designed to host different virtualized functions and applications. Our demo shows how the vRAN can perform content delivery network (CDN) functions at the network edge, pushing video content at a higher bit rate and improving the user experience compared with cloud-based CDNs.

  2. Video Conferencing (enterprise use case): The signals for mobile users on a conference call must normally traverse the whole operator network, creating delay and consuming bandwidth. But since the vRAN can run in the same data center as a video conference application, signals just pass through that vRAN node when the attendees are within the same serving area.
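The CDN-at-the-edge idea in the streaming use case can be sketched in a few lines: cache hot video segments at the vRAN node and fetch from the distant origin only on a miss (the segment names and the origin-fetch stand-in below are invented for illustration):

```python
class EdgeCache:
    """Tiny sketch of vRAN-hosted CDN logic: serve popular video
    segments from the edge node and fall back to the distant cloud
    CDN only when the segment is not cached locally."""
    def __init__(self):
        self.store = {}
        self.hits = self.misses = 0

    def get(self, segment, origin_fetch):
        if segment in self.store:
            self.hits += 1              # served at the edge: low latency
            return self.store[segment]
        self.misses += 1                # long trip to the cloud CDN
        self.store[segment] = origin_fetch(segment)
        return self.store[segment]

cdn = EdgeCache()
fetch = lambda seg: f"bytes-of-{seg}"   # stand-in for the cloud origin
for seg in ["intro.ts", "intro.ts", "intro.ts", "scene2.ts"]:
    cdn.get(seg, fetch)
print(f"edge hits: {cdn.hits}, origin fetches: {cdn.misses}")
```

Every edge hit avoids a round trip through the operator core, which is exactly where the higher bit rate and better user experience come from.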

Come Visit Us!


Let us show you our live demo and answer your questions. Visit us at MWC 2015 – Hall 3, stand # 3D30.

Read more >

February 2015 Intel® Chip Chat Podcast Round-up

In February, we finished archiving Chip Chat episodes from the OpenStack Summit and moved on to a few hot topics in the data center: graphics, big data analytics and non-volatile memory technologies. If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!


Intel® Chip Chat:

  • Accelerating OpenStack Adoption in the Data Center – Intel® Chip Chat episode 366: In this archive of a livecast from the OpenStack Summit, David Brown, Director of Data Center Software Planning at Intel, stops by to talk about the current explosion of OpenStack adoption in the telecommunications and enterprise industries, as well as expectations for the future of OpenStack development and deployment. David also highlights the Win the Enterprise effort, recently initiated by Intel, which brings together 75 different organizations working to drive adoption of OpenStack in the enterprise. For more information, visit
  • Driving Next Gen Data Centers with Intel® Cache Acceleration Software – Intel® Chip Chat episode 367: Jake Smith, Director of Strategic Planning for the NSG Storage and Software Division at Intel, discusses Intel® Cache Acceleration Software (CAS) and how it is accelerating the next generation of data centers. He illustrates how performance can be greatly increased for I/O-bound and read-intensive applications when CAS is combined with Intel® Data Center Family SSDs. Jake explains how Intel® Cache Acceleration Software can even enable a whole new set of application environments in the data center through tiered storage, tiered memory, cold-storage SSDs, and hybrid environments. To learn more, visit and search for Intel® Cache Acceleration Software.
  • Unlocking Big Data with Open Source Solutions – Intel® Chip Chat episode 368: Ziya Ma, Director of Big Data Technologies at Intel, stops by to talk about how open source solutions are enabling enterprises to take advantage of the new concepts coming out of big data. She highlights how Intel is a leading contributor to the overall open source community and is working to accelerate the delivery of different vertical analytics solutions. Ziya also illustrates how Intel® Architecture (IA) based big data solutions provide some of the most complete and easiest big data experiences available.
  • Integrated Graphics in the Data Center – Intel® Chip Chat episode 369: Jim Blakley, General Manager of Visual Cloud Computing at Intel, chats about the benefits of having integrated graphics in the data center. He highlights how online gaming, high-definition video processing, and visual understanding are all applications that use graphics-based technologies and are putting increasing demand for processing and acceleration on the data center. Jim discusses technologies like Intel® Quick Sync Video and Intel® Iris™ Pro graphics and how the industry is rapidly moving toward adoption of these innovative data center graphics processing solutions.

Read more >

Intel Delivers on the Promise of the Internet of Things at Embedded World 2015

From smart buildings to retail, Intel Internet of Things solutions brought app-ready IoT platforms to key vertical markets at the Embedded World 2015 Conference in Nuremberg, Germany, last week. At this year’s event, Intel demonstrated first-hand how developers can quickly … Read more >

The post Intel Delivers on the Promise of the Internet of Things at Embedded World 2015 appeared first on IoT@Intel.

Read more >

Putting Trust at the Heart of Your Brand Identity

We’re all familiar with bank security, and it’s no wonder. We want our banks to take security seriously — after all, it’s our money and our personal information they’re protecting!

Indeed, security touches every aspect of a financial organization. You need your customers to trust your brand because, if they don’t, they’ll take their business elsewhere.


Risking the Breach

It’s not just the loss of business from a few unhappy customers that you’re risking if you get this wrong; lapses can quickly spread alarm. We’ve all seen the high-profile headlines about security breaches. Unfortunately, when it comes to banking, trust is won in drips and lost in buckets. And then there’s the financial cost. In a report published in 2015, our security partner McAfee estimates that around $400 billion is lost to cybercrime every year. At a more local level, I learned at the Sibos conference that the average loss for a U.S. bank from a cyberheist is estimated at around $1.3 million, compared with just $6,000–8,000 for a physical bank robbery. And according to the Symantec Intelligence Report, 583 million identities were stolen online between November 2013 and October 2014.



There’s already a lot of great work being done to combat cybercrime, but unfortunately fraudsters are clever and constantly change their approach to beat the latest countermeasures. Financial institutions are among the best defended against attack, so cybercriminals typically go after softer targets such as the retail and hospitality industries. Financial organizations remain vulnerable, however, because they sometimes rely on industries that may not have the same level of security as they do. And the attack surface keeps growing: a recent McAfee infographic estimates that we can expect 50 billion internet-connected devices by 2019, each of which presents an opportunity for a fraudster.

How to Combat the Onslaught


In essence, privacy and security are as important as performance in financial services. They’ve long been a concern, and they remain vital even as we adopt exciting developments like the SMAC stack, cloud, business transformation, and big data analytics. At Intel, we’re developing ubiquitous security and identity protection solutions across all our computing platforms (both on the client side and in the data center and cloud) to ensure that robust security is always there. For example, we’ve worked with a number of major banks and payment providers in Turkey to implement a two-factor authentication solution that minimizes friction for the bank’s customers and reduces cost. Keep an eye out during 2015 for further developments in this space, building on existing technologies with biometrics and additional levels of authentication for both enterprise and consumer use cases.
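The post doesn’t describe the mechanism behind the Turkish deployment, but a common low-friction second factor is a time-based one-time password (TOTP, RFC 6238), which can be sketched with the Python standard library alone. Treat this as an illustration of the general technique, not the deployed solution:

```python
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, digits: int = 6, timestep: int = 30, now=None) -> str:
    """RFC 6238 time-based one-time password over HMAC-SHA1.

    The shared secret lives on the user's device; both sides derive the
    same short-lived code from the current 30-second time window.
    """
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Using the RFC 6238 test secret `b"12345678901234567890"` at time 59 yields the published vector `94287082` (8 digits), which is an easy way to sanity-check an implementation.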


To summarize the third Industrial Revolution and its implications for the financial services industry thus far: it has started, and to remain competitive your organization needs to be agile, innovative, and open to change. Do this by embracing new technology platforms, adopting the cloud, understanding your data through analytics, promoting cultural change internally, and staying ahead of cybercrime with effective security. Keep these points in mind, and you’ll be a revolutionary leader.


To continue the conversation, let’s connect on Twitter.


Mike Blalock

Global Sales Director

Financial Services Industry, Intel


This is the fifth installment of a seven-part series on Tech & Finance. Click here to read blog 1, blog 2, blog 3, and blog 4.

Read more >

Economies of Scale in the Datacenter: Enabling Webscale for Information & Communications Technology

By Deirdré Straughan, Technology Adoption Strategist, Business Unit Cloud & IP, Ericsson


To keep up with today’s fast-changing workloads and storage demands, datacenters need distributed computing environments which can scale rapidly and seamlessly, yet remain cost-effective. This is known as hyperscale computing, and it is currently happening within a few giants (Amazon, Facebook, Google) who design their own systems – from data centers to cooling and electrical systems to hardware to firmware to software to operational methodologies – in order to achieve economics that drive down capex and opex, enabling them to be profitable providers of cloud and other massive-scale online services.


All businesses are becoming software companies, and we all need – but often can’t afford – this kind of “webscale information and communications technology” (webscale ICT). Most companies don’t have the in-house resources to design and manufacture bespoke systems as Amazon, Facebook, and Google do. Legacy IT vendors, content to maintain their current margins as long as possible, have not stepped up to fill this gap.


At the same time, we are all beginning to recognize the limitations in data security and governance that have so far deterred 74% of companies* from moving critical operations into the cloud. With daily news of data breaches across all sectors, and rapidly changing global laws on privacy and customer data, it is clear that traditional IT architectures and approaches, even when used strictly in-house, have become inadequate for today’s security needs.

Putting the C into ICT


Ericsson, which has been providing telecommunications equipment and services since 1876, approaches this problem from a different angle. We bring long, global experience in building and maintaining the real-time, reliable, predictable, and secure communications network infrastructure that our operator customers – and their billions of subscribers in the remotest corners of the globe – demand and rely upon.


We set out to analyze from first principles what it actually means to run the world’s most efficient data centers, and how their practices can be applied in every datacenter and telco central office. We have recognized a cycle in the industrialization of IT, a continuous loop of:


•   Standardization of hardware, software, operational methodologies, and economic strategy.

•   Combination and consolidation to drive highest possible occupancy, utilization, and density.

•   Abstraction, for complete programmability of all functionality and capabilities.

•   Automation of anything that is done more than three times.

•   Governance of performance, scalability, quality, economics, compliance, and security.


How is this to be achieved for hyperscale computing?

Hardware Standardization, Combination, and Consolidation


We first need an off-the-shelf system that can be designed, purchased, and managed in a completely customized fashion, able to integrate with legacy hardware and fit into existing data centers, yet capable of evolving rapidly as needs and workloads change: a software-defined, workload-optimized infrastructure, architected for hyperscale efficiency.

This is made possible by Intel’s Rack Scale architecture, which introduces features such as hardware disaggregation, silicon photonics, and software that pools disaggregated hardware over a rack scale fabric for higher utilization and performance.
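The pooling idea can be pictured as composing logical nodes from rack-level resource pools instead of buying fixed-shape servers. The sketch below is purely illustrative — the pool sizes, resource names, and `compose_node` function are assumptions, not part of the Rack Scale API:

```python
# Rack-level pools of disaggregated resources (illustrative numbers).
POOL = {"cpu_cores": 512, "memory_gb": 4096, "storage_tb": 200}


def compose_node(cpu_cores: int, memory_gb: int, storage_tb: int) -> dict:
    """Carve a logical node out of the rack pools, sized for one workload.

    Raises RuntimeError if any pool can't satisfy the request, so nothing
    is allocated partially.
    """
    request = {"cpu_cores": cpu_cores,
               "memory_gb": memory_gb,
               "storage_tb": storage_tb}
    for resource, amount in request.items():
        if POOL[resource] < amount:
            raise RuntimeError(f"pool exhausted: {resource}")
    for resource, amount in request.items():
        POOL[resource] -= amount
    return request
```

The utilization win comes from the fact that a memory-hungry workload and a CPU-hungry workload draw different shapes from the same pools, instead of each stranding the unused half of a fixed server.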


Monitoring and lifecycle management software provide full awareness of every detail of hardware infrastructure and workloads – the knowledge needed to achieve new levels of capex and opex savings.



Another way to use hardware efficiently is to abstract formerly hard-wired features into software. Ericsson telco customers are already familiar with SDN (software-defined networking) and NFV (network functions virtualization), technologies that are enabling new efficiencies in telco systems. Software itself can be further abstracted and modularized via APIs.



Compelling economics, efficiencies, and ease-of-use, however, will not be enough in today’s increasingly insecure yet regulated world of data. The requirements for true data security and compliance go well beyond today’s RBAC, public-key encryption, and so on. On the front end, systems must set and enforce policy as software is deployed. Then, once data is moving through a system, its integrity must be independently verifiable wherever it goes, whenever anyone touches it, throughout its lifetime.
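One standard way to make data integrity independently verifiable as it moves through a system is to chain each record’s hash to its predecessor’s, so any later tampering breaks every subsequent link. This is a minimal sketch of that general technique (the function names and "genesis" seed are illustrative), not Ericsson’s or Intel’s implementation:

```python
import hashlib


def record_hash(prev_hash: str, payload: bytes) -> str:
    """Hash a record together with its predecessor's hash (a hash chain)."""
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()


def build_chain(records) -> list:
    """Return the cumulative hash for each record in order."""
    h = "genesis"  # illustrative fixed seed for the first link
    hashes = []
    for payload in records:
        h = record_hash(h, payload)
        hashes.append(h)
    return hashes


def verify_chain(records, hashes) -> bool:
    """Anyone holding the records can recompute and check the chain."""
    return build_chain(records) == hashes
```

Because verification only requires recomputing hashes, any party the data passes through can check integrity without trusting the previous hop — the "independently verifiable wherever it goes" property described above.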



In the last 200 years, the telecommunications industry has brought the power of communication to an ever-larger number of the world’s peoples, at ever lower cost, resulting in unimaginable technical and social changes. The next step is to similarly democratize data compute and storage, bringing the power of IT to everyone, while maintaining the security and reliability that we expect from our telecommunications systems.


The Ericsson HDS 8000 hardware system announced today is a first step – a big step! – taken together with Intel, towards webscale ICT: massive-scale systems, reliable and secure, available to all. We don’t yet know what changes this will enable in the world – but it’s going to be fun finding out!


*Cloud Connect and Everest Group “Enterprise Cloud Adoption Survey 2014,” page 6. Also see: Cloud Connect and Everest Group “Enterprise Cloud Adoption Survey 2013”

Deirdré Straughan, Technology Adoption Strategist for Ericsson, has been communicating online since 1982 and has worked in technology nearly as long. She operates at the interfaces between companies and customers, technologists and non-technologists, marketers and engineers, and anywhere else that people need help communicating with each other about technology. Learn more at

Read more >