Recent Blog Posts

Intel’s New Innovation Engine Enables Differentiated Firmware

Historically, platform embedded firmware has limited the ways system-builders can customize, innovate, and differentiate their offerings. Today, Intel is streamlining the route to implementing new features with the creation of an “open engine” that lets system-builders run firmware of their own creation or choosing.

 

This important advance in platform architecture is known as the Innovation Engine. It was introduced this week at the Intel Developer Forum in San Francisco.

 

The Innovation Engine is a small Intel® architecture processor and I/O sub-system that will be embedded into future Intel data center platforms. The Innovation Engine enables system builders to create their own unique, differentiating firmware for server, storage, and networking markets. 

 

Some possible uses include hosting lightweight manageability features in order to reduce overall system cost, improving server performance by offloading BIOS and BMC routines, or augmenting the Intel® Management Engine for such things as telemetry and trusted boot.

 

These are just a few of the countless possibilities for the use of this new path into the heart of Intel processors. Truthfully, the uses for the Innovation Engine are limited only by the feature’s capability framework and the developer’s imagination.

 

It’s worth noting that the Innovation Engine is reserved for the system-builder’s code, not Intel firmware. Intel supplies only the hardware, and the system-builder can tailor things from there. As for security, Innovation Engine code is cryptographically bound to the system-builder: code not authenticated by the system-builder will not load.
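
Intel hasn’t published the verification mechanics, but the behavior described above is standard code signing: the engine holds the system-builder’s public key and refuses to load any image whose signature doesn’t verify against it. Here is a minimal sketch of that gate in Python using the `cryptography` library; the algorithm choice and names are illustrative assumptions, not Intel’s implementation.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import load_pem_public_key

def firmware_is_authentic(image: bytes, signature: bytes,
                          builder_pubkey_pem: bytes) -> bool:
    """Return True only if `image` carries a valid system-builder signature."""
    public_key = load_pem_public_key(builder_pubkey_pem)
    try:
        # RSA PKCS#1 v1.5 over SHA-256 is a common firmware-signing choice;
        # the Innovation Engine's actual scheme is not public.
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# The load path then becomes a simple gate: unauthenticated code never runs.
# if firmware_is_authentic(image, signature, builder_key):
#     boot(image)
```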

 

As the name suggests, the Innovation Engine will drive a lot of great benefits for OEMs and, ultimately, end users. This embedded core in future Intel processors will foster creativity, innovation, and differentiation, while creating a simplified path for system-builders to implement new features and enabling full customer visibility into code and engine behavior.

 

Ultimately, this upcoming enhancement in Intel data center platforms is all about using Intel technology advancements to drive widespread innovation in the data center ecosystem.

 

Have thoughts you’d like to share? Pass them along on Twitter via @IntelITCenter. You can also listen to our IDF podcasts for more on the Innovation Engine.

Read more >

Network Transformation: Innovating on the Path to 5G


 

Close your eyes and try to count the number of times you’ve connected with computing today.  Hard to do? We have all witnessed this fundamental change: Computing has moved from a productivity tool to an essential part of ourselves, something that shapes the way we live, work, and engage in community.

 

Now, imagine how many times today you’ve thought about the network connectivity making all of these experiences possible.  Unless you’re like me, someone who is professionally invested in network innovation, the answer is probably close to zero.  But all of those essential experiences delivered every day to all of us would not exist without an amazing array of networking technologies working in concert.

 

In this light, the network is everything you can’t see but can’t live without.  And without serious innovation in the network, the amazing computing innovations expected in the next few years simply won’t be experienced the way they were intended.

 

At the Intel Developer Forum today, I had the pleasure of sharing the stage with my colleague Aicha Evans and industry leaders from SK Telecom, Verizon, and Ericsson, as we shared Intel’s vision for 5G networks from device to data center.  In this post, I’d like to share a few imperatives to deliver the agile and performant networks required to fuel the next wave of innovation.  IDF was the perfect place to share this message given that it all starts with the power of community: developers from across the industry working together to deliver impactful change.

 

So what’s changing? Think about the connectivity problems we experience today: dropped calls, constant buffering of streaming video, or download delays. Imagine if not only those problems disappeared, but new immersive experiences like 3D virtual reality gaming, real-time telemedicine, and augmented reality became pervasive in our everyday lives. With 5G, we believe they will.

 

5G is, of course, the next major upgrade to cellular connectivity. It represents improved performance but, even more importantly, massive increases in the intelligence and flexibility of the network. One innovation in this area is Mobile Edge Computing (MEC).  To picture the mobile edge, imagine cell tower base stations embedded with cloud-computing-based intelligence, or “cloudlets,” creating the opportunity for network operators to deliver high-performance, low-latency services like the ones I shared above.

 

As networks become more intelligent, the services that run on them become more intelligent too. MEC will provide the computing power to also deliver Service Aware Networks, which will dynamically process and prioritize traffic based on service type and application. As a result, operators gain more control, developers are more easily able to innovate new personalized services, and users gain higher quality of experience.
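
To make “service aware” concrete, here is a toy sketch of the core idea: classify each packet by service type, then schedule queues strictly by priority. The service classes and ordering are invented for illustration; a real MEC platform would apply operator policy.

```python
from collections import deque

# Invented service classes: lower number = higher forwarding priority.
PRIORITY = {
    "telemedicine": 0,   # least tolerance for latency
    "ar_vr":        1,
    "video":        2,
    "bulk":         3,   # background transfers yield to everything else
}

queues = [deque() for _ in range(4)]

def enqueue(packet, service_type):
    """Place a packet on the queue for its service class."""
    queues[PRIORITY.get(service_type, 3)].append(packet)

def dequeue():
    """Strict-priority scheduling: always drain the highest class first."""
    for q in queues:
        if q:
            return q.popleft()
    return None

enqueue(b"frame", "ar_vr")
enqueue(b"chunk", "bulk")
assert dequeue() == b"frame"   # the AR/VR frame goes out first
```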

 

Another exciting innovation is Anchor-Booster technology, which takes advantage of the principles of Software Defined Networking (SDN). It allows devices to take better advantage of spectrum like millimeter wave to boost network throughput by 10X or more.

 

These technologies may seem futuristic, but Intel has already been working with the industry for several years to use cloud technology to reinvent the network, much as it reinvented the data center. We call this network transformation, and it means moving from fixed-function, purpose-built network infrastructure to adaptable networks based on Network Functions Virtualization (NFV) and SDN.  Within this model, network functions reside in virtual machines or software containers, managed by centralized controllers and orchestrators, and are dynamically provisioned to meet the needs of the network.  The change this represents for the communication service provider industry is massive: NFV and SDN are dramatically accelerating the pace of innovation in communications networking and creating enormous opportunities for the industry to deliver new services at cloud pace.

 

Our work is well underway.  As a key pillar of our efforts, we established the Intel Network Builders program two years ago at IDF, and since its inception it has grown to over 170 industry leaders, including strategic end users, working together towards solution optimization, trials, and dozens of early commercial deployments.

 

And today, I was excited to announce the next step towards network transformation with the Intel® Network Builders Fast Track, a new investment and collaboration initiative to ignite solution stack delivery, integrate proven solutions through blueprint publications, and optimize solution performance and interoperability through new third party labs and interoperability centers. These programs were specifically designed to address the most critical challenges facing broad deployment of virtualized network solutions and are already being met with enthusiasm and engagement by our Intel Network Builders members, helping us all towards delivery of a host of new solutions for the market.  If you’re engaged in the networking arena as a developer of solutions or a provider, I encourage you to engage with us as we transform the network together.

 

Imagine: No more dropped calls, no more buffered video. Just essential experiences delivered in the manner intended, and exciting new experiences to further enrich the way we live and work. The delivery of this invisible imperative just became much clearer.

Read more >

Delivering the Full Value of NFV Solutions: Intel Network Builders Fast Track

Russell L. Ackoff, the pioneer in operations research, said “A system is more than the sum of its parts … It loses its essential properties when it is taken apart.” That also suggests the system doesn’t exist and its essential properties cannot be observed until it is put together. This is increasingly important as communications service providers and network equipment vendors operationalize network functions virtualization (NFV) and software defined networking (SDN).

 

To date, most NFV efforts have focused on accelerating the parts – both the speed of development and net performance. OpenFlow, for example, defines communication and functionality between the control plane and the equipment that actually does the packet forwarding, and much of the initial effort has been to connect vendor A’s controller to vendor B’s router and to achieve pairwise interoperability between point solutions. Intel has been a key enabler of that through the Intel® Network Builders program. We’ve grown an ecosystem of more than 170 contributing companies developing dozens of Intel® Architecture-based network solutions.
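
OpenFlow’s core abstraction makes that control/forwarding split easy to picture: the controller installs match/action rules into a switch’s flow table, and the switch only performs lookups. A stripped-down model follows (fields simplified for illustration, not the actual wire protocol):

```python
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict       # e.g. {"dst_ip": "10.0.0.2"}
    action: str       # e.g. "output:2" -- forward out port 2
    priority: int = 0

@dataclass
class Switch:
    table: list = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        """Control plane: the controller pushes a rule down to the switch."""
        self.table.append(rule)
        self.table.sort(key=lambda r: -r.priority)

    def forward(self, packet: dict) -> str:
        """Data plane: pure table lookup, no policy decisions here."""
        for rule in self.table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "punt-to-controller"   # table miss

# Vendor A's controller programming vendor B's switch:
sw = Switch()
sw.install(FlowRule({"dst_ip": "10.0.0.2"}, "output:2", priority=10))
assert sw.forward({"dst_ip": "10.0.0.2"}) == "output:2"
```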

 

But the desired vision of NFV—and what service providers tell us they need and want—is to be able to quickly assemble new systems offering new services from best-of-breed components and to enhance existing services quickly by incorporating optimized functions from a number of providers. To do that the parts must plug and play when combined into systems. That means proven integration and solution interoperability across stack layers and across networks. So that’s what we’re taking on next with the recently announced Intel® Network Builders Fast Track.

 

Intel Network Builders Fast Track builds on the progress we’ve already made with Intel® Network Builders. It’s a natural next step to take with industry partners and leading service providers to move NFV closer to fulfilling its promise. Through Intel market development activities and investments, we will accelerate interoperability, quicken the adoption of standards-based technologies using Intel® architecture, and drive the availability of integrated, ready-to-deploy solutions.

 

Specifically, through the Intel Network Builders Fast Track we will facilitate:

 

  • Solution stack optimization—we will invest in ecosystem collaborations to optimize solution stacks and to further develop the Open Network Platform reference architecture, a toolkit to enable solutions across the industry. We are also establishing Intel® Network Builders University to drive technical education for the broader Intel Network Builders community and to deepen collaboration on performance tuning and optimization with Intel Network Builders members.
  • Proven integration—we will publish Solution Blueprints on top use cases targeted for carrier grade deployments, and we’ll deepen our collaboration with key system integrators to deliver integrated and optimized solutions.
  • Solution interoperability—we will collaborate to establish third party compliance, performance tuning, plugfests and hackathons for use by Intel Network Builders members through new and existing innovation centers.

 

The concepts of SDN and NFV emerged when the communications industry saw what the IT industry was achieving with cloud computing—interoperable agile services, capacity on demand, and rapid time to market based on industry-standard servers and software.

 

At Intel, we played a key role in achieving the promise of cloud computing—not just with our product offerings, but with market development and our contributions to the open-source programs that have unified the industry behind a common set of interoperable tools and services. With Intel Network Builders Fast Track, we’re bringing that experience and commitment to the communication industry, so we can achieve the solutions the industry needs faster and with less risk.

 

The transformed communications systems NFV can enable will flex with the service providers’ businesses and customers’ needs in a way Russell L. Ackoff couldn’t have foreseen. And it will make service providers more competitive and better able to deliver the communications solutions that power the Internet era. We’ll achieve that co-operatively, as an industry. That’s what the Intel Network Builders Fast Track is designed to do.

Read more >

New Intel Network Builders Fast Track Igniting Network Transformation with the Intel ONP Reference Architecture

Today at IDF 2015, Sandra Rivera, Vice President and GM of Intel’s Network Platforms Group, disclosed the Intel® Network Builders Fast Track program in her joint keynote, “5G: Innovation from Client to Cloud.”  The mission of the program is to accelerate and broaden the availability of proven commercial solutions through a combination of means: equity investments, blueprint publications, performance optimizations, and multi-party interoperability testing via third-party labs.

 

This program was specifically designed to help address many of the biggest challenges that the industry faces today with one goal in mind – accelerate the network transformation to software defined networking (SDN) and network functions virtualization (NFV).

 

Thanks to the new Intel Network Builders Fast Track, Intel® Open Network Platform (ONP) is poised to have an even bigger impact in how we collaborate with end-users and supply chain partners to deliver proven SDN and NFV solutions together.

 

Intel ONP is a reference architecture that combines leading open source software and standards ingredients in a quarterly release that developers can use to create optimized commercial solutions for SDN and NFV workloads and use cases.

 

The Intel Network Builders Fast Track combines market development activities, technical enabling, and equity investments to accelerate time to market (TTM) for Intel Network Builders partners; Intel ONP amplifies this with a reference architecture. With Intel ONP, partners can get to market more quickly with solutions based on leading open building blocks that are optimized for industry-leading performance on Intel® Xeon® processor-based servers.

 

Intel ONP Release 1.4 includes, for example, the following software (a sketch of how the new OpenStack capabilities are typically exercised follows the list):

 

  • OpenStack* Kilo 2015.1 release with the following key feature enhancements:
    • Enhanced Platform Awareness (EPA) capabilities
    • Improved CPU pinning to virtual machines
    • I/O based Non-Uniform Memory Architecture (NUMA) aware scheduling
  • OpenDaylight* Helium-SR3
  • Open vSwitch* 2.3.90
  • Data Plane Development Kit release 1.8
  • Fedora* 21 release
  • Real-Time Linux* Kernel patches, release 3.14.36-rt34
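
To make the OpenStack items above concrete: in Kilo, these Enhanced Platform Awareness capabilities are typically requested through Nova flavor “extra specs,” which the scheduler reads when placing a guest. A rough sketch; the flavor name and exact values are illustrative, not from the ONP release notes:

```python
# Roughly how the EPA features listed above are requested. With the nova
# CLI this would be approximately:
#   nova flavor-key nfv.small set hw:cpu_policy=dedicated hw:numa_nodes=1
epa_extra_specs = {
    "hw:cpu_policy": "dedicated",  # pin each vCPU to a dedicated host core
    "hw:numa_nodes": "1",          # keep guest CPUs and memory in one NUMA node
}
```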

 

We’ll be releasing ONP 1.5 in mid-September. However, there’s even more exciting news just beyond release 1.5.

 

Strategically aligned with OPNFV for Telecom

 

As previously announced this week at IDF, the Intel ONP 2.0 reference architecture scheduled for early next year will adopt and be fully aligned with the OPNFV Arno software components released in June of this year.  With well over 50 members, OPNFV is an industry-leading open source community committed to collaborating on a carrier-grade, integrated, open source platform to accelerate the introduction of new NFV solutions.  Intel is a platinum member of OPNFV dedicated to partnering within the community to solve real challenges in key barriers to adoption, such as packet processing performance, service function chaining, service assurance, security, and high availability, to name just a few.  Intel ONP 2.0 will also deliver support for new products such as the Intel® Xeon® Processor D, our latest SoC, and will showcase new workloads such as Gi-LAN.  This marks a major milestone for Intel: aligning ONP with OPNFV architecturally and contributing to the OPNFV program on a whole new level.

 

The impact of the Intel Network Builders Fast Track will be significant. The combination of the Intel Network Builders Fast Track and the Intel ONP reference architecture will mean even faster time to market, broader industry interoperability, and market-leading commercial solutions to fuel SDN and NFV growth in the marketplace.

 

Whether you are a service provider or enterprise looking to deploy a new SDN solution, or a partner in the supply chain developing the next-generation solution for NFV, I encourage you to join us on this journey with both the Intel Network Builders Fast Track and Intel ONP as we transform the network together.

Read more >

10 Mobile BI Strategy Questions: Communication

I am often amazed to discover that the lack of communication in technology projects stems not from a lack of resources but from wrong assumptions about what counts as communication in a mobile business intelligence (BI) strategy. Just as we know that social media analytics isn’t just about counting Facebook likes or tweets, we should know that in mobile BI an announcement e-mail with an attached instruction document isn’t, by itself, synonymous with communication.

 

When developing a mobile BI strategy, you must consider all facets of communication — not only multiple channels but also different formats. Moreover, you must pay attention to both the quality (effectiveness) and the quantity (volume and frequency) of the content to maximize its impact.

 

Consider All Facets of Communication

 

Don’t limit yourself to one channel or format of communication. Strive to leverage all avenues available to your team. If one doesn’t exist, explore options to develop one yourself or utilize your company’s shared service resources.

 

  • Start with the one that you know is part of the existing IT infrastructure — e-mail.
  • Include a social dimension with a collaboration or community page, especially if you have an existing one that can be used. If you don’t have one, maybe you can utilize a shared service site under the corporate umbrella.
  • Is your audience well versed in social media tools? Go with one of the many options that are easily available. They’re easy to set up and manage.
  • Create a newsletter and publish it with a fixed schedule like a newspaper.
  • Set up an online library or repository that’s easy to access and to use for key topics: report catalogs, instructions, user guides, tips, and so on.

 

But whatever you do, make sure that all of this is coordinated and accessible from a single point of collection, whether you call it your home page, community page, or something else. The last thing you want is for your users to get overwhelmed, or even confused about where to go — which would defeat the purpose of the communication in the first place.

 

Test Your Communication Early in the Game

 

Just like when you’re establishing the support infrastructure, you don’t wait until the last minute. There will be many opportunities for you to test your approach and stress your communication infrastructure. Take advantage of these opportunities before you go live. As part of your interactions with your users during the development or testing phase, ask for their input, which can be your guide in developing the right content, in the right format, for the right frequency.

 

Most importantly, observe! Each opportunity to collaborate with a customer (internal or external) is an opportunity of multiple proportions. Are they tech or mobile savvy? Do they use collaboration tools or stick to e-mail? If tablets are the target device for implementation, do they have one and is it properly configured to begin with? Do they bring it to the meetings? These observations can provide you with invaluable insight into how you should shape your communication.

 

Quality Is More Important than Quantity

 

You need to be short and to the point — that goes without saying. But this rule is even more important in today’s fast-paced business environment, crammed as it is with social media expression and a burning desire to multitask. Any social media expert will tell you that it’s not the number of tweets you send but the quality of the content you share that matters. The same principle applies to mobile BI communication.

 

You need to establish both credibility and engagement (your customers’ desire to connect with your mobile BI team) so that when they see an e-mail, tweet, or update from your team, they consider it a “must read.” Otherwise, your e-mail might fall through the cracks of their preset e-mail rule categories (Outlook’s rules and alerts tool, for example) and be deleted automatically from their inbox. Think about your personal experience for a moment. Isn’t that what separates your favorite magazine from junk mail?

 

Pay Attention to Detail

 

Attention to detail matters in communication even if the size of the message is small. Remember that your ultimate goal is to increase adoption and this can’t be accomplished if you frustrate your audience. You simply can’t afford unforced errors in mobile. At the very basic level, it requires that your communication assets (regardless of their format) are error-free and hassle-free.

 

Bottom Line: You Need to Find the Right Balance for Your Communication

 

Your communication approach must complement your overall mobile BI strategy. It becomes not only a conduit to inform your user base but also an opportunity to eliminate confusion and increase adoption. Finding the right balance for your communication is critical because it will be one more tool in your arsenal to help you achieve what matters most when it comes to business information — faster, better-informed decision making that contributes to growth and profitability.

 

What do you see as the biggest communication challenge in your mobile BI strategy?

 

Stay tuned for my next blog in the Mobile BI Strategy series.

 

Connect with me on Twitter at @KaanTurnali and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.

Read more >

Business Transformation in Mexico Is Happening Right Now


In my last blog post I looked at the Mexican appetite for large-scale business transformation and the five key considerations businesses should take into account when making the transition from industrial age (static, slow and immovable) to digital age (nimble, fast and innovative).

 

In this blog I’ll take a closer look at some of the companies leading this charge, all of whom I was lucky enough to meet on my recent trip to Mexico City, and examine some of the challenges they are facing.

 

Big Data Analytics – Pulling Value and Insights from Data

 

One of Latin America’s largest service providers is sitting on a goldmine of data. It has a clear focus on driving real and measurable business value from analytics and exploring how to maximise the impact of this on customers and the business itself. The next key step is for it to figure out:

 

  • What services?
  • What insights?
  • What can we charge and whom should we target?

 

Software-Defined Infrastructure at the Heart

 

Network function virtualisation (NFV) and software-defined networking (SDN) are central to this organisation’s strategy for change – a theme we are seeing accelerate in the service provider market segment. Broadly speaking, NFV takes a previously fixed network function and allows you to run it on a virtual machine (VM), while SDN separates management from the fixed appliance and allows you to manage resources centrally from one location. Ultimately this move towards a more software-defined infrastructure (SDI) should significantly increase network agility whilst reducing cost – a key business outcome that is critical to this company’s transformation. It’s clear to me that it is keen to push forward with both NFV and SDN at high speed. If you’re after more detail on NFV and SDN, I’d strongly recommend you read this excellent blog from Jim Henrys. The organisation is also planning a large-scale private cloud, along the lines of our own OpenStack model here at Intel. The key consideration for it here is how to roll this out in an orderly and secure manner.

 

Challenge of Privatisation – How Companies Need to Stay Competitive and Transform

 

Government regulations are opening up the public sector in Mexico, allowing for more competition and creating an opportunity for state-owned businesses to revisit how they stay ahead in markets that will be changing fast. One such organisation we met with knows it has to transform its business dramatically:

 

  • Firstly, agility is high on the agenda, as is diversification into communities.
  • Secondly, it plans to further improve its customer centricity by driving further into people’s homes, adding to the one million smart devices it already has installed across the country.

 

Cultural Change Goes Hand-in-Hand with Technological Change

 

It recognises that technology will play a key role in this modernisation but it is also under no illusion as to the importance of cultural change in this process. It won’t be an easy mountain to climb, but it certainly isn’t an insurmountable one.

 

Energy giant E.ON started out as a state-owned organisation before making the transition to a privately-owned company in the 1980s. Over more recent years it has transformed itself from a traditional utility to a modern, agile brand. It is now a key player in the digital home, offering a range of smart value-add services to attract and retain more customers in an increasingly competitive market segment.

 

Moving from Retail to Lifestyle Brand

 

One of Mexico’s most prestigious retail chains has already rolled out initiatives to drive customer stickiness and loyalty through tiered credit card programs and has ambitions to make the full transition from retailer to lifestyle brand.

 

Its vision is to create a full sensory customer experience, from physical stores to online, at every point in the omni-channel journey. It is very progressive and very exciting to see this level of innovation underway. The delivery of immersive, connected and safe experiences is a great way to win and retain customers – a key business outcome many enterprises are trying to achieve.

 

Inspirational, Emerging Lifestyle Brands

 

This organisation recognises that technology has to underpin this transformation and is fully bought into the SMAC stack model I discussed in my last blog post. The full transition to lifestyle brand will not be easy, but there are companies it can look to for inspiration.

 

BMW no longer views itself as a premium car manufacturer but rather as “a leading supplier of premium products and premium services for individual mobility”. It is re-imagining every aspect of its business – from how it designs and manufactures vehicles to how it engages with customers to better integrate products and services into our increasingly mobile lives. The car is only one small part of this.

 


How to Convince a Nation that Cards Are the Way Forward

 

Another company I met with has a goal to get its cards into the hands of the 60 million cardless Mexicans. Traditionally consumers and small businesses have been somewhat resistant to this. The opportunity lies in it being able to work out the best way to persuade this untapped audience that cards, rather than cash, are the way forward.

 

One example looks at getting cards into the 100,000 taxis operating in Mexico City. These drivers currently face stiff competition from Uber, which has taken the city by storm. Technology – and specifically the use of SMAC – is integral to Uber’s business model, meaning its drivers are more traceable and customers can read other customers’ feedback directly. The organisation I met with will be looking to replicate this sort of success for its customers and partners.

 

From Traditional to Digital Business

 

Overall, my conversations with customers in Mexico City were fascinating. It became clear to me over the course of my trip that the train towards digital business has well and truly left the station in Mexico. Many of the customers I spoke with are now looking at how to remove the obstacles on the track that are preventing them from putting their foot fully on the accelerator. I believe this is an area where Intel can help a great deal.

 

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

Read more >

NOdoop: Not Only Hadoop and What Comes after Map Reduce

[Diagram: storage writes in Map Reduce vs. a generic DAG]

From some of my previous posts on the impact of analytics and BI at Intel, the evolution of Intel IT’s use of Big Data, and the migration to Cloudera from another Hadoop distribution, you might get the impression that Hadoop and its native Map Reduce processing model are all there is to Big Data.  In this presentation from the 2015 Hadoop Summit in San Jose, Intel IT’s Seshu Edala and Joydeep Ghosh look at which Big Data use cases do not work well with Map Reduce, and describe their investigation of up-and-coming technologies that might do better on these use cases.

 

How can Map Reduce be problematic?  Intermediate results need to be written to storage.  While this may not be a problem for many batch processing jobs, for use cases that iteratively process data, such as analysis of continuously streamed log data, these intermediate writes to storage can drastically slow processing.  As a data stream is split and sent through a number of analysis functions, the processing can be modeled as a Directed Acyclic Graph (DAG), and Map Reduce is not particularly efficient at handling this kind of graph processing. The diagram above shows storage writes in Map Reduce vs. a generic DAG problem.

 

Much of Edala and Ghosh’s presentation is a look at the technologies that would be efficient and effective at handling DAG-type problems.  You can look at their presentation for their conclusions, but one of the more promising technologies is Spark.  Spark was developed in UC Berkeley’s AMPLab, commercialized by the company Databricks, and is supported in Cloudera’s Hadoop distribution. Another consequence of looking at post-Map Reduce technologies is that we have to rethink how Hadoop will fit as Big Data technologies evolve.  The diagram below shows how the original Hadoop/Map Reduce combination (with green fill) will evolve over time.
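
Edala and Ghosh’s slides are the authoritative source for their findings, but a standard illustration of why Spark suits DAG-shaped work is an iterative job in PySpark: the input is cached in memory once and reused on every pass, with no intermediate writes to storage between stages, which is exactly where chained Map Reduce jobs pay their penalty. A minimal PageRank-style sketch (the tiny graph is made up for the example):

```python
from pyspark import SparkContext

sc = SparkContext(appName="dag-demo")

# A tiny link graph, loaded once and cached in memory; every iteration
# below reuses it directly, with no intermediate write to disk.
links = sc.parallelize([("a", ["b", "c"]), ("b", ["c"]), ("c", ["a"])]).cache()
ranks = links.mapValues(lambda _: 1.0)

for _ in range(10):   # an iterative DAG, not a chain of Map Reduce jobs
    contribs = links.join(ranks).flatMap(
        lambda kv: [(dst, kv[1][1] / len(kv[1][0])) for dst in kv[1][0]])
    ranks = contribs.reduceByKey(lambda a, b: a + b) \
                    .mapValues(lambda r: 0.15 + 0.85 * r)

print(ranks.collect())
```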

 

A video of their session is available online, as is a Slide Share of their presentation materials.

[Diagram: evolution of Hadoop/Map Reduce alongside newer Big Data technologies]

Read more >

The Red Rock Canyon 100GbE Demo Tour Comes to IDF15

By Gary Lee, Ethernet Switch Product Manager at Intel

 

 

Intel’s 100GbE multi-host controller silicon, code-named Red Rock Canyon, has been on a worldwide demo tour, with stops at four major industry events in Europe and the U.S. since it was disclosed one year ago at Intel Developer Forum (IDF) San Francisco 2014.

 

And the tour continues at IDF San Francisco 2015 this week, with presentations and live demos in five customer booths.

 

Red Rock Canyon is designed to provide low-latency PCIe 3.0 connections into Intel Xeon® processors in Intel® Rack Scale Architecture applications or Ethernet-based connections into Intel Atom™ processors for dense microserver deployments. The product is also ideal for high-performance Network Functions Virtualization (NFV) applications, providing flexible high-bandwidth connections between Intel Xeon processors and the network.

 

Here’s where you can see Red Rock Canyon in action at IDF 2015:

 

Quanta and EMC at the Intel booth: This demo will use a software development platform from Quanta showing Intel Rack Scale Architecture software asset discovery and assembly using OpenStack. Also on display in this rack will be a 100GbE performance demo and a software-defined storage application.

Intel Rack Scale Architecture and PCN at the Intel booth: This demo will use a software development platform from Quanta demonstrating OpenDaylight and big data (BlueData) running on an Intel Rack Scale Architecture system.

 

At the Huawei Booth: Huawei will show its Intel Rack Scale Architecture-based system based on its X6800 server shelf, which includes Red Rock Canyon.

 

At the Inspur Booth: Inspur will show its new Intel Rack Scale Architecture platform, which will include a live demo of Red Rock Canyon running data center reachability protocol (DCRP), including auto-configuration, multi-path and failover.

 

At the Advantech Booth: Advantech will show its new FWA-6500R 2U network application platform, based on Intel Xeon processors, which uses a multi-host Red Rock Canyon switch module to connect these processors to the network through flexible Ethernet ports.

 

400G NFV Demo: This is a showcase of a new NFV datacenter-in-a-box concept featuring a scalable 400G NFV infrastructure using Network Services Header (NSH) and multiple 100Gbps servers. The 400Gbps front-end NFV server is based on Intel Xeon processors and takes advantage of the Data Plane Development Kit (DPDK) with matching interface drivers for Red Rock Canyon, plus Intel Scalable Switch Route Forwarding (S2RF) for scalable load balancing and routing.
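
The NSH piece is worth a closer look, since it is what carries a packet along its service chain: a 24-bit Service Path Identifier names the chain, and an 8-bit Service Index marks the packet’s position in it, decremented at each service hop. A sketch of just that 32-bit service-path word, per the NSH drafts of the time (the full header has additional fields):

```python
import struct

def pack_service_path(spi: int, si: int) -> bytes:
    """Pack NSH's service path word: 24-bit path ID + 8-bit service index."""
    assert 0 <= spi < 2**24 and 0 <= si < 2**8
    return struct.pack("!I", (spi << 8) | si)

def next_hop(word: bytes) -> bytes:
    """Each service function decrements the index before forwarding."""
    (value,) = struct.unpack("!I", word)
    return pack_service_path(value >> 8, (value & 0xFF) - 1)

word = pack_service_path(spi=42, si=3)   # chain 42, three functions to go
word = next_hop(word)                    # after the first function: index 2
```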

 

In addition to these physical product demonstrations, an NFV technical session titled “Better Together: Service Function Chaining and the Road to 100Gb Ethernet” will present a service function chaining use case based on Intel technology. The session will be held on Aug. 20 at 2:15 p.m.; search for session NFS012 on the IDF 2015 website. An additional Tech Chat about Red Rock Canyon applications will be held on Aug. 18 at 1 p.m.

Read more >

Transforming the Datacenter: A New Era in Storage & Memory Technologies

By David Cohen, Intel Corp

 

 

With the arrival of new non-volatile memory (NVM) technologies, we are suddenly in the midst of the biggest data center transformation in the past 30 years. Data centers are now poised to move data at unprecedented speeds.

 

This isn’t hyperbole. This is the way it will be with the implementation of solutions built around new technologies like NVM Express* (NVMe) over Fabrics, NVMe over PCI Express, and 3D XPoint™. These technologies will bring down the cost of non-volatile memory and replace hard disk drives (HDDs) with solid-state storage—while taking storage performance to unprecedented levels.

 

A case in point: The new 3D XPoint (pronounced “3D cross-point”) technology from Intel and Micron enables NVM speeds that are up to 1,000 times faster than NAND, today’s most popular non-volatile memory.1 With its unique material compounds and cross-point architecture, 3D XPoint technology is 10 times denser than conventional memory.2 We’re talking about a category of NVM that has the potential to revolutionize any device, application, or service that can benefit from fast access to large sets of data.

 

Take a closer look at 3D XPoint technology.

 

Let’s take a step back and look at the bigger picture. In enterprise data centers, spinning disk drives (HDDs), which continue to carry a lot of the data storage load, have always been really slow (in relative terms) while everything else has been really fast. This amounts to a bottleneck in the application performance pipeline. At some level, it doesn’t matter how fast today’s processors and network switches are when overall performance is tied to the speeds of yesterday’s data storage devices.

 

We are now in the process of rewriting this tired equation. With the arrival of solutions based on the new non-volatile storage technologies, storage will be really fast and, in comparison, everything else will be slower. For the software developer, this new reality of lightning-fast storage creates an imperative to optimize other parts of the performance pathway to remove overhead that causes latency.

 

Explore the evolution of storage media architectures.

 

In essence, the goal is to move operations that are not critical to performance out of the performance path—such as bookkeeping functions related to the management of transactions and data replication operations that could take place elsewhere. The idea is to tease latency out of the system and allow all things to happen in parallel. The ultimate goal is a balanced system that capitalizes on the full potential of the latest server, storage, and networking components.
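
One way to read “out of the performance path” in code: the foreground operation does only what durability requires, while bookkeeping is handed to a background worker so it runs in parallel instead of adding latency to every I/O. A schematic sketch, not any particular storage stack:

```python
import queue
import threading

bookkeeping = queue.Queue()

def write(store: dict, key: str, value: bytes) -> None:
    store[key] = value                           # the latency-critical part
    bookkeeping.put(("wrote", key, len(value)))  # deferred, off the hot path

def bookkeeper() -> None:
    while True:
        event = bookkeeping.get()
        if event is None:      # shutdown sentinel
            break
        # update stats, trigger replication, etc. -- in parallel with I/O

store: dict = {}
worker = threading.Thread(target=bookkeeper, daemon=True)
worker.start()
write(store, "k1", b"hello")
bookkeeping.put(None)
worker.join()
```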

 

At Intel, we are committed to making our end customers successful in the transformation to next-generation silicon-based storage. To that end, we are working actively with our ecosystem partners and end-user customers to help ensure that software is addressed in the right way—so that operating systems and applications can gain the greatest benefits from faster storage. At the same time, we are working closely with industry organizations to make sure there are standards in place that allow software developers and OEMs to capitalize on new storage technologies in a uniform way.

 

Here’s the bottom line: The future is upon us. It’s ours to make of it what we will. To maximize the potential of storage solutions, we must first embrace new technologies like NVM Express and Next Generation NVM, and then work actively to optimize the associated software for the new capabilities of NVM.

 

Our success in these efforts will throw open the doors to the next generation of data centers.

 

For a closer look at the new non-volatile memory technologies, and the future of storage itself, visit intel.com/storage.

 

1 Performance difference based on comparison between 3D XPoint technology and other industry NAND.

2 Density difference based on comparison between 3D XPoint technology and other industry DRAM.

Read more >

Remembering Intel’s Response to Hurricane Katrina, 10 Years Later

Ten years ago, just days after Hurricane Katrina battered the southern states of the US, I received an urgent call from work late at night.

 

“…make your way to Austin TX.  We are setting up a logistics center for tech deployment into affected areas in conjunction with the American Red Cross.  Be there tomorrow by 10am.” 

 

It was a call from a fellow IT employee, temporarily working in a hastily organized crisis center.  I sent a quick email to my boss and team stating where I was going.  The response was simple: we have your back; do what is needed.  Six hours later, while most of the world was captivated by images of the destruction on television, I was on a plane heading out to help, not knowing what to expect.

 

Intel Corporation has a long history of providing aid and assistance for people after global catastrophes.  The employees donate their time and money.  The company matches employee contributions, donates equipment, and sends relief through response organizations.  In some cases, for the most severe circumstances, Intel also sends its most valuable resource into the field, our experts.

 

After landing in Austin, I joined a small advance team at the American Red Cross (ARC) IT logistics center.  Corporate volunteers from Intel, Dell, and Cisco were there to help develop the systems, networks, and telecommunications solutions that would allow ARC field personnel to register victims, issue relief funds, and help people find missing family members.  More highly skilled volunteers came flooding in to join the team.  We were asked to build deployable PC kits that included networking and telecommunications.  Except there were no components to use, and the platform — hardware, operating systems, and applications — had not been architected.  Companies quickly began leveraging industry relationships to acquire the necessary devices and software.  Intel rerouted and donated a large shipment of PCs that our IT department had purchased for employees.  Dell and Cisco did the same with products earmarked for other customers.  We jumped in our cars and raided every electronics store in the city to fill in all the other necessities, such as keyboards, mice, network cards, and power strips.  In short order, the loading dock was filled with gear.


The teams began working on software modifications, network configurations, a base image, and then building the kits in an ad hoc assembly line.  Each one was assembled, tested, and then broken down to fit in travel containers we modified by hand.  After a long day, we had a solution architected, the kits built, and a semi-trailer filled and ready to go into the field.  Transport was being handled by a major trucking company.  Hours after the truck pulled away, we began receiving calls from the field asking where the equipment was.  They were in dire need, and the first sites had not received their scheduled deliveries.  A quick call to the transport company revealed they had put the shipment on hold.  The roads were not safe, electricity was still out across large swaths of the South, and fuel availability was unreliable in the affected areas.  This had led law enforcement to set up roadblocks and hold back most traffic.

 

We knew that, as part of the relief effort, we could get through the checkpoints.  So we asked for the trailer back, but the trucking company refused; we would not get it back for another day or more.  Hearing what was going on in the field, that was just unacceptable.  Our choice was clear.

 

The bulk of the team was in the break room joyfully relaxing and feasting on pizza after a long day of work in a hot warehouse, when we informed them of the situation.  All became quiet.

 

“What do we do?” 

 

Not knowing whether the volunteers would agree after such a grueling day, we proposed doing it all over again: build the kits and find a more reliable way of transporting them that night.  Without a single complaint, every last person stood up with gritty determination and filed back into the warehouse.  Then the real challenges began.  We didn’t have enough components left, or cases that would fit everything.

 

Dig deep.  It is times like these when I fully appreciate working with creative, motivated, and relentless problem solvers.  I assembled a team to solve the case problem and figure out how to get the components into a box half the size.  Admins were assigned to procure the necessary components from local stores.  The technologists were challenged with making the software builds install faster and with a greater success rate.  I pulled in the line managers and asked them to find a way to assemble the kits faster, and designated a safety officer to oversee the health of the volunteers and ensure tired people were not being run over by forklifts or crushed by falling cases.  After another long shift, the new kits were built.  It was close to midnight, and many had been working nonstop since 6 a.m.

 

But there was another problem.  None of the transportation companies could make the deliveries.  The kits needed to be dropped off in several locations across four states.  We tried every avenue, but nobody could get these cases where they needed to be.

 

Dig deeper.  It was time to take matters into our own hands.  I asked for volunteers to drive that night from Austin, eastbound into Louisiana, Alabama, and Mississippi.  I told them I would lead a caravan to crisscross the states and drop off kits and support personnel in affected areas.  We advised them of the dangers and warnings the federal emergency team had passed along to us.

 

I was shocked.  Exhausted people, covered in sweat and dirt, raised their hands to volunteer.  In a moment I will never forget, these people who were most comfortable in cubicles and labs were willing to go into the night, into areas deemed unsafe, to answer the call of helping others.  We could not guarantee a ride home, but we committed to tracking them and getting them back as soon as we could, after the kits were set up.  That did not discourage anyone.  Helping others was their mission.  We grabbed sleeping bags, bottles of water, and bug spray, then headed into the night.

 

Over the next 23 hours we drove across the area affected by Katrina, which killed over 1,200 people and resulted in $108 billion in property damage.  The storm displaced millions of people and disrupted communities across the South, which struggled to deal with the waves of Americans trying to find normalcy.  We passed emergency vehicles from dozens of neighboring states that had come down to help.  Along the way we dropped off kits and volunteers at aid stations, community centers, and schools converted into shelters.  My team ended up at the southernmost tip of Louisiana, setting up a satellite uplink for a remote aid station while power companies furiously worked to restore power.  There was a moment when I stood on the coastal road at the water’s edge and looked out into the gulf.  Helping those in need had brought me to that place.  It was both beautiful and peaceful.

 

During my time there we saw devastation, riots, hysteria, and an unbelievable number of displaced citizens.  We also saw hope, faith, fierce independence, sacrifice, and indomitable resiliency.  I spent two weeks in the field and came back with a lifetime of memories.  The volunteers I had the pleasure to serve with were intelligent, passionate, focused, and committed.  I saw companies rise beyond the desire for profit and truly give their very best in a time of need.  Walmart gave away water and critical supplies while maintaining the most amazing supply chain, even to remote areas.  Budget supplied hundreds of rental trucks, which were used to deliver equipment, water, and other supplies.  Tech partners Dell and Cisco sent their brightest to solve problems and put technology to work for ARC’s mission.  Intel, for its part, contributed on a number of fronts, including supporting a telethon to raise money, donating millions of dollars and equipment, and sending a few crazy people like me into the mix.  Although we were volunteers, Intel management paid us and allowed the use of corporate funds to purchase equipment and supplies needed in the field.  Every Intel employee who volunteered came back to their job without any negative impact to role or position.

 

Where there are natural disasters and catastrophes, you will find Intel volunteers taking up the cause of recovery.  Over the past decade Intel has responded to calls for assistance in the aftermath of tsunamis, earthquakes, hurricanes, and typhoons.  Some donate money and relief items, others commit their time, and a few even put their boots on the ground.  Volunteers all contribute in their own valuable way, and the corporation and management go to incredible lengths to support these efforts.  For a company perceived as full of computer nerds, geeks, and engineers who hide in cubicles and only think tech, I challenge that notion: Intel employees have a strong sense of community and responsibility.  Even ten years after being part of Intel’s Hurricane Katrina team, I am proud to stand and work among them.

 

Twitter: @Matt_Rosenquist

Read more >

It’s Here: The Convergence of Memory and Storage

For years, people have been talking about the coming convergence of memory and storage. To this point, the discussion has been largely theoretical, because the affordable technologies that enable this convergence were not yet with us.

 

Today, the talk is turning to action. With the arrival of a new generation of economical, non-volatile memory (NVM) technologies, we are on the cusp of the future—the day of the converged memory and storage media architecture.

 

The biggest news in this story is the announcement by Intel and Micron of 3D XPoint technology, which will enable a new generation of DIMMs and solid state drives (SSDs). This NVM technology couples storage-like capacity with memory-like speeds.

 

While it’s not quite as fast as today’s DDR4 technology, 3D XPoint (pronounced “3D cross-point”) is 1,000x faster than NAND and has 1,000x greater endurance. Intel DIMMs based on 3D XPoint will support up to 4x more system memory per platform compared to using only standard DRAM DIMMs, and are expected to offer a significantly lower cost per gigabyte than DRAM DIMMs.

 

With the enormous leaps in NVM performance offered with 3D XPoint technology, latencies are so low that for the first time NVM can be used effectively in main system memory, side by side with today’s DRAM-based DDR4 DIMMs. Even better, unlike DRAM, the Intel DIMMs will provide persistent memory, so that data is not erased in the event of loss of power.
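
For developers, the practical meaning of persistent memory is load/store access to data that survives power loss. You can approximate the programming model today with a memory-mapped file, with the caveat that a real 3D XPoint DIMM would be exposed by the operating system as persistent memory, not as an ordinary file:

```python
import mmap

SIZE = 4096
with open("/tmp/pmem-demo.bin", "w+b") as f:
    f.truncate(SIZE)
    with mmap.mmap(f.fileno(), SIZE) as region:
        region[0:5] = b"hello"   # a store into mapped memory, not a write()
        region.flush()           # ask that the bytes reach the medium

with open("/tmp/pmem-demo.bin", "rb") as f:
    assert f.read(5) == b"hello"   # the data outlives the mapping
```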

 

The biggest net gain is that a lot more data will be stored in memory, where it is closer to the processors. This is a hugely important advance when it comes to accelerating application performance. Intel DIMMs will allow for faster analysis and simulation of more complex models and fewer interruptions in service delivery to users, and will drive new software innovations as developers adjust their applications to take advantage of rapid access to vastly more data.

 

Elsewhere in the storage hierarchy, 3D XPoint technology will be put to work in Intel SSDs that use the NVM Express* (NVMe*) interface to communicate with the processors. Compared to today’s alternatives, these new SSDs will offer much lower latency and greater endurance.

These SSDs will be sold under the name Intel® Optane™ technology and will be available in 2016. The upcoming DIMMs based on 3D XPoint technology will be available in the next all-new generation of the Intel data center platform.

 

The good news is, these next-generation NVM SSDs and DIMMs are coming soon to a data center near you. Their arrival will herald the beginning of the era of the converged memory and storage media architecture—just in time for an onslaught of even bigger data and more demanding applications.

 

For performance info on 3D XPoint, please visit: http://www.intel.com/content/www/us/en/architecture-and-technology/non-volatile-memory.html.

Read more >

Implementing Software Defined Infrastructure for Hyper-Scale

Earlier this summer, Intel announced our Cloud for All initiative, signaling a deepening engagement with the cloud software industry on SDI delivery for mainstream data centers.  Today at IDF 2015, I had my first opportunity since the announcement to discuss why Cloud for All is such a critical focus for Intel, for the cloud industry, and for the enterprises and service providers that will benefit from feature-rich enterprise cloud solutions. Delivering the agility and efficiency found today in the world’s largest data centers to broad enterprise and provider environments has the opportunity to transform the availability and economics of computing, and to reframe the role of technology in the way we do business and live our lives.

 

Why this focus? Building a hyperscale data center from the ground up to power applications written specifically for cloud is a very different challenge than migrating workloads designed for traditional infrastructure to a cloud environment.  To move traditional enterprise workloads to the cloud, either an app must be rewritten as cloud native, or the SDI stack must be optimized to support enterprise workload requirements, supporting things like live workload migration, rolling software upgrades, and failover. Intel’s vision for pervasive cloud embraces both approaches, and while we expect applications to be optimized as cloud native over time, near-term cloud adoption in the enterprise hinges on SDI stack optimization that supports both traditional and cloud-native applications.

 

How does this influence our approach to industry engagement in Cloud for All?  It means that we need to enable a wide range of potential usage models while being pragmatic about the wide range of infrastructure solutions that exists across the world today.  While many are still running traditional infrastructure without self-service, there is a growing trend towards enabling self-service on existing and new SDI infrastructure through solutions like OpenStack, providing the well-known “give me a server” or “give me storage” capabilities: Cloud Type A, server focused.  Meanwhile, software developers over the last year have grown very fond of containers and are thinking not in terms of servers but in terms of app containers and connections: Cloud Type B, process focused.  If we look out into the future, we can assume that many new data centers will be built with the latter as the foundation, providing a portion of their capacity to traditional apps: a convergence of usage models that brings the infrastructure solutions forward.

 

[Diagram: potential paths to SDI]

The enablement of choice and flexibility, the optimization of the underlying Intel architecture-based infrastructure, and the delivery of easy-to-deploy solutions to market will help secure broad adoption.

 

So where are we with optimization of SDI stacks for underlying infrastructure? The good news is, we’ve made great progress with the industry on intelligent orchestration.  In my talk today, I shared a few examples of industry progress.

 

I walked the audience through one example with Apache Mesos, detailing how hyper-scale orchestration is achieved through a dual-level scheduler, and how frameworks can be built to handle complex use cases, even storage orchestration.  I also demonstrated a new technology for Mesos oversubscription that we’re calling Serenity, which helps drive maximum infrastructure utilization.  This has been a partnership between MesoSphere and Intel engineers in the community to help lower the TCO of data centers, something I care a lot about: real business results with technology.
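
For readers new to Mesos, the “dual-level” design means the Mesos master makes resource offers and each framework’s own scheduler decides what to do with them. The skeleton of that second level, using the classic Mesos Python bindings; the task details are illustrative, and this is not the Serenity code itself:

```python
from mesos.interface import Scheduler, mesos_pb2

class DemoScheduler(Scheduler):
    """Framework-side (second-level) scheduler: accept or decline offers."""

    def resourceOffers(self, driver, offers):
        for offer in offers:
            cpus = sum(r.scalar.value for r in offer.resources
                       if r.name == "cpus")
            if cpus < 1.0:
                driver.declineOffer(offer.id)   # the second-level decision
                continue
            task = mesos_pb2.TaskInfo()
            task.task_id.value = "demo-task"
            task.slave_id.value = offer.slave_id.value
            task.name = "demo"
            task.command.value = "echo hello"
            res = task.resources.add()
            res.name = "cpus"
            res.type = mesos_pb2.Value.SCALAR
            res.scalar.value = 1.0
            driver.launchTasks(offer.id, [task])
```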

 

I also shared how infrastructure telemetry and analytics can deliver improved stack management, including an example of a power- and thermal-aware orchestration scheduler that has helped Baidu net a data center PUE of 1.21, with 24% potential cooling energy savings.  Security is also a significant focus, and I walked through an approach that uses Intel VT technology to improve container security isolation.  In fact, CoreOS announced today that its rkt 0.8 release has been optimized for Intel VT using the approach outlined in my talk, and we expect more work with the container industry towards delivering security capabilities that today exist only in traditional hypervisor-based environments.

 

But what about data center application optimization for SDI?  For that, I ended my talk by announcing the first Cloud for All Challenge, a competition for developers to rewrite infrastructure software applications for cloud-native environments.  I’m excited to see the developer response to our challenge, because the opportunity is ripe for introducing cloud-native applications to the enterprise using container orchestration, and Intel wants to help accelerate the software industry towards delivery of cloud-native solutions.  If you’re an app developer, I encourage you to enter this Challenge!  The winning team will receive $5,000 in cold, hard cash and bragging rights for being at the forefront of your field.  Simply contact cloudforall@intel.com for information, and please see the preliminary entry form.

Read more >

Using 2 in 1s for Disruptive Innovation at Front Porch

In a time of rapid change, innovation is crucial for any enterprise. But I haven’t seen many organizations approach innovation as thoughtfully and systematically as Front Porch. This California-based nonprofit supports a family of companies offering assisted living, skilled nursing, retirement, and other communities across four states.

 

Front Porch has a Center for Innovation and Wellbeing as well as a commitment to disruptive, cause-based innovation called Humanly Possible℠. “We want everyone at every part of our organization to focus on what’s possible and what’s next—to look at how we can do what we do better, to bring new value to people we serve,” says Kari Olson, chief innovation and technology officer for Front Porch and president of its innovation center.


Olson and other Front Porch leaders were quick to see value in flexible 2 in 1 devices based on Intel® technologies and Windows.

 

“Two-thirds of our workforce are out and about, not sitting at a desk,” Olson says. “If we can give them portable devices that let them do their computing in a secure, reliable way, when and where they need to, we can have a big impact—both on their productivity and on our ability to meet the needs of the people we serve. If we can do that and stay consistent with our enterprise applications and tools—that’s huge.”



Front Porch staff saved time and increased patient engagement by using their 2 in 1 devices in members’ residential rooms, care centers, activity rooms, team meetings, and other settings.


But could 2 in 1 devices help deliver transformative value? And how would Front Porch’s people-focused helping professionals—who often have an “I’ll use it if I have to” attitude toward technology—feel about the new devices?

 

Intel just completed a case study that answers these questions. In it, Front Porch leaders describe the surprises they encountered as employees ranging from nurses to activities coordinators began using 2 in 1s. Front Porch also shares best practices for mobile technology adoption and highlights the benefits they’re seeing for patient engagement, organizational efficiency, quality of care, and more.

 

I found their results fascinating. They’re relevant not just for healthcare, but for any organization that wants to empower a mobile workforce.


Read the case study and let me know your thoughts. Where might enterprise-capable 2 in 1s add value in your organization? Post a comment, or join and participate in the Intel Health and Life Sciences Community.

 

Learn more about Intel® Health & Life Sciences.

 

Read more about Front Porch and the Front Porch Center for Innovation and Wellbeing.

 

Stay in touch: @IntelHealth, @hankinjoan

Read more >

Intel Rack Scale Architecture 1.0 Ready for Developers

By Jay Kyathsandra, Intel



The first Intel Rack Scale Architecture Developer Summit kicks off a busy week for the Intel Developer Forum 2015

 

Over the past months as Intel has been preparing to roll out Intel Rack Scale Architecture, industry partners, software ISVs, and developers have not been waiting idly on the sidelines. With high interest from cloud service providers, telcos, and enterprises focused on next-generation software-defined infrastructure and big data implementations, Intel Rack Scale Architecture has been developing along with a vibrant and growing ecosystem of supporting standards bodies, OEMs, and ISVs.

 

Why all the industry support for Intel Rack Scale Architecture? By defining a logical architecture that disaggregates and pools compute, storage, and network resources, rack scale architecture can greatly simplify the management of these resources. Even better, the rack scale approach enables data center operators to dynamically compose resources based on workload-specific demands, enabling user-defined performance, higher utilization, and interoperability—an essential capability for cloud deployments. And now Intel Rack Scale Architecture is ready for developers.
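
“Composing” a node is ultimately an API call against pooled inventory. As a purely hypothetical sketch of the shape such a request could take — the endpoint, payload fields, and host below are all invented placeholders, since the actual specifications and APIs are the ones being released to developers:

```python
import requests

POD_MANAGER = "https://podm.example.com"   # hypothetical pod manager

# Invented payload: ask the rack to assemble a node from pooled resources.
spec = {
    "Processors":  [{"TotalCores": 16}],
    "Memory":      [{"CapacityGiB": 128}],
    "LocalDrives": [{"CapacityGiB": 960, "Type": "SSD"}],
}

resp = requests.post(POD_MANAGER + "/nodes/actions/allocate", json=spec)
resp.raise_for_status()
print("Composed node at:", resp.headers.get("Location"))
```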

 

The architecture was officially presented to a very receptive audience at today’s Intel Rack Scale Architecture Developer Summit in San Francisco. Kicking off the event was Ryan Parker, Intel General Manager, who shared insights into next-generation cloud infrastructure trends. The agenda also included OEM partners announcing upcoming products, implementation and deployment scenarios, sessions with technical experts, and early customer testimonials. A popular topic, data center security and trends, led by Intel VP Curt Aubley, wrapped up the presentations.

 

For those attending IDF, there will be several demos featuring Intel Rack Scale Architecture on the show floor in the Data Center and Software Defined Infrastructure Community. Look for more information to become available for solution developers in the coming weeks. Detailed specifications and APIs will soon be available for download on Intel.com/IntelRackScaleArchitecture.

 

If you would like to learn more about Intel Rack Scale Architecture and how it will re-architect the data center of today, check out this video.

 

Read more >