Recent Blog Posts

Part III: Future Health, Future Cities – Intel Physical Computing Module at IDE

by Chiara Garattini & Han Pham

 

Read Part I

Read Part II

 

In the third and final instalment of our Future Health, Future Cities blog series, we look at the final theme, Mapping Cities (Creatively), which showcases creative ideas for allocating healthcare resources and for using sound to produce insights into complex health data. The projects come from the physical computing module of the Innovation Design Engineering (IDE) master's programme run jointly by Imperial College London and the Royal College of Art (RCA).


Mapping cities (creatively)

In considering how to allocate resources, we also need to understand where resources are most needed, and how this changes dynamically within a city.

 


Figure 1. Ambulance Density Tracker


Student Antoni Pakowski asked how ambulances might be distributed within a city to shorten response times for critical cases, and suggested this could be supported by anonymously tracking people via their mobile phones. The expected service window for ambulance arrival in critical care cases is eight minutes, yet in London only around 40 percent of calls meet that target. This may be partly because ambulances are tied to static base stations. How could the location of an ambulance change as the density of people changes across a city?

 

The ambulance density tracker (Figure 1) combined a mobile router and a hacked PirateBox to anonymously retrieve the IP addresses of phones actively seeking Wi-Fi, creating a portable system for tracking the density of transient crowds. The prototype was designed to rely on only one data point within a given region, requiring less processing than an embedded phone app. He also created a scaled-down model of the prototype to suggest a future small device that could be affixed to static and moving infrastructure, such as taxis, within the city.
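As a rough illustration of the underlying idea only (not the student's PirateBox build), the sketch below estimates crowd density by counting unique devices that broadcast Wi-Fi probe requests. It uses the third-party scapy library; the monitor-mode interface name and the hashing salt are assumptions, and MAC randomisation on modern phones would blur the count.

```python
# Hypothetical sketch: estimate transient crowd density by counting unique
# devices that send Wi-Fi probe requests. Requires scapy and a wireless
# interface in monitor mode (the name "wlan0mon" is an assumption).
import hashlib
import time

from scapy.all import sniff
from scapy.layers.dot11 import Dot11

SALT = b"rotate-me-daily"   # salt the hashes so raw MAC addresses are never stored
seen = {}                   # anonymised device id -> last time seen
WINDOW_S = 300              # forget devices not heard from in the last 5 minutes


def handle(pkt):
    # Management frame, subtype 4 == probe request (a phone looking for Wi-Fi)
    if pkt.haslayer(Dot11) and pkt.type == 0 and pkt.subtype == 4 and pkt.addr2:
        anon_id = hashlib.sha256(SALT + pkt.addr2.encode("ascii")).hexdigest()
        seen[anon_id] = time.time()


def density():
    """Number of distinct (anonymised) devices heard recently."""
    cutoff = time.time() - WINDOW_S
    return sum(1 for t in seen.values() if t >= cutoff)


if __name__ == "__main__":
    # Sniff for 60 seconds, then report a single density figure for this spot.
    sniff(iface="wlan0mon", prn=handle, store=False, timeout=60)
    print(f"Approximate devices nearby: {density()}")
```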

 

Although the original use case needs further design work to be clearer, the prototype itself, a lightweight and anonymous device that offers a portable proxy for transient crowd density, could be useful as a complementary technology for other design projects geared toward providing impromptu, ad hoc health resources within a city as crowds shift.

 


Figure 2. ‘Citybeat’


The second project in this category, ‘Citybeat’, is by student Philippe Hohlfeld (Figure 2). Philippe wanted to look at the sound of a city and not only create ‘sound’ maps of it, but also capture its ‘heartbeat’ by exploring ‘sonified’ feedback from the city. His thinking originated from three distinct scientific endeavours: a) turning preliminary data from the ATLAS experiment’s Higgs boson search at CERN into a symphony to celebrate the connectedness of different scientific fields; b) turning solar flares into music at the University of Michigan to produce new scientific insights; and c) a blind scientist at NASA turning the gravitational fields of distant stars into sound to determine how they interact.

 

The project looked specifically at the Quality of Life Index (safety, security, general health, culture, transportation, etc.) and tried to attribute sounds to its different elements so as to create a ‘tune’ for each city. Sonification is good for finding trends and for comparing two entities. What we most liked about the project, though, was the idea of using sound rather than visual tools to produce insights into complex data.


Personal data from wearables, for example, is generally presented in visual dashboards. Even though these are meant to make the data easier to digest, they do not always succeed. Sound could be quicker than a visual display at expressing, for example, rapid or slow progress (an upbeat tone) or regress (a downbeat one). In the current landscape of information overload, exploring sound as an alternative way of summarising usage struck us as very interesting.
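As a minimal sketch of the sonification idea (not Philippe's implementation), the snippet below maps a series of index values to pitches and writes them out as a short WAV file using only the Python standard library; the pitch range and note length are arbitrary choices, and the input scores are made up.

```python
# Hypothetical sonification sketch: turn a series of index values into a
# short "tune" where higher values become higher pitches. Standard library only.
import math
import struct
import wave

SAMPLE_RATE = 44100
NOTE_SECONDS = 0.3


def value_to_freq(value, lo, hi, f_lo=220.0, f_hi=880.0):
    """Linearly map a value in [lo, hi] to a frequency between two octaves of A."""
    if hi == lo:
        return f_lo
    return f_lo + (value - lo) / (hi - lo) * (f_hi - f_lo)


def sonify(values, path="citybeat.wav"):
    lo, hi = min(values), max(values)
    frames = bytearray()
    for v in values:
        freq = value_to_freq(v, lo, hi)
        for n in range(int(SAMPLE_RATE * NOTE_SECONDS)):
            sample = 0.4 * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
            frames += struct.pack("<h", int(sample * 32767))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)        # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))


if __name__ == "__main__":
    # Made-up quality-of-life scores for one city over eight periods.
    sonify([62, 64, 63, 67, 70, 69, 73, 75])
```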


Figure 3. ‘Bee gate’

Finally, the last project selected for this list is also one of the most unusual. Student James Batstone wanted to explore how bees interact with polluted environments and how they could be used as part of reclamation or decontamination programmes. He imagined a city (or territory) abandoned because of pollution, with bees used to collect pollen that could be analysed to establish whether the territory was ready to be reclaimed for human habitation.

He built a prototype with ‘bee gates’ that allow pollen to be harmlessly captured from individual insects as they return to the hive (Figure 3). He also theorised about complementing this with automated software that uses cameras to track the bees and analyse their dances to establish where the pollen came from. What we liked about this project is the imaginative idea of using bees to monitor air and land quality, analysing vegetation through their pollen and detecting radiation and pollutants in honey, to create maps of land quality. Using natural resources and naturally occurring events to complement what technology can do (and vice versa) is how sustainable solutions will be achieved in the long term.

 

Final thoughts

As part of our work at Intel, we collaborate with the world’s top universities to look at the future of cities with an eye toward the intersection of technology, environment, and social sustainability. In our groups one can find entrepreneurs, designers, hacktivists, engineers, data artists, architects and more.

 

We seek to support the same diversity of inspiration in today’s students, the future technology innovators, by exploring how to connect creativity with technology for more vibrant, connected cities and communities. In many ways, working with first-year master’s students offers a refreshing perspective on how to open these questions with a beginner’s mind-set and embrace simplicity in the face of rising information: our digital traces and data footprints will keep growing, but our time to make sense of them won’t.

 

Physical computing is coming into play in new ways, and more often. It will not be enough to get lost in a screen: the interface of tomorrow will be everywhere, and interactions will leap off screens into the real world. ‘Future Health, Future Cities’ suggested ways to consider the role of physical computing in helping to create more sustainable services: by making transparent what services are needed and where, by exploring how to communicate new urban information streams simply and well, and, last but not least, by reflecting on how to deliver resources where they are most needed in a constantly changing city.

 

 

*Concepts described are for investigational research only.

**Other names and brands may be claimed as the property of others.

Read more >

Intel at Citrix Synergy 2015: Delivering a Foundation for Mobile Workspaces

From May 12-14, Citrix Synergy 2015 took over the Orange County Convention Center in Orlando, providing a showcase for the Citrix technologies in mobility management, desktop virtualization, server virtualization and cloud services that are leading the transition to the software-defined workplace. Intel and Citrix have worked [together closely](https://www.youtube.com/watch?v=gsm26JHYIaY) for nearly 20 years to help businesses improve productivity and collaboration by securely delivering applications, desktops, data and services to any device on any network or cloud. Operating Citrix mobile workspace technologies on Intel® processor-based clients and Intel® Xeon® processor-based servers can help protect data, maintain compliance, and create trusted cloud and software-defined infrastructures that help businesses better manage mobile apps and devices, and enable collaboration from just about anywhere.

 

During Citrix Synergy, a number of Intel experts took part in presentations to highlight the business value of operating Citrix software solutions on Intel® Architectures.

 

Dave Miller, director of Intel’s Software Business Development group, appeared with Chris Matthieu, director of Internet of Things (IoT) engineering at Citrix, to discuss trends in IoT. In an interview on Citrix TV, Dave and Chris talked about how the combination of Intel hardware, Intel-based gateways, and the Citrix* Octoblu IoT software platform makes it easy for businesses to build and deploy IoT solutions that collect the right data and help turn it into insights that improve business outcomes.

 

Dave looked into his crystal ball to discuss what he sees coming next for IoT technologies. He said that IoT’s initial stages have been about delivering products and integrated solutions to create a connected IoT workflow that is secure and easily managed. This will be followed by increasingly sophisticated technologies for handling and manipulating data to bring insights to businesses. A fourth wave will shift IoT data toward fueling predictive systems, built on the increasing intelligence of compute resources and data analytics.

 

I also interviewed David Cowperthwaite, an engineer in Intel’s Visual and Parallel Computing Group and an architect for virtualization of Intel Processor Graphics. In this video, we discussed how Intel and Citrix work together to deliver rich virtual applications to mobile devices using Citrix* XenApp.  David explained how running XenApp on the new Intel® Xeon® processor E3 v4 family  with Intel® Iris™ Pro Graphics technology provides the perfect platform for mobile delivery of 3D graphics and multimedia applications on the highly integrated, cartridge-based HP* Moonshot System.  

 

One of the more popular demos showcased in the Intel booth featured the Intel® NUC and Intel® Compute Stick as zero-client devices. Take a live look in this video. We also released this joint paper on XenServer; take a look.

 

For a more light-hearted view of how Citrix and Intel work together to help you Work Anywhere on Any Device, watch this fun animation.

Read more >

Creating Value—and Luck—in Hospitality. Part I: Charting the Future of the Hotel Experience

I travel a lot for work, and the truth is that it can be a pretty painful experience. I spend a lot of time thinking about how travel and hospitality will be in the future, but perhaps I think about it most when I’m stuck in yet another airport, surrounded by screaming children, wishing I was anywhere but there. What new experiences are going to be available five or 10 years from now? From the moment I leave my house in the morning to the moment I order my guilty snack from room service at midnight, how will that whole process be made better? When you’ve just traveled half-way around the world, you’re dog tired, and waiting in line to check in to your hotel, there are no more important questions.

 

As luck would have it, these are just the sorts of issues Intel’s research team of trained ethnographers, anthropologists, and social scientists are exploring. And what they are finding is giving us a glimpse of an exciting retail, hospitality, and entertainment future distinguished by amazing convenience and control for guests and unprecedented opportunity for the hospitality industry.

 

It’s All About the Customer

 

For hoteliers, the successful ones anyway, it starts with one overriding goal: Deliver the best possible guest experience. To do that means getting to know guests better, learning their likes and preferences, and then delivering high-quality services and experiences that are personalized to their needs.

 

Travelers are becoming savvier and more demanding. Only through that deeper relationship with guests can hotels expect to offer them the truly customized and personalized experiences needed to win and sustain our loyalty.

 

Customization vs. Personalization

 

Let’s start with customization. It’s different than personalization, which I’ll get to in a minute. When we’re talking about customized experiences, we are referring to the ability to deliver on the specific requests of customers: I like the top floor, I have egg white omelets and fruit each morning, I want my room kept at a steady 68 degrees. Based on those identified preferences, brands can tailor the experience to us.

 

By contrast, personalization goes one step further by anticipating what we want, and offering or providing it before we ask. Using data analytics to gather information, interpret it, and optimize the stay, hotels will be able to offer proactive personalization. That will mean providing a host of new experiences, as well as new and greater value.
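A toy sketch of the distinction (guest names, fields, and thresholds are all hypothetical): customization applies preferences the guest has explicitly stated, while personalization infers an offer from past behaviour before the guest asks.

```python
# Hypothetical sketch contrasting customization (apply stated preferences)
# with personalization (anticipate a preference from behaviour).
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class GuestProfile:
    name: str
    stated_prefs: dict = field(default_factory=dict)   # explicit requests
    stay_history: list = field(default_factory=list)   # observed behaviour


def customize_room(room: dict, guest: GuestProfile) -> dict:
    """Customization: honour what the guest has told us they want."""
    room.update(guest.stated_prefs)
    return room


def personalize_offers(guest: GuestProfile) -> list:
    """Personalization: offer something the guest hasn't asked for yet."""
    offers = []
    ordered = Counter(item for stay in guest.stay_history
                      for item in stay.get("room_service", []))
    if ordered and ordered.most_common(1)[0][1] >= 3:
        favourite = ordered.most_common(1)[0][0]
        offers.append(f"Pre-stock breakfast order: {favourite}")
    late_checkouts = sum(1 for stay in guest.stay_history if stay.get("late_checkout"))
    if late_checkouts >= 2:
        offers.append("Proactively offer late checkout")
    return offers


guest = GuestProfile(
    name="Frequent Traveller",
    stated_prefs={"floor": "top", "temperature_f": 68},
    stay_history=[
        {"room_service": ["egg white omelet"], "late_checkout": True},
        {"room_service": ["egg white omelet"], "late_checkout": True},
        {"room_service": ["egg white omelet"], "late_checkout": False},
    ],
)
print(customize_room({"temperature_f": 72, "tv_channel": "news"}, guest))
print(personalize_offers(guest))
```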

 

A Seamless, Integrated Experience

 

Once we start opting in to share our information (and allowing guests to opt in to these kinds of services is critical), hotels will be able to deliver not only what we want, but also when and how we want it. By tapping into the Internet of Things (IoT), our entire journey and stay, from when and how we like to check in to running that videoconference, will be easily and seamlessly integrated into the experience without requiring that we reinvent the wheel each visit.

 

Imagine checking in via your smartphone and avoiding the line, with wayfinding to help you navigate the property. The room is set to your ideal temperature and the TV to your preferred channel. You lay your tablet or laptop down and it immediately begins to charge wirelessly. Using a provided tablet, rather than the stained and cumbersome menu usually found on the desk, you order food in a single click. Your laptop then seamlessly connects with the TV, so you can use the larger screen to chat with family, prepare for your presentation, and then view your movies.

 

And that’s just the beginning. Ease and control will be the watchwords, and the new standards. Using technology and data, hoteliers will be creating what will look like luck or serendipity, but will really be the benefits of a deep understanding of you and your needs.

 

To see an example of what the future will look like, visit the Connected Room prototype Microsoft is unveiling, with Intel’s help, at the HITEC show (booth 2115) June 15-18 in Austin. If you want to read more about what’s coming in retail, take a look at the comprehensive white paper on The Second Era of Digital Retail which I authored last year.

Read more >

Change Your Desktops, Change Your Business. Part 3: Make IT More Effective

In the last two posts in this series, we looked at two issues we’ve all got on our radar: productivity and power savings. They’re both huge targets for today’s businesses because they speak directly to the bottom line. The next topic also translates to real dollars, and that’s IT effectiveness.

 

Now, it goes without saying that we rely on our IT departments to keep us up and running. But for them to be effective, we have to give them the tools they need to get the job done. That starts with making sure people have reliable PCs. Doing so can help IT lower costs and reduce employee downtime, while also giving them the ability to support more systems.

 

So what about those PCs? What are we talking about? The study we’ve been exploring in the last few desktop blog articles used new All-in-Ones and Mini PCs, each with the latest Intel® vPro™ technology as well as the newest version of Intel® Active Management Technology (Intel® AMT).1

 

These new systems let IT access the graphical user interface and control desktops remotely, no matter the power state of the system. With older fleets, a non-operational desktop without out-of-band access meant sending someone to physically fix the system. If you’ve ever had to do that, you know the lost time, productivity, and cost associated. Not ideal.

 

For the study, Keyboard-Video-Mouse (KVM) Remote Control was included in the new systems, but not in the older release of Intel AMT (5.2) that was installed on the aging systems.2 The difference in response time is striking.

 

Here’s the scenario: Imagine one of your employees at a remote site calls into the help desk; her desktop is down. In the old way of doing things, a tech would be dispatched, but probably not until the next day. That results in somewhere in the neighborhood of eight hours of downtime, a painful reality for any business.

 

But the new systems explored in the study, the ones with KVM? Employees waited only 15 seconds for IT to initiate the KVM Remote Control session. Those kinds of savings are also felt in the bottom line. The study revealed that the All-in-One and Mini Desktop would reduce the cost of employee downtime by $215.08 for the 10-minute software repair. That’s a saving of nearly 98 percent.3

 

Plus, don’t forget the savings in time spent by IT. The newer desktops, combined with avoiding that travel time, cut the repair cost by $39.65, leading to a savings of some 85 percent. And the savings go up from there the older your legacy systems are. You don’t even want to know what it’s likely to cost once you’re beyond the warranty.
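To make those percentages concrete, here is a small sketch of the arithmetic. The dollar reductions and rounded percentages come from the study quoted above; the baseline costs are simply what those figures imply, not numbers stated in the study.

```python
# Back-of-the-envelope check of the downtime and repair-cost savings quoted above.
# Only the dollar reductions and rounded percentages appear in the study summary;
# the "implied baseline" is derived here for illustration.
def implied_baseline(reduction: float, pct_saving: float) -> float:
    """If a $reduction equals pct_saving of the original cost, estimate that cost."""
    return reduction / pct_saving


def pct_saving(old_cost: float, new_cost: float) -> float:
    return (old_cost - new_cost) / old_cost * 100


# Employee downtime for the 10-minute software repair: $215.08 less, ~98% saving.
downtime_baseline = implied_baseline(215.08, 0.98)      # roughly $219
print(f"Implied old downtime cost: ${downtime_baseline:,.2f}")
print(f"Saving: {pct_saving(downtime_baseline, downtime_baseline - 215.08):.1f}%")

# IT repair effort: $39.65 less, ~85% saving.
repair_baseline = implied_baseline(39.65, 0.85)          # roughly $47
print(f"Implied old repair cost: ${repair_baseline:,.2f}")
print(f"Saving: {pct_saving(repair_baseline, repair_baseline - 39.65):.1f}%")
```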

 

In the final installment in the series on PC refresh, let’s dive into how to actually leverage the newest technology. And don’t forget, you can review the complete study cited above.

Join the conversation using #IntelDesktop.

 

This is the third installment of the “Change Your Desktops, Change Your Business” series in the Desktop World Tech Innovation Series. To view the other posts in the series, click here: Desktop World Series.

 

1. For more information on Intel AMT, visit http://www.intel.com/content/www/us/en/architecture-and-technology/intel-active-management-technology.html?wapkw=amt

2. https://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide/default.htm?turl=WordDocuments%2Fkvmandintelamt.htm

Read more >

Smart Ideas for Smarter Homes


The idea of a smart home – and smart buildings in general – has been a feature of popular science fiction for decades. But while cinemas have shown us visions of apartments that talk back to us and gadgets that are almost part of the family, the real homes we live in have been getting incrementally smarter.


The Domestic IoT


It wasn’t that long ago that Internet-enabled television was a vision of the future. Now it’s the norm. Once the idea of controlling our homes from a single device was far-fetched. Now we walk around with the Internet in our pocket and an app to control our lighting and entertainment systems.


Home automation products that focus on reducing energy consumption have also been around for some years. As sensors and compute modules get smaller and more powerful, the potential becomes much greater – and now the possibilities of the Internet of Things (IoT) are upping the game still further.


Comfort and Security


Current platforms are now expanding beyond energy efficiency to include safety and comfort features. For example, Yoga Systems (a member of the Intel Internet of Things Solutions Alliance) is using the Intel IoT Gateway to create an intelligent smart home platform that connects to nearly anything: wired and wireless security detectors, cameras, thermostats, smart plugs, lights, entertainment systems, locks, and appliances.


With most current home automation systems, appliances ‘talk’ (via applications) to residents or building managers who can then take action. By creating a domestic IoT, however, appliances could start talking to each other. So instead of thermostats being adjusted by an app, they can respond to a window being opened or a door unlocked. A cooker hood could switch itself on or off according to heat rising from the hob.

The IoT is all about connecting devices, collecting data and ‘crunching’ it by applying advanced analytics. This analytics capability makes it possible to identify trends in behaviour. That creates real potential, for example, in the area of assisted living, where motion and heat sensors can monitor the activity of vulnerable people and raise rapid alerts when there are unexpected changes to their routine.
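A minimal sketch of both ideas, device-to-device rules and routine-based alerts, is shown below. The device names and thresholds are made up, and a real deployment would run on a gateway platform rather than a script like this.

```python
# Hypothetical smart-home sketch: simple device-to-device rules plus a
# routine check for assisted living. Device names and thresholds are made up.
from datetime import datetime, timedelta


class Home:
    def __init__(self):
        self.thermostat_on = True
        self.last_motion = datetime.now()
        self.alerts = []

    # --- device-to-device rules -------------------------------------------
    def on_event(self, event: str):
        if event == "window_opened":
            # Don't heat the street: pause the thermostat instead of waiting
            # for a resident to notice and open an app.
            self.thermostat_on = False
        elif event == "window_closed":
            self.thermostat_on = True
        elif event == "motion_detected":
            self.last_motion = datetime.now()

    # --- routine monitoring for assisted living ---------------------------
    def check_routine(self, quiet_limit=timedelta(hours=12)):
        # If no movement has been sensed for an unusually long time,
        # raise an alert for a carer to follow up.
        if datetime.now() - self.last_motion > quiet_limit:
            self.alerts.append("No activity detected for 12 hours - check in")


home = Home()
home.on_event("window_opened")
print("Thermostat on?", home.thermostat_on)   # False: the window rule fired
home.on_event("window_closed")
home.check_routine()
print("Alerts:", home.alerts)
```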


Real Homes. Real Lives


In fact, our homes are where most people are likely to have the most direct interaction with the IoT. It’s all about building intelligence into the fabric of the building. But to get consumers really engaged with the possibilities, the following are crucially important:

  • World-class security and data privacy. This requires hardware and software-level protection to secure data between the home, the cloud, and any mobile devices being used.
  • Interoperability. As with any IoT implementation, there are plenty of players likely to be involved. Consumers want to retain choice about what goes into their homes, so all technologies need to work seamlessly together.
  • Form factors. Building intelligence into the fabric of your home is one thing; turning it into the Millennium Falcon is another. People will want technologies that slot seamlessly and invisibly into their daily lives and domestic environment.
  • Ease of use. Early adopters of home automation systems have tended to be technologically savvy with a general interest in the latest developments. But for widespread adoption, the ‘chic’ factor has to outweigh the geek factor.
  • Scalability. No one wants to rip out and start again when it comes to technology in their homes – making it scalable and flexible is the key.


Visit Intel’s stand at Smart Home World, London, June 23-24 to find out more about the IoT in Smart Homes.

 

Rob Sheppard is IoT Product and Solutions Manager at Intel EMEA.

 

Keep up with him on Twitter (@sheppardi).

 

Check out his other posts on IT Peer Network, or continue the conversation in the comments below.

Read more >

Intel at the Symposia on VLSI Technology and Circuits, June 15 to 19, 2015, Kyoto, Japan

Rob Willoner, Intel’s strategic research manager in the Technology and Manufacturing Group, would like to share the following papers that Intel is presenting at the Symposia on VLSI Technology and Circuits this week in Kyoto, Japan. The papers illustrate Intel’s … Read more >

The post Intel at the Symposia on VLSI Technology and Circuits, June 15 to 19, 2015, Kyoto, Japan appeared first on Technology@Intel.

Read more >

From Exchange to Interoperability

When National Coordinator Karen DeSalvo said, at HIMSS15 in Chicago, that the nation needed “true interoperability, not just exchange,” I imagined the 45,000 or so attendees crying out in unison, “Just exchange?”

 

Measured only since George Bush made EMRs a priority in his 2004 State of the Union address, it has taken our country 11 difficult years to get to a point where 75-80 percent of U.S. hospitals are deploying electronic health records and exchanging a limited set of data points and documents. Especially for smaller hospitals, “just exchange” represents a herculean effort in terms of acquisition, deployment, training and implementation.

 

But adoption of EHRs, meaningful use of the technology, and peer-to-peer exchange of data were never defined as the endpoints of this revolution. They are the foundation on which the next chapter – interoperability — will grow.

 

I asked a former colleague of mine, Joyce Sensmeier, HIMSS Vice President, Informatics, how she distinguished exchange from interoperability.

 

“I think of exchange as getting data from one place to another, as sharing data. Ask a question, get an answer,” she said. “Interoperability implies a many-to-many relationship. It also includes the semantics – what do the data mean? It provides the perspective of context, and to me, it’s going towards integration.”

 

There are other definitions as well. One CIO I correspond with told me he relies on the IEEE definition, which is interesting because ONC uses it too: “the ability of two or more systems or components to exchange information and to use the information that has been exchanged.” And as I was writing this blog, John Halamka beat me to the punch on his blog with a post titled So what is interoperability anyway?

 

His answer: it’s got to be more than “the kind of summaries we’re exchanging today which are often lengthy, missing clinical narrative and hard to incorporate/reconcile with existing records.”

 

Sensmeier notes that interoperability, like exchange, is an evolutionary step on the path to a learning health system. I like that metaphor. As with biological evolution, healthcare IT adaptation is shaped by an ever-changing environment. We shouldn’t expect that every step creates a straight line of progress — some solutions will be better than others. The system as a whole learns, adopts the better practices, and moves forward.

 

(This, incidentally, is why Meaningful Use has proven unpopular among some providers. Although it emerged from excellent thinking in both the public and private sector, its implementation has taken the command-and-control approach that rewarded — or punished — providers for following processes rather than creating positive change.)

 

Moving from exchange to interoperability, DeSalvo said at HIMSS15, will require “standardized standards,” greater clarity on data security and privacy, and incentives for “interoperability and the appropriate use of health information.”

 

I’ve seen two recent reports that suggest the effort will be worthwhile, even if the payoff is not immediate. A study from Niam Yaraghi of the Brookings Institution found that after more than a decade of work, “we are on the verge of realizing returns on investments on health IT.” And analysts with Accenture report that healthcare systems saved $6 billion in avoided expense in 2014 thanks to the “increasing ubiquity of health IT.” Accenture expects that number to increase to $10 billion this year and $18 billion in 2016.

 

Sensmeier says she expects that reaching “true interoperability” will happen faster than “just exchange” did.

 

“It won’t happen overnight,” she warns, “but it won’t take as long, either.” With evolving agreement on standards, the infrastructure of HIEs and new payment models, the path to interoperability should be smoother than the one that got us to where we are today.

 

What questions do you have?

Read more >

The Skinny on NVM Express and ESXi


 

NVMe Drivers and SSD support in ESXi

 

VMware officially announced support for NVM Express (NVMe) with the release of an asynchronous driver designed for use in ESXi 5.5 in November of 2014. This driver enables support for PCI Express-based solid-state drives compatible with the NVM Express 1.0e specification, and is available as a standalone download from VMware (see download links below). With the release of vSphere 6 and ESXi 6.0 in March of 2015, VMware now also provides an inbox NVMe driver as part of the ESXi 6.0 installation media.

 

Intel has also developed an NVMe driver for ESXi.  VMware’s I/O Vendor Partner Program (IOVP) certified this driver in March of 2015 for use under both ESXi 5.5 and 6.0.  Intel’s goal with this driver is to provide our SSD customers an optimal experience, and maintaining our own NVMe driver under ESXi allows for better supportability along with access to features unique to Intel SSDs.  The Intel-developed NVMe driver only supports Intel NVMe devices, whereas the VMware version claims all NVMe devices (including Intel’s of course).  The installation package for the Intel NVMe driver contains the required vSphere Installation Bundle (VIB) and is binary compatible with both ESXi 5.5 and 6.0, thus the same download covers both ESXi versions.  The Intel driver installation bundle is available on intel.com and vmware.com (see download links below).
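As a hedged illustration only (the host name, credentials, and bundle path below are placeholders), a driver VIB like this is typically installed by copying the offline bundle to the host and running esxcli; the sketch drives that over SSH with the third-party paramiko library. Check the driver's release notes for the exact procedure and whether maintenance mode or a reboot is required.

```python
# Hypothetical sketch: install an NVMe driver VIB on an ESXi host over SSH.
# Host, credentials, and bundle path are placeholders, not real values.
import paramiko

HOST = "esxi-host.example.com"
BUNDLE = "/vmfs/volumes/datastore1/intel-nvme-driver-offline-bundle.zip"  # assumed path

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username="root", password="********")

# esxcli expects the full path to the offline bundle as seen by the host itself.
stdin, stdout, stderr = client.exec_command(
    f"esxcli software vib install -d {BUNDLE}"
)
print(stdout.read().decode())
print(stderr.read().decode())
client.close()
```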

 

Along with software support for NVMe in ESXi, VMware introduced a new class of I/O devices in the VMware Compatibility Guide (VCG).  The newly introduced I/O device type is aptly named “NVMe”.  While the VMware NVMe driver will claim all NVMe devices, it’s important to make sure the device itself has also passed IOVP certification. A listing of all NVMe SSDs that are officially supported in ESXi can be found here.  To find an NVMe driver to use with the SSD, simply click on the Model name and download links for an appropriate NVMe driver are displayed.

 

Download Links

NVMe Drivers for ESXi

| Driver Creator | ESXi 5.5 | ESXi 6.0 |
| --- | --- | --- |
| Intel | intel.com, vmware.com | intel.com, vmware.com |
| VMware | vmware.com | N/A (inbox with ESXi 6.0) |

 

Other Links of Interest

 

| Link | Details |
| --- | --- |
| All approved NVMe devices in ESXi | The VMware Compatibility Guide pre-filtered to show all approved NVMe I/O devices. |
| Approved Intel NVMe devices in ESXi | The VMware Compatibility Guide pre-filtered to show all approved Intel NVMe I/O devices. |
| Intel® SSD Data Center Family Overview | Provides access to more information on Intel’s NVMe PCIe SSDs. |
| nvmexpress.org | More information on what NVMe is, why you should consider using it, and news/upcoming events. |

 

In summary, if you are running ESXi 5.5 or above, chances are good that there is an NVMe driver and SSD combination that will work for you, allowing you to experience the performance benefits that NVMe SSDs offer.  That being said, it’s always best to cross-check against the VMware Compatibility Guide and ensure that the SSD you want to use is listed, and that you are using it with an approved driver version.

Read more >

Are You Smarter than a Data Centre?

I don’t know about you, but when I’m tired or jet lagged doing even the simplest thing feels like a chore. Basic tasks like plugging in my phone charger or making a cup of tea suddenly become a Herculean effort as my brain and limbs attempt to talk to each other. When they don’t quite manage it, clumsiness and frustration ensue.

 

Why am I sharing my co-ordination woes with you? It is relevant, I promise. My point is that the relationship between limbs and brain can be thought of as similar to that between data centre resources and the orchestration layer – the subject of today’s blog. Ideally, the orchestration layer should behave like your brain on a good day, keeping track of your environment and sending the right messages to your extremities – “walk around that pillar or you’ll hurt yourself”; “put the teabag in the mug before the boiling water goes in” – without you consciously thinking about it. It’s automatic and dynamic. When that channel of communication is interrupted, or too slow, that’s when you’re likely to get inefficient use of resources.

 

In the software-defined data centre, orchestration is what will transform your data centre management from a manual chore to a highly automated process. The idea is that it enables you to do more with less, helping to drive time, cost and labour out of your data centre while increasing agility in your journey to the hybrid cloud – the priorities I outlined in my last blog. It sounds like quite an engineering feat, but the process model for orchestration is actually relatively simple.

 

As we’ve seen, software-defined infrastructure’s primary focus is managing resources to ensure business-critical application SLAs are met. So, the application is the starting point for the orchestration layer process, which works as follows:

  • Watch. The orchestration layer continuously observes the applications and the infrastructure resources supporting them, gathering ongoing telemetry on whether SLAs are being met.
  • Decide. By analysing these ongoing observations, the orchestration layer can then draw conclusions about the causes of any sub-optimal performance. For example, is there a power outage somewhere that’s forcing data to be diverted away from the most efficient servers? Once these issues or bottlenecks have been identified, decisions can be made about how to overcome them.
  • Act. The orchestration layer can then make these changes quickly and automatically, for example by allocating additional compute resource to improve response times, or making more network bandwidth available during peaks in demand. Changes that could have taken weeks, or even months, for a human technician to get to can be reduced to minutes or seconds.

 

There’s one more important step though:

  • Learn. The orchestration layer automatically monitors the impact of any changes it makes in the software-defined infrastructure and uses these insights to improve future decision making.

 

This machine learning, or artificial intelligence (AI), may sound a little farfetched but it’s actually being used in a number of familiar environments today – whenever you’re offered a recommendation on Netflix, use Google voice search, or watch IBM’s Watson win at Jeopardy!, you’re experiencing machine learning in action!

 

In summary then, it’s a self-perpetuating cycle of Watch, Decide, Act, Learn; Watch, Decide, Act, Learn.
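A conceptual sketch of that cycle follows (not any particular orchestration product; the metric, thresholds, and actions are illustrative). Each pass watches a metric against its SLA, decides whether to act, acts, and then records whether the action helped so the next decision is better informed.

```python
# Conceptual sketch of the Watch -> Decide -> Act -> Learn orchestration loop.
# Metrics, thresholds, and actions are illustrative, not a real product's API.
import random
import time


class Orchestrator:
    def __init__(self, sla_response_ms=200):
        self.sla_response_ms = sla_response_ms
        self.history = []          # (action, helped?) pairs used for learning
        self.extra_capacity = 0

    def watch(self):
        # Stand-in for telemetry: observed application response time.
        return random.uniform(100, 400) - 20 * self.extra_capacity

    def decide(self, response_ms):
        if response_ms <= self.sla_response_ms:
            return None
        # Prefer the action that has helped most often in the past.
        wins = sum(1 for a, helped in self.history if a == "add_capacity" and helped)
        losses = sum(1 for a, helped in self.history if a == "add_capacity" and not helped)
        return "add_capacity" if wins >= losses else "rebalance"

    def act(self, action):
        if action == "add_capacity":
            self.extra_capacity += 1
        # "rebalance" is left as a no-op in this toy model.

    def learn(self, action, before_ms, after_ms):
        self.history.append((action, after_ms < before_ms))

    def run_once(self):
        before = self.watch()
        action = self.decide(before)
        if action:
            self.act(action)
            self.learn(action, before, self.watch())


orchestrator = Orchestrator()
for _ in range(10):
    orchestrator.run_once()
    time.sleep(0.1)
print("Actions taken:", len(orchestrator.history),
      "Extra capacity:", orchestrator.extra_capacity)
```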

 

Making Intelligent Connections

 

I must stress that at Intel we’re not in the business of providing the orchestration layer itself. Our primary role is to enable better orchestration by providing the telemetry and hardware hooks needed by the software to communicate with its data centre resources, like the neurons that carry information (in the form of electrical and chemical signals) to your brain from your hand.

 


    Figure 1. Intelligent Resource Orchestration

 

We’ve a long history of collaborating at a software engineering level with established names in this field, companies such as VMware and Microsoft, to enable them to take full advantage of these features.

 

We also collaborate with the open source community where we’re working in the OpenStack arena, contributing code to close feature gaps and help make it enterprise-ready.

 

It’s still early days, and many end-user organizations working to implement a fully automated orchestration capability across all data centre resources are still in the innovator phase of the adoption curve.

 

However, there is lots of experimentation going on around how better orchestration can help create a more productive and profitable data centre, and I’m sure we’ll be seeing some great progress being made in this sphere over the coming months. Key to the success of any orchestration initiative, however, is having the right telemetry in place, and this is what I’ll be looking at in my next blog.

 

Meanwhile, do let me know your thoughts on the potential of automatic and dynamic orchestration — I’d love to hear where you think you could use it to reduce costs whilst boosting agility.

 

You can find my first and second blogs on data centres here:     

Is Your Data Center Ready for the IoT Age?

Have You Got Your Blueprints Ready?

 

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

Read more >

Support for OpenStack Kilo and Latest Open vSwitch Boost VM Performance in New Open Network Platform v1.4

By Dana Nehama, Sr. Product Marketing Manager, Network Platforms Group (NPG), Intel

 

I’m very pleased to announce the availability of a new release of the Intel® Open Network Platform – our integrated, open source NFV and SDN infrastructure reference architecture.

 

The latest version (v1.4) now offers improved virtual machine platform and communications performance through the integration of OpenStack Kilo, DPDK, and Open vSwitch among other software ingredients.

 

ONP is the Intel® open source SDN/NFV reference software platform that integrates the main software components necessary for NFV, so that the Intel Network Builders ecosystem – or any NFV/SDN software developer – has access to a reference high-performance NFV infrastructure optimized for Intel architecture.

 

One of the goals of ONP is to improve the deployability of the software components (such as OpenStack or OpenDaylight) by integrating recent releases and, in that process, addressing feature gaps, fixing bugs, testing the software and contributing development work back to the open source community.

 

The other major goal is to deliver the highest performance software possible, and with v1.4 there is a significant performance improvement for VMs thanks to new features in OpenStack Kilo and Open vSwitch 2.3.90.

 

Kilo Brings Enhanced Platform Awareness

Advancements in Enhanced Platform Awareness (EPA) in OpenStack Kilo will have a significant impact on the scalability of NFV solutions and on the predictability and performance of virtual machines. EPA is composed of several technologies that expose hardware attributes to the NFV orchestration software to improve performance. For example, with CPU pinning, a VM process can be “pinned” to a particular CPU core.

 

The Non-Uniform Memory Access (NUMA) topology filter is a complementary capability that keeps memory resources close to the CPU core, resulting in lower jitter and latency. I/O-aware NUMA scheduling adds the ability to select the optimal socket based on the I/O device requirements. With all of those capabilities in place, a VNF can pin a high-performance process to a core, ensure that it is connected locally to the relevant I/O, and have priority access to the highest-performing memory. All of this leads to more predictable workload performance and improves providers’ ability to meet SLAs.
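As a rough illustration (the flavor name and values are placeholders), EPA features such as CPU pinning and a guest NUMA topology are typically requested through Nova flavor extra specs; the sketch below shows the kind of properties involved, which an operator would normally set with the openstack CLI rather than in Python.

```python
# Hypothetical sketch of Nova flavor extra specs that request EPA features
# for an NFV workload. The flavor name "nfv.large" and values are placeholders.
#
# Roughly equivalent CLI (run by an operator):
#   openstack flavor set nfv.large \
#       --property hw:cpu_policy=dedicated \
#       --property hw:numa_nodes=1

nfv_flavor_extra_specs = {
    # Pin each vCPU of the guest to a dedicated host core (CPU pinning).
    "hw:cpu_policy": "dedicated",
    # Request a single guest NUMA node so vCPUs and memory stay local,
    # which the NUMA topology filter uses when placing the instance.
    "hw:numa_nodes": "1",
}

for key, value in nfv_flavor_extra_specs.items():
    print(f"{key} = {value}")
```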

 

If you’d like more information on EPA, my colleague Adrian Hoban has written a blog post and whitepaper that offers a great exploration of the topic.

 

Improved Virtual Switching

Another enhancement is the integration of Open vSwitch 2.3.90 with Data Plane Development Kit (DPDK) release 1.8 libraries. The addition of DPDK’s packet acceleration technology to the user-space virtual switch improves VM-to-VM data performance. ONP release 1.4 also adds support for VFIO, which secures the user-space driver environment more effectively.

 

I would also like to take the opportunity to mention the significant milestone of the OPNFV Arno release, which represents an important advancement toward NFV acceleration and adoption. Intel ONP contributes innovation with partners into OPNFV and, in parallel, will continuously consume technologies delivered by OPNFV. For additional information on Arno, go to https://networkbuilders.intel.com/onp.

 

I am proud to announce that Intel ONP was awarded Most Innovative NFV Product Strategy (Vendor) at the Light Reading Leading Lights Awards 2015 on June 8, 2015. The award is given to the technology vendor that has devised the most innovative network functions virtualization (NFV) product strategy during the past year. For more information, please view the announcement here.

 

ONP 1.4 delivers innovation and significant new value to NFV software developers and I encourage all of you to check out this new reference release. The first step is to download our new v1.4 data sheet.

Read more >