ADVISOR DETAILS

RECENT BLOG POSTS

Next-Gen Shopping on Innovation Boulevard


I don’t know about you, but while I love being able to browse my favourite store’s latest range from the comfort of my sofa, the hands-on experience that I get from a visit to the store itself is also still very appealing. What’s great about today’s retail landscape is that we have the opportunity to do both. The way we try and buy items from our favourite brands is no longer dictated by the opening hours or stock levels in our local high street store.

 

While this is good news for the consumer, the battle is on for high street retailers. To entice as many shoppers as possible through their doors, retailers need to offer a totally unique shopping experience – something that will convince you and me to put down our tablets and head to the high street.

 

Personalized, anytime shopping on the streets of Antwerp

 

Digitopia, a digital retail solution provider in Belgium, is working with Intel to build devices and apps that retailers can use to create more compelling shopping experiences. By trialing different solutions in various retail environments on Antwerp’s most popular shopping street, Digitopia is helping retailers to define which technologies work best in each store scenario.

 

On Innovation Boulevard, as Digitopia has dubbed it, shoppers can turn their phone into a remote control to browse holidays on a large screen in the travel agent’s window. They can use an interactive fitting room in a fashion boutique to check for alternative colors and sizes of the outfits they are trying on. It’s even possible to order and pay for their cafe refreshments with a smartphone app rather than queuing up in the store. A large number of the solutions are powered by Intel technologies.

 

For shoppers, the retail experience is smoother and more personalized. Importantly, the technologies are also helping retailers to increase sales, offer new services and continue to interact with their customers when the shops are closed.

 

You can read more about the exciting retail experience that Digitopia has created in our new case study. My personal favorite is being able to book a holiday while walking between shops – what’s yours?


To continue this conversation, find me on LinkedIn or Twitter.

Read more >

5 Questions for Dr. Giselle Sholler, NMTRC

 

Giselle Sholler is the Chair of the Neuroblastoma and Medulloblastoma Translational Research Consortium (NMTRC) and the Director of the Hayworth Innovative Therapeutic Clinic at Helen DeVos Children’s Hospital. The NMTRC is a group of 15 pediatric hospitals across the U.S., plus the American University in Beirut, Lebanon, and Hospital La Timone in Marseilles, France. We sat down recently with Dr. Sholler to talk about the role of precision medicine in her work and how it impacts patients.


Intel: What are the challenges of pediatric oncology and how do you tackle those challenges?

 

Sholler: As a pediatric oncologist, one of the most challenging times is when we’re faced with a child who is not responding to standard therapy and we want to figure out how we can treat this patient. How can we bring hope to that family? A project that we are working on in collaboration with TGen, Dell and Intel has brought that hope to these families.

 

Intel: What is the program?

 

Sholler: When a child has an incurable pediatric cancer, we take a needle biopsy and send it to TGen, where the DNA and RNA sequencing occurs. When ready, that information comes back to the Consortium. Through a significant amount of analysis of the genomic information, we’re able to look at what drugs might target specific mutations or pathways. On a virtual tumor board, we have 15 hospitals across the U.S. and now two international hospitals in Lebanon and France that come together and discuss the patient’s case with the bioinformatics team from TGen. Everyone is trying to understand that patient and, with the help of pharmacists, create an individualized treatment plan so that the patient has a therapy available that might result in a response for their tumor.

 

Intel: Why is precision medicine important?

 

Sholler: Precision medicine is about using the genomic data from a patient’s tumor to identify not only which drugs will work, but also which ones may not work on that patient’s specific cancer. With precision medicine, we can identify the right treatment for a patient. We’re not saying chemotherapy is bad, but for many of our patients chemotherapy attacks every rapidly dividing cell and leaves our children with a lot of long-term side effects. My hope for the future is that as we target patients more specifically with the correct medications, we can alleviate some of the side effects that we’re seeing in our patients. Half our children with neuroblastoma have hearing loss and need hearing aids for the rest of their lives. They have heart conditions, kidney conditions, and liver conditions that we’d like to see if we can avoid in the future.

 

Intel: How does the collaboration work to speed the process?

 

Sholler: The collaboration with Dell and Intel has been critical to making this entire project possible. The grant from Dell to fund this entire program over the last four years has been unparalleled in pediatric cancer. The computing power has also been vital to the success. Three years ago we were doing only RNA expression profiling and it took two months; now, we’re doing complete RNA sequencing and DNA exomes and it takes less than two weeks to get the answers for our patients. A few years ago, data transfer and networking used to entail shipping hard drives. Now, we can send a tumor sample from Lebanon to TGen, complete the sequencing in a few days, and have a report for the tumor board a few days after that. It’s just been amazing to see the speed and accuracy improve for profiling.

 

Intel: Anything else?

 

Sholler: Another very critical piece that Dell has helped provide is the physician portal. Physicians are able to work together across the country, and across the world, and have access to patient records. The database now has grown and grown. When we do see patients, we can also pull up previous patients with similar sequencing or similar profiles, or treated with similar drugs, and see what was used in treatment. And how did they do? What was the outcome? We’re learning more and more with every patient and it doesn’t matter where we live anymore. Everything’s virtual online. It’s just been incredible.

Read more >

10 Mobile BI Strategy Questions: Business Processes


When developing a mobile business intelligence (BI) strategy, you can’t ignore the role that business processes may play. In many cases, the introduction of BI content into the portfolio of mobile BI assets provides opportunities not only to eliminate gaps in your business operations, but also to improve existing processes.

 

Often, the impact is seen in two main ways. First, the current business processes may require you to change your mobile BI approach. Second, the mobile BI solution may highlight gaps that may require a redesign of your business processes to improve your mobile BI assets and your business operations.

Business Processes Will Influence Your Mobile BI Design

 

Existing business processes will have a direct impact on the design of your mobile BI solution. I’m often amazed to discover that the lack of consideration given to identifying business processes stems not from a lack of insight but from wrong assumptions that are made during the requirements and design phases.

 

It’s true that the business processes may not be impacted if the scope of your mobile BI engagement is limited to mobilizing an existing BI asset (like a report or dashboard) without making any changes to the original end-product, including all underlying logic. But in many cases, the opposite is true—the mobile BI end product may be the driver for change, including the update of the existing BI asset as a result of a mobile BI design.

 

Mobile solutions may require different assumptions in many aspects of their design, ranging from source data updates to report layout and logic. Advanced capabilities, such as a write-back option, will further complicate things because integration with systems outside the BI platform will require closer scrutiny and much closer alignment with business processes.

 

Moreover, constraints that surround source data will have a direct influence on the mobile BI design. For example, if you’re dependent on feeds from external data sources, you may need to consider an additional buffer to account for possible delays or errors in the data feed. Or perhaps you have a new application that was just built to collect manually entered data from field operations. If this new application is introduced as part of your mobile BI solution, the process that governs this data collection will have a direct impact on your design, because the data becomes immediately available. Without mobile BI, that process would have mattered far less, since the application would have remained an operational tool with a limited audience.

 

Mobile BI Solution May Drive Improvements in Your Business Operations

 

As part of designing your strategy or developing your mobile BI solution, you may discover either gaps or areas for improvement. Don’t worry. This is a known side effect, and it’s often considered a welcome gift because it gives you a chance to kill two birds with one stone: improve your business operations and increase the value of your mobile BI solution. However, it’s critical here to ensure that your team stays focused on the end goal of delivering on schedule (unless the gaps turn out to be major showstoppers).

 

Typical examples are found in the areas of data quality and business rules. The design of a mobile BI asset—especially if it’s new—may highlight new or known data-quality issues. The visibility factor may be different with mobile: adoption or visibility by executives may force additional scrutiny. Moreover, adoption rates (the ratio of actual users to total users of the mobile solution) may be higher because of mobile’s availability and convenience. As a result, mobile users may be less tolerant of missing quality assurance (QA) steps.
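
As a simple illustration of the adoption-rate metric mentioned above, here is a minimal sketch that computes the ratio and flags high-visibility assets. The user counts and the 60 percent threshold are invented examples, not figures from this post.

    # Minimal sketch of the adoption-rate metric described above.
    # The counts and the review threshold are hypothetical examples.

    def adoption_rate(active_users: int, licensed_users: int) -> float:
        """Ratio of users actually opening the mobile BI asset to all licensed users."""
        if licensed_users == 0:
            return 0.0
        return active_users / licensed_users

    rate = adoption_rate(active_users=430, licensed_users=600)
    print(f"Mobile BI adoption rate: {rate:.0%}")  # -> 72%

    # Higher adoption usually brings more scrutiny, so flag assets that may need extra QA.
    if rate > 0.6:
        print("High-visibility asset: budget extra data-quality checks.")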

 

Business rules offer another example due to the same visibility factor. A proposed change in a business rule or process, which previously failed to get attention due to lack of support, may now have more backers when it’s associated with a mobile BI solution. Strong executive sponsorship may influence the outcome.

 

Bottom Line: Do Not Ignore Business Processes

 

It’s easy to make the wrong assumptions when it comes to business processes. It happens not just in mobile BI but in other technology projects. You cannot take existing processes for granted. What may have worked before may not work for mobile BI. Let your business processes complement your overall mobile BI strategy, and let your mobile BI engagement become a conduit for opportunities to improve your operational efficiencies.

 

Not only will these opportunities improve your business operations, but they will lead to increased adoption by increasing the trust your customers/users have in your mobile BI content.

 

What do you see as the biggest challenge when it comes to business processes in your mobile BI strategy?

 

Stay tuned for my next blog in the Mobile BI Strategy series.

 

Connect with me on Twitter at @KaanTurnali and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.

Read more >

What is Our Differentiating Definition of Product?

Another Inflection Point


Of all the market transitions hitting the developed world retail industry these days, perhaps the one that will require the greatest industry change – and have the most defining competitive impact – will be the redefinition of product.

 

For a handful of industry leaders, it’s a key component of today’s competitive strategy.

 

For most others – consumed, as they are, by omni-channel integration and digital strategies and mountains of data – it seems to be a bridge too far.

 

At the heart of this issue is an all-too-familiar reality: physical products – at nearly all price points and in nearly all segments – have been commoditized.

 

It’s happened for several reasons. Private label goods offer equal performance at a lower price. Global sourcing enables the immediate copying and delivery (at volume) of hot trends. The internet brings a searing transparency of price and specifications. The quality gaps between good, better and best have been slimmed, even erased.

 

And whether or not multiple retailers have the same brand and SKU, many have the same category . . . and dozens have the same look.

 

The results of this commoditization are seen in average selling prices. In regular-price sell-through percentages. In the depth of markdowns it takes to clear.

 


A retailer can no longer merchandise his or her way through today’s competitive battles.

 

That is, with increasingly commoditized physical SKUs.

 

But there is an alternative: the rise of services in retail and the services-led redefinition of product.

 

As we look ahead, the operative definition of product will be a curated assortment of goods and services.

 

Using data-driven unique insights into customer behavior, merchants will create value through:

 

  • SKU delivery and subscription services – of everything that’s needed regularly, from milk to diapers to the moss control and bark chips I order every March;
  • SKU usage education – seminars, lessons, even tours on topics ranging from fashion advice to consumer electronics to food;
  • Health and family wellness services – and not only for pharmacies, but for grocery and mass merchandising;
  • So-called “federated” services with other brands – not only your winter-in-Florida outfit, but your flight, resort hotel and starred-restaurant reservations;
  • Home management services – ranging from care to repair.

 

Some services will be a means of locking in user loyalty. Others will create new revenue streams.

 

And it will be through this value-added approach to retailing that brands will survive and ultimately thrive.

 

It’s no surprise that Amazon has already figured this out. Case in point: Amazon Prime. This is a stunning success.

 

In 2013, Prime’s renewal rate was a remarkable 82%.1 In the fourth quarter of 2014, Prime had 40 million US members. A report released in January by Consumer Intelligence Research Partners found that Prime members spend, on average, $1,500 per year on Amazon, compared to $625 for non-members. Prime members also shop 50% more frequently than non-members.2

 

How does Amazon Prime bind shoppers to its brand so effectively? At the heart are its services. The best example I know is the automatic delivery of diapers in the right size as a baby grows. Think of it. No more late-night runs to the store.


 

And read that again: no late-night runs to the store.


Brilliant.

 

OK, so what does this mean to the technology community? Why should the digerati care?

 

First of all, this service creation thing is not going to be easy. Shaping the offer is not going to be easy. Monetizing is not going to be easy.

 

It’s going to require deep, unique, tested insight into shopper behavior. Into your brand’s cohorts and personas. Into finding the leading indicators of need and demand.

 

At the foundation of this is Big Data. And moving well beyond Big Data. Into the data analysis worlds inhabited by the leaders.

 

Second of all, delivering the content that will enable these services will not be easy. This is going to be about enterprise architecture and data architecture and APIs that open data to the outside world and APIs that are accessed to bring the outside world inside.

 

And third of all, the staffing, training and delivery of services will not be easy. Those who deliver services – and this will be a people business – will be on the go. Not tethered to an aisle or a department or a check stand.

 

The business processes of delivery will no doubt need a highly advanced level of mobile access to information and ease of use.

 

The redefinition of product? Quite honestly, it’s a redefinition of retail.

 

Get ready. It’s coming.

 

 

 

 

1 Forbes, 2014, Kantar Research 2014.


2 Consumer Intelligence Research Partners, January 2015.


*Other Names and brands may be claimed as the property of others.

Read more >

Tech in the Real World: My Dentist’s Portable All-in-One


Back in 1995, when I first started going to Wood Family Dentistry for dental care, they tracked patients with paper charts, took film x-rays, and documented exams and treatments manually. But one thing I’ve noticed in the 20 years that I’ve been Dr. Wood’s patient is his intense curiosity and desire to use technology to continually improve the level of care he provides at his Folsom, California-based practice.


Fast-forward to today, and their patient workflows are completely digital, they can instantly view high-definition digital x-rays, and there’s not a paper record in sight. Keith Wood, DDS and his staff haven’t stopped with those innovations, however. With the help of a portable All-in-One PC, they’ve streamlined and advanced patient care even further.

 

Convenience and comfort in the dental chair

 

In the exam room, the portable All-in-One’s large, mobile touch screen eliminates the need for patients to crane their necks to see images on the wall-mounted monitor. Now, Dr. Wood shows patients highly detailed digital x-rays and other images in the comfort of the exam chair.

 

“With the portable All-in-One, I put it right in their lap and touch, zoom, and really bring things to life,” he explained.

 

Dr. Wood also told me how having a single device that they can use anywhere in the office provides them with a tremendous convenience boost. Not only does it make it easy to access charts and information anywhere in the building, but instead of needing to make room for parents when their kids are in the exam room, the dental team can now bring the portable All-in-One to the waiting room and more conveniently discuss treatment plans.

 


Performance that proves itself

 

Dr. Wood was initially skeptical that a portable device could handle the large images and demanding applications that they use, but the performance and responsiveness of their Dell XPS 18 with Intel Core i7 processor has really impressed him and his staff. It gives them rapid access to patient files, the ability to run multiple dental applications at full speed, and the flexibility to input information with touch or keyboard and mouse.

 

“It’s super-easy to use,” Registered Dental Assistant Carry Ann Countryman reported. “You can get from chart, to x-rays, to documents super-fast.”

 

Foundation for the future

 

In addition, their portable All-in-One gives them a solid technology foundation for enabling other new technologies in their practice. They are currently exploring imaging wands that connect to the device to provide fast, 3-D images for dental molds. And they’re excited about the possibility of adding hands-free gesture controls powered by Intel RealSense Technology sometime in the near future.


Curious how portable All-in-Ones or other Intel-based devices could change how you work? Visit: www.intel.com/businessdesktops

Read more >

Cloud For All – Reaching New Heights with Mirantis

Last month, Diane Bryant announced the creation of the Cloud for All Initiative, an effort to drive the creation of tens of thousands of new clouds across enterprise and provider data centers and deliver the efficiency and agility of hyperscale to the masses. This initiative took another major step forward today with the announcement of an investment and technology collaboration with Mirantis.  This collaboration extends Intel’s existing engagement with Mirantis with a single goal in mind: delivery of OpenStack fully optimized for the enterprise to spur broad adoption.

 

We hear a lot about OpenStack being ready for the enterprise, and in many cases OpenStack has provided incredible value to clouds running in enterprise data centers today. However, when talking to the IT managers who have led these deployment efforts, a few key topics arise: it’s too complex, its features don’t easily support traditional enterprise applications, and it took some time to optimize for deployment.  While IT organizations have benefitted from the added effort of deployment, the industry can do better.  This is why Intel is working with Mirantis to optimize OpenStack features, and while this work extends from network infrastructure optimization to storage tuning and beyond, a few common themes run through it.

 


The first focus is on increasing stack resiliency for traditional enterprise application orchestration.  Why is this important?  While enterprises have begun to deploy cloud native applications within their environments, business is still very much run on what we call “traditional” applications, those that were written without the notion that some day they would exist in a cloud.  These traditional applications require an increased level of reliability, uptime during rolling software upgrades and maintenance, and control of the underlying infrastructure across compute, storage and network.

 

The second focus is on increasing stack performance through full optimization for Intel Architecture. Working closely with Mirantis will ensure that OpenStack is fully tuned to take advantage of platform telemetry and platform technologies such as Intel VT and Cloud Integrity Technology to deliver improved performance and security capabilities.

 

The final focus is on improving full data center resource pool optimization with improvements targeted specifically at software defined storage and network resource pool integration. We’ll work to ensure that applications have full control of all the resources required while ensuring efficient resource utilization.

 

The fruits of the collaboration will be integrated into Mirantis’ distribution as well as offered as upstream contributions for the benefit of the entire community.  We also expect to utilize the OpenStack Innovation Center recently announced by Intel and Rackspace to test these features at scale to ensure that data centers of any size can benefit from this work.  Our ultimate goal is delivery of a choice of optimized solutions to the marketplace for use by enterprise and providers, and you can expect frequent updates on the progress from the Intel team as we move forward with this collaboration.

Read more >

New Intel Network Builders Fast Track Igniting Network Transformation with the Intel ONP Reference Architecture

Today at IDF 2015, Sandra Rivera, Vice President and GM of Intel’s Network Platforms Group, disclosed the Intel® Network Builders Fast Track program in her joint keynote “5G: Innovation from Client to Cloud.”  The mission of the program is to accelerate and broaden the availability of proven commercial solutions through a combination of means such as equity investments, blueprint publications, performance optimizations, and multi-party interoperability testing via third-party labs.

 

 

This program was specifically designed to help address many of the biggest challenges that the industry faces today with one goal in mind – accelerate the network transformation to software defined networking (SDN) and network functions virtualization (NFV).

 

Thanks to the new Intel Network Builders Fast Track, Intel® Open Network Platform (ONP) is poised to have an even bigger impact in how we collaborate with end-users and supply chain partners to deliver proven SDN and NFV solutions together.

 

Intel ONP is a reference architecture that combines leading open source software and standards ingredients in a quarterly release that developers can use to create optimized commercial solutions for SDN and NFV workloads and use cases.

 

The Intel Network Builders Fast Track combines market development activities, technical enabling, and equity investments to accelerate time to market (TTM) for Intel Network Builders partners; Intel ONP amplifies this with a reference architecture. With Intel ONP, partners can get to market more quickly with solutions based on open, industry-leading building blocks that are optimized for performance on Intel Xeon® processor-based servers.

 

Intel ONP Release 1.4 includes, for example, the following software (a sample CPU-pinning flavor configuration follows the list):

 

  • OpenStack* Kilo 2015.1 release with the following key feature enhancements:
    • Enhanced Platform Awareness (EPA) capabilities
    • Improved CPU pinning to virtual machines
    • I/O based Non-Uniform Memory Architecture (NUMA) aware scheduling
  • OpenDaylight* Helium-SR3
  • Open vSwitch* 2.3.90
  • Data Plane Development Kit release 1.8
  • Fedora* 21 release
  • Real-Time Linux* Kernel, patches release 3.14.36-rt34
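
To make the Enhanced Platform Awareness, CPU pinning, and NUMA-aware scheduling items above more concrete, here is a hedged sketch of how an operator might tag an OpenStack Nova flavor so that guests receive dedicated, NUMA-local CPUs. The flavor name, sizes, and credentials are invented for illustration, and client call signatures vary between OpenStack releases, so treat this as a sketch rather than a verified recipe.

    # Hedged sketch: requesting CPU pinning and single-NUMA-node placement for a
    # Nova flavor, in the spirit of the Kilo-era Enhanced Platform Awareness features.
    # Flavor name, sizes, and credentials are illustrative only.
    from novaclient import client

    nova = client.Client("2", "admin", "secret", "admin",
                         auth_url="http://controller:5000/v2.0")

    # Create a flavor for an NFV-style workload (values are examples).
    flavor = nova.flavors.create(name="nfv.pinned", ram=8192, vcpus=4, disk=40)

    # Extra specs the scheduler and compute driver use for pinning and NUMA placement.
    flavor.set_keys({
        "hw:cpu_policy": "dedicated",  # pin each vCPU to a dedicated host pCPU
        "hw:numa_nodes": "1",          # keep vCPUs and memory on one NUMA node
    })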

 

We’ll be releasing ONP 1.5 in mid-September. However, there’s even more exciting news just beyond release 1.5.

 

Strategically aligned with OPNFV for Telecom

 

As previously announced this week at IDF, the Intel ONP 2.0 reference architecture scheduled for early next year will adopt and be fully aligned with the OPNFV Arno software components released in June this year.  With well over 50 members, OPNFV is an industry-leading open source community committed to collaborating on a carrier-grade, integrated, open source platform to accelerate the introduction of new NFV solutions.  Intel is a platinum member of OPNFV dedicated to partnering within the community to solve real challenges in key barriers to adoption such as packet processing performance, service function chaining, service assurance, security, and high availability, to name just a few.  Intel ONP 2.0 will also deliver support for new products such as the Intel® Xeon® Processor D, our latest SoC, as well as showcase new workloads such as Gi-LAN.  This marks a major milestone for Intel to align ONP with OPNFV architecturally and to contribute to the OPNFV program on a whole new level.

 

The impact of the Network Builders Fast Track will be significant. The combination of the Intel Network Builders Fast Track and the Intel ONP reference architecture will mean even faster time to market, a broader range of industry interoperability, and market-leading commercial solutions to fuel SDN and NFV growth in the marketplace.

 

Whether you are a service provider or enterprise looking to deploy a new SDN solution, or a partner in the supply chain developing the next generation of NFV solutions, I encourage you to join us on this journey with both the Intel Network Builders Fast Track and Intel ONP as we transform the network together.

Read more >

Healthcare Breaches from Loss or Theft of Mobile Devices or Media

The Health and Human Services Breaches Affecting 500 or More Individuals website shows that there were 97 breaches of this type involving 500 or more patients in 2014, and 46 breaches of this type so far in 2015. These breaches often occur when there is a sequence of failures. An example is shown in the graphic below, where the first failure is a lack of effective security awareness training for healthcare workers.

 

The mobile device being used either lacks encryption, or the employee keeps the password on or near the device, for example on a sticky note on the laptop screen, which shockingly is not uncommon. The employee then leaves the device vulnerable, whether on the back seat of a car, unsecured on a desk, in a coffee shop, or in some other location exposed to loss or theft. The result is the loss or theft of a mobile device containing sensitive data in the form of electronic health records, which can ultimately lead to a breach.

Infographic: the sequence of failures leading to a breach, and the healthcare breach security maturity model

 

The HIPAA Breach Notification Rule requires notification of HHS, patients, and the media for HIPAA Covered Entities and Business Associates operating in the US. The vast majority of US states now also enforce state-level security breach notification laws, which also cover sensitive healthcare information. If the number of records compromised is 500 or more, this can lead to a new entry in the HHS “Wall of Shame”. The Ponemon 2015 Cost of a Data Breach Study reports that the average per-patient cost of a data breach was $398, the highest across all industries. Based on the number of patient records compromised, this can easily result in a total average business impact of $6.5 million per healthcare organization, and an abnormal churn rate of 6 percent. Clearly, this staggering cost makes it imperative for all healthcare organizations and business associates to take a proactive approach to securing themselves.
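
To put those Ponemon figures in perspective, the short sketch below multiplies the reported $398 per-record cost by a hypothetical breach size; the 16,500-record count is an invented example chosen only to show how quickly the total approaches the $6.5 million average cited above.

    # Back-of-the-envelope breach impact using the per-record cost quoted above.
    # The record count is a hypothetical example, not data from the report.
    COST_PER_RECORD = 398          # Ponemon 2015 average per healthcare record (USD)

    def breach_impact(records_compromised: int) -> int:
        """Estimated direct business impact of a breach, in US dollars."""
        return records_compromised * COST_PER_RECORD

    print(f"${breach_impact(16_500):,}")   # ~$6.6 million for a 16,500-record breach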

 

This has propelled breaches to a top concern across all healthcare organizations, often even a higher priority than regulatory compliance, which is seen as a minimal requirement but not sufficient to adequately reduce risk of breaches.

 

The above infographic presents a healthcare breach maturity model. As such, it is focused on healthcare and on breach risks. A holistic approach is required for effective risk mitigation, including administrative, physical, and technical safeguards. This maturity model is focused on technical safeguards for healthcare breaches. Gray blocks are applicable to other types of healthcare breaches, but not so much to breaches resulting from loss or theft of a mobile device or media. We will discuss those other types of breaches more in future blogs. Here we focus on the colored capability blocks of the security model, representing safeguards that help mitigate the risk of breach from loss or theft of mobile devices or media.

 

A baseline level of technical safeguards for basic mitigation of healthcare breaches from loss or theft of mobile devices requires:

 

  • Endpoint Device Encryption to protect the confidentiality of sensitive data
  • Mobile Device Management, to provide a secure managed container for healthcare apps and sensitive data
  • At least single-factor “what you know” (username and password) access control, which is usually provided at both the OS and enterprise application levels

 

An enhanced level of technical safeguards for further improved mitigation of risk of this type of healthcare breach requires addition of:

  • Anti-Theft enables you to remotely locate, lock or wipe a device in the event of loss or theft
  • Client SSD (Solid State Drive) with Encryption automatically encrypts all files stored on the client device to protect their confidentiality
  • MFA (Multi-Factor Authentication) with Timeout strengthens the authentication or login with the device, and automatically times out and locks the device after some period of inactivity
  • Secure Remote Administration enables system administrators to remotely access the device to diagnose and remediate issues, and can be used to keep the device secure and healthy
  • Policy Based File Encryption can automatically encrypt files on a mobile device based on their type and contents, as well as the policy of the healthcare organization, in order to protect confidentiality
  • Server DB (Database) Backup Encryption encrypts files on the server, including databases and backups. Although loss or theft of servers and backups is rarer than loss or theft of a mobile device, when it does occur it can have a much greater business impact because far more data and patient records are stored on the server

 

An advanced level of security for further mitigation of risk of this type of breach adds:

  • MFA with Walk-Away Lock further reduces the possibility of a hijacked session by detecting when the authenticated user has left the device and automatically locking it
  • Server SSD with Encryption automatically encrypts files stored on the server to protect their confidentiality in the event of loss or theft of the server
  • Digital Forensics enables the healthcare organization to rapidly determine whether a lost or stolen device was accessed and, if so, what specific sensitive data was accessed. This can be important in determining whether a breach actually occurred and, if so, the specific patients involved. Because the business impact of a breach is proportional to the number of patient records compromised, forensics can be an important strategy for avoiding or minimizing that impact.

 

The reality is that most healthcare organizations don’t lack ideas for what security they could add. However, budget and resources are always finite, and security is complex. The maturity model above presents a way to address the top concern of breaches from loss or theft of mobile devices or media in three increments. Using this method, an organization may choose to implement the baseline level of security in year one, add the enhanced level in year two, and complete the picture by adding the advanced level in year three.

 

What questions do you have?

Read more >

Bringing Closed Loop Automation to Healthcare

One of the topics I hear frequently from the health IT community is about barriers to innovation. From my perspective, closed loop automation is a huge issue that we face and will have to deal with. We clearly allow closed loop automation in other parts of our lives, yet somehow we have this reverence and reluctance to do it in healthcare. Why?

 

Everyone I have ever run across in the healthcare industry—from my previous role as a doctor to the role in technology—is dedicated to goodness, kindness, and supporting their patients. Yet the process is so complicated we inadvertently, systematically hurt people over and over again. The only way to cure this is to automate the automatable.

 

And just what is automatable? It’s a moving target, but here’s a start:

 

  • Respirator settings: We’ve talked about very simple things like automating respirator settings. Why should I as a doctor, when I already have a target output in mind, be the one monitoring the physiology of a stable patient? Algorithms, built on experience, could do this a whole lot better than a junior doctor. I want the expertise of the most senior doctor built into the algorithm, teaching the respirator to be as smart as possible and then to learn from each patient’s individual physiologic feedback how to maintain the target parameter. (A minimal, non-clinical feedback-loop sketch follows this list.)

 

  • IV pumps: We could do the same with IV pumps. The pumps would have Ethernet or wireless connections that talk to the electronic medical record, which in turn talks to lab data. Why not have the pump start to deliver a drug like heparin? In this scenario, a nurse can’t make a mistake and a doctor can’t inadvertently write the wrong order. By the 80/20 rule, we’ll default to the average most of the time anyway. Let machines help us where they can.
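
To illustrate the closed-loop idea behind the respirator example, here is a deliberately simplified, non-clinical sketch of a proportional feedback loop that nudges a device setting toward a target parameter. Every value, name, and gain below is invented for illustration; real medical devices require validated control algorithms and safety engineering far beyond this.

    # Minimal, non-clinical sketch of a closed feedback loop: read a parameter,
    # compare it to a target, and nudge a device setting proportionally.
    # All values are invented for illustration only.

    TARGET = 94.0                    # desired parameter value (illustrative)
    GAIN = 0.5                       # how strongly the setting responds to error
    SETTING_LIMITS = (21.0, 100.0)   # clamp the setting to a safe range

    def next_setting(current_setting: float, measured_value: float) -> float:
        """Proportional adjustment toward the target, clamped to safe limits."""
        error = TARGET - measured_value
        proposed = current_setting + GAIN * error
        low, high = SETTING_LIMITS
        return max(low, min(high, proposed))

    setting = 30.0
    for reading in (90.0, 92.0, 93.5, 94.2):   # simulated sensor readings
        setting = next_setting(setting, reading)
        print(f"reading={reading:.1f}  new setting={setting:.1f}")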

 

The benefits of closed loop automation are many, but freeing doctors and nurses from mundane tasks that are repeatable would be a game changer. That’s one of the biggest alterations we can make towards improving the delivery of care worldwide.

 

Maybe it’s a big transition, but we need to trust the machines. They can do a really good job at certain things. I’m not asking the machines to think for us, but where things follow well-developed patterns, allowing that process to occur makes sense. Naturally, there will be resistance from those who see automation as a threat to job security. It has happened in other industries where automation replaces human activity. That’s to be expected.

 

But at the end of the day, a robot can paint a car better than a human can. A robot can be better at welding. There are things that closed loop automation can do better in healthcare and we need to give it a try.

 

What do you think? How would closed loop automation be viewed in your facility?

Read more >

Data is the New Currency of the Digital Service Economy: My 5 Takeaways from Diane Bryant & Doug Davis IDF Mega Session

City-wide traffic visualization. Global shipping data. Airplane traffic patterns. Worldwide Facebook* connections. A stunning video highlighting the current deluge of data as both the world’s most abundant and most underutilized asset kicked off Doug Davis (SVP and GM, Internet of Things Group) and Diane Bryant’s (SVP and GM, Data Center Group) mega session on IoT and Big Data Insights at IDF. They spent their session time highlighting how vital it is that we enable the easy extraction of information from data, as that will allow for disruption across a variety of industries including transportation, energy, retail, agriculture, and healthcare.

 

Takeaway #1: Data doesn’t just disrupt the digital world

 

Even industries – like agriculture – that have been around for thousands of years are ripe for cutting-edge technology transformation. Jesse Vollmar, the Co-Founder and CEO of FarmLogs, joined Diane and Doug to talk about using sensor networks and agricultural robots to make it easier for farmers to make land more productive. Sensors are capturing massive amounts of data on everything from fertilization to pesticides to weed control, helping farmers make better decisions about their crops.

 

Jesse Vollmar from FarmLogs

 

Takeaway #2: The edge is full of new innovation opportunity. Even Beyoncé is in play.

 

Edge analytics may seem daunting to traditional enterprises with little experience in BI. To show ease of implementation, Doug brought out a team of Intel interns who were able to program industrial robots in three weeks to pick up gesture control via Intel® RealSense™ technology. The robots danced to popular tunes, while an on-stage intern controlled their movements. Nothing like hearing a little “Single Ladies” at IDF. To help get started, the Intel® IoT Developer Program has expanded to include commercial solutions, enabling a fast, flexible and scalable path to IoT edge analytics.

 

Intel intern and a gesture-controlled robot

 

So what do we need to develop in IoT to see an impact across a full range of industries? We need more sensors and more robots that are connected to each other and to the cloud. Think about what we could accomplish if a robot were connected to a cloud of Intel® Xeon® processors acting as its brain. The goal is to enable robots that are smart and connected, that gather information about their surroundings, and that have access to databases as well as predictive analytics, all resulting in fluid, natural interaction with the world. To get to this future vision, we need increased computing power, better data analytics, and more security.

 

In a world where the value lies in extracting information from data, the data center becomes the brains behind IoT devices. According to Diane, the number one barrier to enterprise data analytics is making sense of the data. Solutions need to be usable by existing IT talent, allow for rapid customization, and enable an accelerated pace of innovation.

 

Takeaway #3: You may have a mountain of data, but you need to extract the gold through analytics

 

Diane brought out Dennis Weng from JD.com to discuss how the company used Streaming SQL on an Intel® Xeon® processor-based platform to develop streaming analytics for customers based on browsing and purchase history. They’re handling 100 million customers and 4 million categories of products. The company reduced its TCO, and development now takes hours instead of weeks.

 

According to Owen Zhang, the top-ranked data scientist on Kaggle*, the ideal analytics platform will feature easy customization with access to different kinds of data, have an intuitive interface, and run at scale. Intel is committed to reaching that goal – Diane announced the release of Discovery Peak, an open-source, standards-based platform that is easy to use and highly customizable.

 

Owen Zhang, a data scientist super hero

 

Takeaway #4: Analytics isn’t just about software. Hardware innovation is critical

 

Another revolutionary innovation supporting in-memory database computing is Intel® 3D XPoint™ technology. First coming to SSDs in 2016, this new class of memory will also make its way to a future Intel® Xeon® processor based platform in the form of DIMMs. Representing the first time non-volatile memory will be used in main memory, 3D XPoint technology will offer a 4x increase in memory capacity (up to 6TB of data on a two-socket system) and is significantly lower in cost per GB relative to DRAM.

A giant Intel® 3D XPoint™ technology grid in the showcase

 

Takeaway #5: Sometimes technology has the promise to change the world.

 

And finally, Eric Dishman (Intel Fellow and GM of Health & Life Sciences) and Dr. Brian Druker from Oregon Health and Science University joined Diane and Doug for a deep dive into the future of analytics and healthcare. Governments around the world are working toward improving the cost of, quality of, and access to healthcare for all. The goal is precision medicine – distributed and personalized care for each individual, or “All in a Day” medicine by 2020. We’ve been working toward that goal with OHSU and other organizations for a number of years and just announced another large step forward.

 

Dr. Brian Druker from Oregon Health and Science University

 

The Collaborative Cancer Cloud is a precision medicine analytics platform that allows institutions to securely share patient genomic, imaging, and clinical data for potentially lifesaving discoveries. It will enable large amounts of data from sites all around the world to be analyzed in a distributed way, while preserving the privacy and security of that patient data at each site.

 

The data analytics opportunities across markets and industries are endless. What will you take away from your data?

Read more >

SDI Paves the Way for Analytics Workloads

In a series of earlier posts, we took a trip down the road to software-defined infrastructure (SDI). Now that we have established an understanding of SDI and where it is today, it’s a good time to talk about the workloads that will run on the SDI foundation. This is where SDI demonstrates its true value.

 

Much of this post assumes that your code is developed to be cloud-aware (and that you understand what changes that requires). Cloud-aware apps know what they need to do to fully leverage the automation and orchestration capabilities of an SDI platform. They are written to expand and contract automatically and to maintain optimal levels of performance, availability, and efficiency. (If you want some additional discussion around cloud-aware, just let me know. It’s another topic that’s close to my heart.)

 

With cloud awareness taken care of, one key workload targeting the SDI landing zone is business analytics, which is getting a lot of press today as it rises in importance to the enterprise. Analytics is the vehicle for turning mountains of raw data into meaningful business insights. It takes you from transactions to trends, from customer complaints to sentiment analysis, and from millions of rows of log data to hackers’ intent.

 

Analytics, of course, is not new. Virtually all IT shops have leveraged some form of analytics for years, from simple reporting presented in Excel spreadsheets to more complex data analysis and visualization. What is new is a whole set of technologies that allow for doing things differently, using new data and merging these capabilities. For example, we now have tools and environments, such as Hadoop, that make it possible to bring together structured and unstructured data in an automated manner, something that used to be very difficult to do. Over the next few blogs, I will talk about how analytics is changing and how companies might progress through the analytics world in a stepwise manner. For now, let’s begin with the current state of analytics.

 

Today, most organizations have a business intelligence environment. Typically, this is a very reactive and very batch-dependent environment. In a common progression, organizations move data from online data sources into a data warehouse through various transformations, and then they run reports or create cubes to determine impact.
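
As a concrete, deliberately toy version of that batch pattern, the sketch below extracts rows from a stand-in operational store, transforms them, loads the result into a stand-in warehouse table, and runs a report. The in-memory SQLite databases, table names, and values are invented for illustration; real environments would rely on dedicated ETL tooling.

    # Toy batch ETL: extract from an operational store, transform, load into a
    # warehouse table, then report. Databases, tables, and values are illustrative.
    import sqlite3

    source = sqlite3.connect(":memory:")      # stand-in for the operational system
    warehouse = sqlite3.connect(":memory:")   # stand-in for the data warehouse

    # A tiny stand-in operational table (values invented for illustration).
    source.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
    source.executemany("INSERT INTO orders VALUES (?, ?)",
                       [(1, 120.0), (2, 75.5), (1, 30.0), (3, 210.0)])

    # Extract the raw rows from the operational system.
    rows = source.execute("SELECT customer_id, amount FROM orders").fetchall()

    # Transform: aggregate per customer before loading.
    totals = {}
    for customer_id, amount in rows:
        totals[customer_id] = totals.get(customer_id, 0.0) + amount

    # Load the aggregates into the warehouse fact table.
    warehouse.execute("CREATE TABLE daily_sales (customer_id INTEGER, total REAL)")
    warehouse.executemany("INSERT INTO daily_sales VALUES (?, ?)", list(totals.items()))
    warehouse.commit()

    # Report: the batch output arrives well after the original transactions.
    for customer_id, total in warehouse.execute(
            "SELECT customer_id, total FROM daily_sales ORDER BY total DESC"):
        print(customer_id, total)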

 

In these environments, latency between the initial event and actual action tends to be very high. By the time data is extracted, transformed, loaded, and analyzed, its relevance has decreased while the associated costs continue to rise. In general, there is the cost of holding data and the cost of converting that data from data store to data warehouse, and these can be very high. It should be no surprise, then, that decisions on how far you can go back and how much additional data you can use are often made based on the cost of the environment, rather than the value to the business.

 

The future of this environment is that new foundation technologies—such as Hadoop, in-memory databases, NoSQL and graph databases, advanced algorithms, and machine learning—will change the landscape of analytics dramatically.

 

These advances, which are now well under way, will allow us to get to a world in which analytics and orchestration tools do a lot of hard work for us. When an event happens, the analytics environment will determine what actions would best handle the issue and optimize the outcome. It will also trigger the change and let someone know why something changed … all automatically and without human intervention.

 

While this might be scary for some, it is rapidly becoming a capability that can be leveraged. It is in use today on trading floors, for example, to determine if events are illegal or to trigger specific trades. The financial industry is where much of the innovation around these items is taking place.

 

It is only a matter of time before most companies figure out how to take advantage of these same fundamental technologies to change their businesses.

 

Another item to keep in mind is that as organizations make greater use of analytics, visualization will become even more important. Why? Because a simple spreadsheet and graph will not be able to explain what is happening in a way humans will be able to understand. This is where we start to see the inclusion of the modeling and simulation capabilities that have existed in high-performance computing for years. These visualizations will help companies pick that needle out of a data haystack in a way that helps them optimize profits, go after new business, and win new customers.

 

In follow-on posts, I will explore the path forward in the journey to the widespread use of analytics in an SDI environment. This is a path that moves first from reactive to predictive analytics, and then from the predictive to the prescriptive.  I will also explore a great use case—security—for organizations getting started with analytics.

Read more >

Enabling data analytics success with optimized machine learning tools

Last year I read an article in which Hadoop co-developers Mike Cafarella and Doug Cutting explained how they originally set out to build an open-source search engine. They saw it as serving a specific need to process massive amounts of data from the Internet, and they were surprised to find so much pent up demand for this kind of computing across all businesses. The article suggested it was a happy coincidence.

 

I see it more as a happy intersection of computing, storage and networking technology with business needs to use a growing supply of data more efficiently. Most of us know that Intel has developed much of the hardware technology that enables what we’ve come to call Big Data, but Intel is working hard to make the algorithms that run on Intel systems as efficient as possible, too. My colleague Pradeep Dubey presented a session this week at the Intel Developer Forum in San Francisco on how developers can take advantage of optimized data analytics and machine learning algorithms on Intel® Architecture-based data center platforms. In this blog I thought I would back up a bit and explain how this came about and why it’s so important.

 

The explosion of data available on the Internet has driven market needs for new ways to collect, process, and analyze it. In the past, companies mostly processed the data they created in the course of doing business. That data could be massive. For example, in 2012 it was estimated that Walmart collected data from more than one million customer transactions per hour. But it was mostly structured data that is relatively well behaved. Today the Internet offers up enormous amounts of mostly unstructured data, and the Internet of Things promises yet another surge. What businesses seek now goes beyond business intelligence. They seek business insight, which is intelligence applied.

 

What makes the new data different isn’t just that there’s so much of it, but that an estimated 80 percent of it is unstructured—comprised of text, images, and audio that defies confinement to the rows and columns of traditional databases. It also defies attempts to tame it with traditional analytics because it needs to be interpreted before it can be used in predictive algorithms.  Humans just can’t process data efficiently or consistently enough to analyze all this unstructured data, so the burden of extracting meaning from it lands on the computers in the data center.

 

First, let’s understand this burden a little deeper. A key element of the approach I described above is machine learning. We ask the machine to actually learn from the data, to develop models that represent this learning, and to use the models to make predictions or decisions. There are many machine learning techniques that enable this, but they all have two things in common: They require a lot of computing horsepower and they are complex for the programmer to implement in a way that uses data center resources efficiently. So our approach at Intel is two-fold:

 

  • Optimize the Intel® Xeon processor and the Intel® Xeon Phi™ coprocessor hardware to handle the key parts of machine learning algorithms very efficiently.

  • Make these optimizations readily available to developers through libraries and applications that take advantage of the capabilities of the hardware using standard programming languages and familiar programming models.

 

Intel Xeon Phi enhances parallelism and provides a specialized instruction set to implement key data analytics functions in the hardware. To access those capabilities, we provide an array of supporting software like the Intel Data Analytics Acceleration Library, a set of optimized building blocks that can be used in all stages of the data analytics workflow; the Intel Math Kernel Library, math processing routines that increase application performance on Xeon processors and reduce development time; and the Intel Analytics Toolkit for Apache Hadoop, which lets data scientists focus on analytics instead of mastering the details of programming for Hadoop and myriad open source tools.
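
These libraries sit beneath familiar tools, so the day-to-day machine-learning loop itself stays simple: fit a model on data, then use it to predict. The minimal sketch below uses scikit-learn on a synthetic dataset purely as an illustration; scikit-learn is a stand-in for whatever framework you prefer and is not one of the Intel libraries named above, although its NumPy underpinnings can be built against optimized math libraries such as Intel MKL.

    # Minimal machine-learning loop: learn a model from data, then make predictions.
    # The synthetic dataset and logistic-regression model are illustrative choices.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1_000)   # "learn from the data"
    model.fit(X_train, y_train)

    # "Use the model to make predictions or decisions."
    print("Held-out accuracy:", model.score(X_test, y_test))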

 

Furthermore, like the developers of Hadoop itself, we believe it’s important to foster a community around data analytics tools that engages experts from all quarters to make them better and easier to use. We think distributing these tools freely makes them more accessible and speeds progress across the whole field, so we rely on the open source model to empower the data analytics ecosystem we are creating around Intel Xeon systems. That’s not new for Intel; we’re already a top contributor to open source programs like Linux and Spark, and to Hadoop through our partnership with Cloudera. And that is definitely not a coincidence. Intel recognizes that open source brings talent and investment together to create solutions that people can build on rather than a bunch of competing solutions that diffuse the efforts of developers. Cafarella says it’s what made Hadoop so successful—and it’s the best way we’ve found to make Intel customers successful, too.

Read more >

Developers building exciting new services use disruptive technologies

Preventing data loss in servers has been an objective since the invention of the database. A tiny software or hardware glitch that causes a power interruption can result in lost data, potentially interrupting services and, in the worst case, costing millions of dollars. So database developers have been searching for ways to achieve high transaction throughput and persistent in-memory data.

 

The industry took a tentative step with power-protected volatile DIMMs. In the event of a server power failure, the power-protected DIMM activates its own small power supply, enabling it to flush volatile data to non-volatile media.  This feature, referred to as Asynchronous DRAM Refresh (ADR), is limited and quite proprietary.  Nevertheless, the power-protected DIMM became a concrete device around which architects could consider improvements to a persistent memory software model.

 

To build the software model, the Storage Networking Industry Association (SNIA.org) assembled some of the best minds in the storage and memory industries into a working group. Starting in 2012, they developed ideas for how applications and operating systems could ensure that in-memory data was persistent on the server. They considered not only power-protected DIMMs but also how emerging technologies, like resistive RAM memories, could fit into the model. Approved and published in 2013, the SNIA Persistent Memory Programming Model 1.0 became the first open architecture that allowed application developers to begin broad server enabling for persistent memory.

 

isv-libraries-blog-image.png

 

NVM.PM.VOLUME and NVM.PM.FILE mode examples

This graphic from the Storage Networking Industry Association shows examples of the programming model for a new generation of non-volatile memory.
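As a rough illustration of the NVM.PM.FILE idea, the sketch below walks through the flow the model describes (map a file, update it with ordinary loads and stores, then explicitly make the update durable), using only standard POSIX calls as a stand-in. On a persistent-memory-aware stack, the msync() step is replaced by more efficient cache-flush instructions. The mount path shown is hypothetical.

    /* Sketch of the NVM.PM.FILE-style flow using plain POSIX calls as a
     * stand-in: map a file, update it through ordinary loads/stores, then
     * explicitly make the update persistent. On a persistent-memory-aware
     * stack the msync() is replaced by optimized cache flushes.
     * The path below is hypothetical. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define REGION_LEN 4096

    int main(void)
    {
        int fd = open("/mnt/pmem/example", O_CREAT | O_RDWR, 0644);
        if (fd < 0 || ftruncate(fd, REGION_LEN) != 0) {
            perror("open/ftruncate");
            return 1;
        }

        char *region = mmap(NULL, REGION_LEN, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (region == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Update the mapped region with ordinary stores... */
        strcpy(region, "hello, persistent memory");

        /* ...then make the stores durable before relying on them. */
        msync(region, REGION_LEN, MS_SYNC);

        munmap(region, REGION_LEN);
        close(fd);
        return 0;
    }

Either way, the essential point of the model is that persistence becomes an explicit step the application controls, rather than a side effect of write() calls.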

 

Further impetus to program to the model emerged in late July 2015, when Intel and Micron announced they had started production on a new class of non-volatile memory, the first new memory category in more than 25 years. Introduced as 3D XPoint™ technology, this new class of NVM has the potential to revolutionize database, big data, high-performance computing, virtualization, storage, cloud, gaming, and many other applications.

 

3D XPoint (pronounced “three-D-cross-point”) promises non-volatile memory speeds up to 1,000 times faster¹ than NAND, today’s most popular non-volatile memory. It accomplishes this performance feat by putting large amounts of quickly accessible data close to the processor, where it can be accessed at speeds previously impossible for non-volatile storage.

 

The new 3D XPoint technology is the foundation for Intel MDIMMs, announced at the Intel Developer Forum in August. These DIMMs will deliver up to 4X higher system memory capacity than today’s servers, at a much more affordable price than DRAM. The result will be NVM DIMMs that can be widely adopted.

 

Of course, technology alone doesn’t deliver benefits to end users. Applications have to be written to take advantage of this disruptive technology. Building on the SNIA persistent memory programming model, open source developers have made Linux file systems persistent-memory aware and integrated those new capabilities into the Linux 4.0 upstream kernel.

 

Adding to the enabling effort, Intel and open source developers have been creating the Non-Volatile Memory Library (NVML) for Linux. NVML accelerates application development for persistent memory and is based on the open SNIA persistent memory programming model.

 

It’s safe to say that developers will find this open source library to be extremely valuable. It hides a lot of the programming complexity and management details that can slow the development process, while optimizing instructions for better performance.
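To show roughly what that looks like for the developer, here is a minimal sketch assuming the libpmem interface documented at pmem.io (pmem_map_file, pmem_persist, pmem_msync, pmem_unmap); exact function names and signatures may differ between NVML releases, and the file path is hypothetical.

    /* A minimal sketch, assuming the libpmem interface documented at
     * pmem.io; exact names and signatures may vary between NVML
     * releases, and the path is hypothetical. The library determines
     * whether the mapping is real persistent memory and picks the most
     * efficient flush mechanism. */
    #include <libpmem.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        size_t mapped_len;
        int is_pmem;

        char *addr = pmem_map_file("/mnt/pmem/example", 4096,
                                   PMEM_FILE_CREATE, 0644,
                                   &mapped_len, &is_pmem);
        if (addr == NULL) {
            perror("pmem_map_file");
            return 1;
        }

        strcpy(addr, "hello from NVML");

        if (is_pmem)
            pmem_persist(addr, mapped_len);   /* CPU cache-flush path   */
        else
            pmem_msync(addr, mapped_len);     /* falls back to msync()  */

        pmem_unmap(addr, mapped_len);
        return 0;
    }

Compared with the raw POSIX flow shown earlier, the library takes over the decisions about how and when to flush, which is exactly the complexity it is meant to hide.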

 

The five libraries in the NVML set will enable a wide range of developers to capitalize on 3D XPoint technology—and push applications into an all-new performance dimension.

 

Here’s the bottom line: 3D XPoint technology is coming soon to an Intel data center platform near you. If you’re a software developer, now is a good time to get up to speed on this new technology. With that thought in mind, here are a few steps you can take to prepare yourself for the coming performance revolution brought by a breakthrough technology.

 

Learn about the Persistent Memory programming model.

 

Read the documents and code supporting ACPI 6.0 and the Linux NFIT drivers:

http://www.uefi.org/sites/default/files/resources/ACPI_6.0.pdf

https://git.kernel.org/cgit/linux/kernel/git/djbw/nvdimm.git/log/?h=nd

https://github.com/pmem/ndctl

http://pmem.io/documents/

https://github.com/01org/prd

 

Learn about the Non-Volatile Memory Library (NVML) and subscribe to the mailing list.

 

Explore the Intel Architecture Instruction Set Extensions Programming Reference.

 

And if your application needs access to a large tier of memory but doesn’t need data persistence in memory, there’s an NVM library for that, too.

 

We’ll discuss Big Data, Java, and 3D XPoint™ in more depth in a future blog post.

 

¹ Performance difference based on comparison between 3D XPoint technology and other industry NAND.

Read more >

Stumped by the Internet of (Too Many) Things?

Here’s an interesting disconnect: 84 percent of C-suite executives believe that the Internet of Things (IoT) will create new sources of revenue. However, only 7 percent have committed to an IoT investment.[1] Why the gap between belief and action? Perhaps it’s because of the number of zeroes. Welcome to the world of overwhelming numbers: billions of things connecting to millions of sensors with 1.6 trillion dollars at stake.[2] What does a billion look or feel like, much less a trillion? If you’re like me, it’s difficult to relate to such large-scale numbers. So it’s not surprising that many companies are taking a wait-and-see approach. They will wait for the dust to settle—and for the numbers to become less abstract—before taking action.

Analysts make some big claims, and it can feel like IoT promises the world. But many businesses, both large and small, aren’t ready to invest in a brand new world, even if they believe that IoT can deliver on its promise. However, the same businesses that are wary of large promises could use connected things today to make small changes that might significantly impact profitability. For example, changes in the way your users conduct meetings could dramatically improve efficiency. Imagine a routine meeting that is assisted by fully connected sensors, apps, and devices. These connected things, forming a simple IoT solution, could anticipate your needs and do simple things for you to save time. They could reserve the conference room, dim the lights, adjust the temperature, and send notes to meeting attendees.

That’s why we here at Intel are so excited to partner with Citrix Octoblu. Designed with the mission to connect anything to everything, Octoblu offers a way for your business to take advantage of IoT today, even before all your things are connected. Octoblu provides software and APIs that automate interactions across smart devices, wearables, sensors, and many other things. Intel brings Intel IoT Gateways to that mix, which are pretested and optimized hardware platforms built specifically with IoT security in mind. The proven and trusted Intel reputation in the hardware industry, combined with Octoblu, a noted pioneer in IoT, can help address concerns about security and complexity as companies look at the possibilities for connected things.

IoT is shaping up to be more than just hype. Check out a new infographic that shows small, practical ways you can benefit from IoT today. Or read the Solution Brief to learn more about how the Intel and Citrix partnership can help you navigate the uncharted territory surrounding IoT.

[1] Accenture survey. “CEO Briefing 2015: From Productivity to Outcomes. Using the Internet of Things to Drive Future Business Strategies.” 2015. Written in collaboration with The Economist Intelligence Unit (EIU). https://www.accenture.com/t20150708T060455__w__/ke-en/_acnmedia/Accenture/Conversion-Assets/DotCom/Documents/Global/PDF/Dualpub_7/Accenture-CEO-Briefing-2015-Productivity-Outcomes-Internet-Things.pdf.

[2] McKinsey Global Institute. “Unlocking the Potential of the Internet of Things.” June 2015. http://www.mckinsey.com/insights/business_technology/the_internet_of_things_the_value_of_digitizing_the_physical_world.

Read more >

Mobile is Vital to Healthcare Strategy

 

In the above clip, Bill Muth, a solution architect at CDW, explains how CIO strategies need to complement an organization’s mission and usually center on one of three areas: cost, differentiation, or focus. He says mobility is vital to a good health IT strategy.

 

Watch the video and let me know what questions you have. How did you develop your mobile health IT strategy?

Read more >

Pushing Machine Learning to a New Level with the Intel Xeon Processor and Intel Xeon Phi Coprocessor

By Pradeep K Dubey, Intel Fellow and Fellow of IEEE; Director, Parallel Computing Lab

 

 

Traditionally, there has been a balance of intelligence between computers and humans where all forms of number crunching and bit manipulations are left to computers, and the intelligent decision-making is left to us humans.  We are now at the cusp of a major transformation poised to disrupt this balance. There are two triggers for this: first, trillions of connected devices (the “Internet of Things”) converting the large untapped analog world around us to a digital world, and second, (thanks to Moore’s Law) the availability of beyond-exaflop levels of compute, making a large class of inferencing and decision-making problems now computationally tractable.

 

This leads to a new level of applications and services in the form of “Machine Intelligence Led Services.” These services will be distinguished by machines being in the ‘lead’ for tasks that were traditionally human-led, simply because computer-led implementations will reach and even surpass the best human-led quality metrics. Self-driving cars, where machines have literally taken the front seat, and IBM’s Watson winning the game of Jeopardy are just the tip of the iceberg in terms of what is computationally feasible now. This extends the reach of computing to largely untapped sectors of modern society: health, education, farming, and transportation, all of which often operate well below the desired levels of efficiency.

 

At the heart of this enablement is a class of algorithms generally known as machine learning. Machine learning was most concisely and precisely defined by Prof. Tom Mitchell of CMU almost two decades ago: “A computer program learns if its performance improves with experience.” Or alternately, “Machine learning is the study, development, and application of algorithms that improve their performance at some task based on experience (previous iterations).” Its human-like nature is apparent in the definition itself.
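As a toy illustration of that definition (mine, not Prof. Mitchell’s or anything from the talk), the short program below fits a single parameter to data and prints its error after each pass over the data: its performance at the task improves with experience.

    /* Toy illustration: a one-parameter model whose error on the task
     * shrinks with each pass over the data, i.e. its performance
     * improves with experience. */
    #include <stdio.h>

    int main(void)
    {
        /* Data generated by y = 3x; the program "learns" the slope w. */
        const double x[4] = {1, 2, 3, 4};
        const double y[4] = {3, 6, 9, 12};
        double w = 0.0, lr = 0.02;

        for (int epoch = 0; epoch < 5; epoch++) {
            double sq_err = 0.0;
            for (int i = 0; i < 4; i++) {
                double err = w * x[i] - y[i];
                sq_err += err * err;
                w -= lr * err * x[i];          /* gradient step */
            }
            printf("epoch %d: w = %.3f, squared error = %.3f\n",
                   epoch, w, sq_err);
        }
        return 0;
    }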

 

The theory of machine learning is not new; its potential, however, has largely gone unrealized due to the absence of the vast amounts of data needed to take machine performance to useful levels. All of this has now changed with the explosion of available data, making machine learning one of the most active areas of emerging algorithm research. Our research group, the Parallel Computing Lab, part of Intel Labs, has been at the forefront of such research. We seek to be an industry role model for application-driven architectural research. We work in close collaboration with leading academic and industry co-travelers to understand the architectural implications—hardware and software—for Intel’s upcoming multicore/many-core compute platforms.

 

At the Intel Developer Forum this week, I summarized our progress and findings. Specifically, I shared our analysis and optimization work on core machine learning functions for Intel architectures. We observe that the majority of today’s publicly available machine learning code delivers sub-optimal compute performance. The reasons include the complexity of these algorithms, their rapidly evolving nature, and a general lack of parallelism-awareness. This, in turn, has led to a myth that industry-standard CPUs can’t achieve the performance required for machine learning algorithms. However, we can “bust” this myth with optimized code, or code modernization to use another term, demonstrating the CPU performance and productivity benefits.

 

Our optimized code running on Intel’s latest family of Xeon processors delivers significantly higher performance (often more than two orders of magnitude) than the corresponding best performance figures published to date on the same processing platform. Our optimizations for core machine learning functions such as K-means clustering, collaborative filtering, logistic regression, support vector machine training, and deep learning classification and training achieve high levels of architectural, cost, and energy efficiency.
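For context, here is what the unoptimized core of one of those functions, a single K-means (Lloyd’s algorithm) iteration, looks like in plain C. This is a deliberately simple reference sketch, not Intel’s optimized code; the optimizations described above restructure exactly this kind of loop nest for vectorization, threading, and cache reuse.

    /* Plain, unoptimized reference for K-means (Lloyd's) iterations on
     * 1-D points: assign each point to its nearest centroid, then move
     * each centroid to the mean of its assigned points. */
    #include <math.h>
    #include <stdio.h>

    #define N 8   /* points   */
    #define K 2   /* clusters */

    int main(void)
    {
        double pts[N]  = {1.0, 1.2, 0.8, 1.1, 7.9, 8.2, 8.0, 7.7};
        double cent[K] = {0.0, 10.0};
        int assign[N];

        for (int iter = 0; iter < 10; iter++) {
            /* Assignment step: nearest centroid per point. */
            for (int i = 0; i < N; i++) {
                int best = 0;
                for (int k = 1; k < K; k++)
                    if (fabs(pts[i] - cent[k]) < fabs(pts[i] - cent[best]))
                        best = k;
                assign[i] = best;
            }
            /* Update step: centroid = mean of its assigned points. */
            for (int k = 0; k < K; k++) {
                double sum = 0.0;
                int cnt = 0;
                for (int i = 0; i < N; i++)
                    if (assign[i] == k) { sum += pts[i]; cnt++; }
                if (cnt > 0)
                    cent[k] = sum / cnt;
            }
        }
        printf("centroids: %.2f %.2f\n", cent[0], cent[1]);
        return 0;
    }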

 

In most cases, our achieved performance also exceeds the best compute performance published to date for special-purpose offload accelerators like GPUs. These accelerators, being special-purpose, often have significantly higher peak flops and bandwidth than our general-purpose processors. They also require significant software engineering effort to isolate and offload parts of the computation through their own programming model and tool chain. In contrast, the Intel® Xeon® processor and the upcoming Intel® Xeon Phi™ coprocessor (codename Knights Landing) offer common, non-offload-based, general-purpose processing platforms for parallel and highly parallel application segments, respectively.

 

A single-socket Knights Landing system is expected to deliver over 2.5X the performance of a dual-socket system based on the Intel Xeon processor E5 v3 family (E5-2697 v3; Haswell), as measured in images per second using the popular AlexNet neural network topology. Arguably, the most complex computational task in machine learning today is scaling state-of-the-art deep neural network topologies to large distributed systems. For this challenging task, using 64 nodes of Knights Landing, we expect to train the OverFeat-FAST topology (trained to 80% classification accuracy in 70 epochs using synchronous minibatch SGD) in a mere 3-4 hours. This represents more than a 2X improvement over the result on the same-sized, two-socket Intel Xeon processor E5-2697 v3 based Intel® Endeavour cluster.
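To make “synchronous minibatch SGD” concrete, the sketch below shows the basic data-parallel pattern: each node computes a gradient on its shard of the minibatch, the gradients are summed across nodes, and every node applies the same averaged update so all model replicas stay identical. This is a schematic that uses MPI for the synchronization step and a toy gradient; it is not the actual framework or model behind the result above.

    /* Schematic of synchronous minibatch SGD in a data-parallel setting
     * (illustrative only). Build with an MPI compiler wrapper such as
     * mpicc and run under mpirun. */
    #include <mpi.h>
    #include <stdio.h>

    #define NPARAMS 4

    int main(int argc, char **argv)
    {
        int rank, nranks;
        double w[NPARAMS] = {0};

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        for (int step = 0; step < 100; step++) {
            double grad[NPARAMS], sum_grad[NPARAMS];

            /* Toy local gradient: pull each weight toward (rank + 1).
             * A real implementation would backpropagate over this
             * rank's shard of the minibatch instead. */
            for (int i = 0; i < NPARAMS; i++)
                grad[i] = w[i] - (double)(rank + 1);

            /* Synchronization point: sum gradients across all ranks. */
            MPI_Allreduce(grad, sum_grad, NPARAMS,
                          MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

            /* Identical averaged update applied on every rank. */
            for (int i = 0; i < NPARAMS; i++)
                w[i] -= 0.1 * sum_grad[i] / nranks;
        }

        if (rank == 0)
            printf("w[0] after training: %.3f\n", w[0]);

        MPI_Finalize();
        return 0;
    }

The cost of that synchronization step is a large part of what determines how well training scales across nodes.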

 

More importantly, the coding and optimization techniques employed here deliver optimal performance on both Intel Xeon processors and Intel Xeon Phi coprocessors, at both the single-node and multi-node level. This is possible because they share a programming model and architecture. It preserves the software investment the industry has made in Intel Xeon, and hence reduces TCO for data center operators.

 

Perhaps more importantly, we are making these performance optimizations available to our developers through the familiar Intel-architecture tool chain, specifically through enhancements over the coming couple of quarters to the Intel® Math Kernel Library (MKL) and Data Analytics Acceleration Library (DAAL).  This significantly lowers the software barrier for developers while delivering highly performant, efficient, and portable implementations.

 

Let us together grow the use of machine learning and analytics to turn big data into deep insights and prescriptive analytics – getting machines to reason and prescribe a course of action in real-time for a smart and connected world of tomorrow, and extend the benefit of Moore’s Law to new application sectors of our society.

 

For further information click here to view the full presentation or visit www.intel.com/idfsessionsSF and search SPCS008.

Read more >

Retail: Transform through Cloud Capabilities

cloud for retail webinar.JPG

I was recently part of a webinar hosted by CDW focusing on Cloud for Retail. I had the privilege of being part of the panel highlighting the benefits and trends for retailers. The panel comprised Shane Zide from CDW, @ShaneZide; George Bentinck from Cisco Meraki, @Meraki_se; and Chip Epps from OneLogin. Considering the impact cloud is having on retail, the 60-minute webinar could highlight just a few significant trends; nonetheless, it provided a solid approach for those attending. I led by highlighting the current trends I feel are impacting retailers the most: financial flexibility and time to capabilities. The connected, empowered, and informed consumer has exerted significant pressure on the retail business model. Today’s business model must be nimble and flexible, capable of delivering on the brand’s promise. The advantages of utilizing a cloud strategy will positively impact retailers’ business models, specifically by enabling:

  1. Greater financial freedom by moving CAPEX obligations (data center investments) to OPEX budgeting (cloud-hosted non-essential applications such as an HR suite). By doing so, a retailer may be able to re-invest funding into more engaging brand experiences.
  2. A more nimble approach to brick-and-mortar storefronts, ultimately redefining the purpose and size of the store to meet the opportunity. For instance, the ability to extend your brand to new venues (festivals, bowl games, or locations like airports and urban settings).
  3. Increased productivity for the sales assistant, who will be connected to the right information at the point of influence, on the store floor. By utilizing cloud-based apps, the rep can become a sales advisor who knows more about the products, merchandise, and services the consumer is interested in, and where they exist in the supply chain.
  4. A greater customer experience. We know the shopper is connected, and retailers must deliver an experience that matches expectations. It must be engaging across the omnichannel.

At the end of the day, retailers will have more flexibility to scale stores up and down based on the demands of the operating environment if they consider how to integrate cloud solutions. Remember, cloud is not a destination, just a tool: a tool to improve the brand experience, to engage the shopper throughout their journey, to reduce cost, and to become more nimble. The good folks in CDW Retail and their extensive partner network are extremely knowledgeable and offer cloud consulting services. Take them up on it and prepare for the future. Additional resources are available from CDW Cloud Readiness.

Find me on LinkedIn

Follow me on Twitter

Read more >

Intel’s New Innovation Engine Enables Differentiated Firmware

Historically, platform-embedded firmware has limited the ways system-builders can customize, innovate, and differentiate their offerings. Today, Intel is streamlining the route for implementing new features with the creation of an “open engine” that lets system-builders run firmware of their own creation or choosing.

 

This important advance in platform architecture is known as the Innovation Engine. It was introduced this week at the Intel Developer Forum in San Francisco.

 

The Innovation Engine is a small Intel® architecture processor and I/O sub-system that will be embedded into future Intel data center platforms. The Innovation Engine enables system builders to create their own unique, differentiating firmware for server, storage, and networking markets. 

 

Some possible uses include hosting lightweight manageability features in order to reduce overall system cost, improving server performance by offloading BIOS and BMC routines, or augmenting the Intel® Management Engine for such things as telemetry and trusted boot.

 

These are just a few of the countless possibilities for the use of this new path into the heart of Intel processors. Truthfully, the uses for the Innovation Engine are limited only by the feature’s capability framework and the developer’s imagination.

 

It’s worth noting that the Innovation Engine is reserved for the system-builder’s code, not Intel firmware. Intel supplies only the hardware, and the system-builder can tailor things from there. As for security, Innovation Engine code is cryptographically bound to the system-builder: code not authenticated by the system-builder will not load.

 

As the name suggests, the Innovation Engine will drive a lot of great benefits for OEMs and, ultimately, end users. This embedded core in future Intel processors will foster creativity, innovation, and differentiation, while creating a simplified path for system-builders implementing new features and enabling full customer visibility into code and engine behavior.

 

Ultimately, this upcoming enhancement in Intel data center platforms is all about using Intel technology advancements to drive widespread innovation in the data center ecosystem.

 

Have thoughts you’d like to share? Pass them along on Twitter via @IntelITCenter. You can also listen to our IDF podcasts for more on the Innovation Engine.

Read more >

Network Transformation: Innovating on the Path to 5G

network-transformation-blog-banner.jpg

 

Close your eyes and try to count the number of times you’ve connected with computing today.  Hard to do? We have all witnessed this fundamental change: Computing has moved from a productivity tool to an essential part of ourselves, something that shapes the way we live, work, and engage in community.

 

Now, imagine how many times today you’ve thought about the network connectivity making all of these experiences possible.  Unless you’re like me, someone who is professionally invested in network innovation, the answer is probably close to zero.  But all of those essential experiences delivered every day to all of us would not exist without an amazing array of networking technologies working in concert.

 

In this light, the network is everything you can’t see but can’t live without. And without serious innovation in the network, all of the amazing computing innovations expected in the next few years simply can’t be experienced in the way they were intended.

 

At the Intel Developer Forum today, I had the pleasure of sharing the stage with my colleague Aicha Evans and industry leaders from SK Telecom, Verizon, and Ericsson, as we shared Intel’s vision for 5G networks from device to data center.  In this post, I’d like to share a few imperatives to deliver the agile and performant networks required to fuel the next wave of innovation.  IDF was the perfect place to share this message given that it all starts with the power of community: developers from across the industry working together to deliver impactful change.

 

So what’s changing? Think about the connectivity problems we experience today: dropped calls, constant buffering of streaming video, or download delays. Imagine if not only those problems disappeared, but new immersive experiences like 3D virtual reality gaming, real-time telemedicine, and augmented reality became pervasive in our everyday lives. With 5G, we believe they will.

 

5G is, of course, the next major upgrade to cellular connectivity. It represents improved performance but, even more importantly, massive increases in the intelligence and flexibility of the network. One innovation in this area is Mobile Edge Computing (MEC). To picture the mobile edge, imagine cell tower base stations embedded with cloud-computing-based intelligence, or “cloudlets,” creating the opportunity for network operators to deliver high-performance, low-latency services like the ones I described above.

 

As networks become more intelligent, the services that run on them become more intelligent too. MEC will provide the computing power to also deliver Service Aware Networks, which will dynamically process and prioritize traffic based on service type and application. As a result, operators gain more control, developers can more easily innovate new personalized services, and users enjoy a higher quality of experience.

 

Another exciting innovation is Anchor-Booster technology, which takes advantage of the principles of Software Defined Networking (SDN). It allows devices to take better advantage of spectrum like millimeter wave to boost network throughput by 10X or more.

 

These technologies may seem futuristic, but Intel has already been working with the industry for several years to use cloud technology to reinvent the network, similar to how it reinvented the data center. We call this network transformation, and it represents a move from fixed-function, purpose-built network infrastructure to adaptable networks based on Network Functions Virtualization (NFV) and SDN. Within this model, network functions reside within virtual machines or software containers, are managed by centralized controllers and orchestrators, and are dynamically provisioned to meet the needs of the network. The change this represents for the communication service provider industry is massive. NFV and SDN are dramatically changing the rate of innovation in communications networking and creating enormous opportunities for the industry to deliver new services at cloud pace.

 

Our work is well underway.  As a key pillar of our efforts, we established the Intel Network Builders program two years ago at IDF, and since its inception it has grown to over 170 industry leaders, including strategic end users, working together towards solution optimization, trials, and dozens of early commercial deployments.

 

And today, I was excited to announce the next step towards network transformation with the Intel® Network Builders Fast Track, a new investment and collaboration initiative to ignite solution stack delivery, integrate proven solutions through blueprint publications, and optimize solution performance and interoperability through new third party labs and interoperability centers. These programs were specifically designed to address the most critical challenges facing broad deployment of virtualized network solutions and are already being met with enthusiasm and engagement by our Intel Network Builders members, helping us all towards delivery of a host of new solutions for the market.  If you’re engaged in the networking arena as a developer of solutions or a provider, I encourage you to engage with us as we transform the network together.

 

Imagine: No more dropped calls, no more buffered video. Just essential experiences delivered in the manner intended, and exciting new experiences to further enrich the way we live and work. The delivery of this invisible imperative just became much clearer.

Read more >