Recent Blog Posts

Making Integrated Care Work – IDC Pan-European Healthcare Executive Summit 2015

The theme of convergence and making integrated care work resonated throughout the opening day of IDC’s Pan-European Healthcare Executive Summit in Dublin. It’s fantastic to see how much collective drive there is amongst the healthcare community to collaborate and be more flexible to achieve this paradigm shift which will help to deliver innovative, safer and sustainable care.


Major Healthcare Challenges Today

As the foundation sponsor keynote speaker, I thought it was important to set the scene and lay out the challenges that lie ahead if we are to truly push forward with a more integrated system of healthcare delivery. And I wanted to share that with you here too. I see four major issues in global health today:

  • Ageing Population¹ – 2bn people over 60 years old by 2050
  • Medical Inflation² – 50% increase in worldwide health costs by 2020
  • Consumerism³ – increasingly engaged patients via apps, devices, wearables, etc.
  • Worker Shortage⁴ – 4.3m global shortfall of doctors and nurses today


All of these issues are interconnected. For example, an ageing population highlights the need for us to robustly tackle chronic conditions, such as diabetes, respiratory disease and dementia, which are soaking up healthcare resources. I’ve talked previously about how the changing narrative of care can help to reduce healthcare costs, but it’s integration and collaboration across the entire healthcare ecosystem that will accelerate change.


The Foundations to Deliver Precision Medicine

Technology can help us to move towards a pervasive, continuous notion of healthcare by interconnecting all of the players that deliver health and well-being to the population. Think not just of primary care, but of community and home care too; throw lifestyle and environment into the mix alongside omic profiling, and we begin to create the foundations to deliver precision medicine at the bedside.


I think we’d all agree that the quality of life of a patient is enhanced when they enjoy independent healthy living – it’s also more cost-effective for healthcare providers. Integrated care means that the patient benefits from a fully joined-up approach from providers, care is seamless so that time in hospital is kept to a minimum, and patients and carers are armed with the right support to prevent readmissions.


Innovation Today

The obvious example (and one where some countries, such as Sweden, are really forging ahead) is easily accessible Electronic Medical Records, which can be updated and shared by caregivers across a range of settings to ensure the most up-to-date clinical information is available at the right place and at the right time. But I’m also seeing some fantastic innovations around how the Internet of Things is benefiting both patient and provider. This is not about future-gazing; this is about prevention rather than cure, using the technology we have available today to join the dots where it has simply been too difficult, costly or, in some cases, impossible to do until now.


Managing a Complex Healthcare Ecosystem

I’m always keen to emphasise that the really, really hard stuff is in fact the soft stuff. We have brilliant engineers here at Intel who are doing incredible things to move healthcare forward, but it’s changing the perceptions and realities of the players within the healthcare ecosystem that is the big challenge. We must accept that every player should be interconnected, that includes the patient, the payer, the device-maker and the researcher – no single piece of this hugely complex jigsaw should be operating in isolation if we want to collectively reduce costs and better manage those chronic diseases. Business models are changing and relationships are changing, they have to, so it’s great to see that conversation playing out so positively here in Dublin this week.



¹ United Nations, Population Ageing and Development, 2009

² Bain & Company, 2011, “The Great Eight: Trillion-Dollar Growth Trends to 2020”

⁴ Worker Shortage: World Health Organization, 2007

Inefficiency and Poor Patient Experience: The Institute of Medicine, “Better Care at Lower Cost”

Read more >

Cybersecurity will Improve with Predictive Models

Prediction capabilities can have tremendous value in the world of security. They allow for better allocation of resources. Instead of trying to defend everything from all types of attacks, prediction allows a smarter positioning of preventative, detective, and responsive investments to intersect where the attacks are likely to occur.


There is a natural progression in security maturity. First, organizations invest in preventative measures to stop the impacts of attacks. Quickly they realize not all attacks are being stopped, so they invest in detective mechanisms to identify when an attack successfully bypasses the preventative controls. Armed with alerts of incursions, response capabilities must be established to quickly interdict, minimize the losses, and guide the environment back to a normal state of operation. All these resources are important, but they must potentially cover a vast electronic and human ecosystem. It simply becomes too large to demand every square inch be equally protected, updated, monitored, and made recoverable. The amount of resources required would be untenable. The epic failure of the Maginot Line is a great historic example of ineffective overspending.

[Figure: Strategic Cybersecurity Capability Process]

Prioritization is what is needed to properly align security resources to where they are the most advantageous. Part of the process is to understand which assets are valuable, but also which are being targeted. As it turns out, the best strategy is not about protecting everything from every possible attack. Rather, it is focusing on protecting those important resources which are most likely to be attacked. This is where predictive modeling comes into play. It is all part of a strategic cybersecurity capability.
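As a toy illustration of this kind of prioritization (the asset names, business values, and attack likelihoods below are entirely hypothetical), a defender might rank assets by expected exposure, the product of value and estimated likelihood of attack, and fund controls from the top of the list down:

```python
# Hypothetical sketch: rank assets by expected exposure (value x likelihood).
# All names and numbers are invented for illustration, not real guidance.

assets = [
    # (asset name, business value 1-10, estimated attack likelihood 0-1)
    ("payroll-db",     9, 0.7),
    ("public-website", 5, 0.9),
    ("build-server",   6, 0.3),
    ("test-lab",       2, 0.2),
]

def exposure(asset):
    """Expected exposure score: business value times attack likelihood."""
    _, value, likelihood = asset
    return value * likelihood

# Highest-exposure assets come first; they get security investment priority.
ranked = sorted(assets, key=exposure, reverse=True)

for name, value, likelihood in ranked:
    print(f"{name}: exposure {value * likelihood:.2f}")
```

In practice the likelihood column is exactly what predictive models supply; the value column comes from business impact analysis.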


“He who defends everything, defends nothing” – Frederick the Great


In short, being able to predict where the most likely attacks will occur provides an advantage in the allocation of security resources for the maximum effect. The right predictive model can be a force-multiplier in adversarial confrontations. Many organizations are designed around the venerable Prevent/Detect/Recover model (or something similar). The descriptions have changed a bit over the years, but the premise remains the same: a three-part introspective defensive structure. However, the very best organizations apply analytics and intelligence, including specific aspects of attackers’ methods and objectives, for Predictive capabilities. This completes the circular process with a continuous feedback loop that helps optimize all the other areas. Without it, Prevention attempts to block all possible attacks, while Detection and Response struggle to do the same for the entirety of their domains. It is just not efficient, and therefore not sustainable over time. With good Predictive capabilities, Prevention can focus on the most likely or riskiest attacks, and the same goes for Detection and Response. Overall, it aligns the security posture to best resist the threats it faces.


There are many different types of predictive models: actuary learning models, baseline-anomaly analysis, and my favorite, threat intelligence. One is not uniformly better than the others; each has strengths and weaknesses. The real world has thousands of years of experience with such models. The practice has been applied to warfare, politics, insurance, and a multitude of other areas. Strategists have great use for such capabilities in understanding the best path forward in a shifting environment.


Actuary learning models are heavily used in the insurance industry, with prediction based upon historical averages of events. Baseline-anomaly analysis is leveraged in the technology, research, and finance fields to identify outliers in expected performance and time-to-failure. Threat agent intelligence, knowing your adversary, is strongly applied in warfare and adversarial situations where an intelligent attacker exists. The digital security industry is just coming into a state of awareness where it sees the potential and value. Historically, such models suffered from a lack of data quantity and timeliness. The digital world has both in abundance – so much, in fact, that the quantity is a problem to manage. But computer security has a different challenge: rapid advances in technology lead to a staggering diversity in the avenues attackers can exploit. Environmental stability is a key success criterion for the accuracy of all such models. It becomes very difficult to maintain a comprehensive analysis in a chaotic environment where very little remains consistent. This is where the power of computing can help offset the complications and apply these concepts to the benefit of cybersecurity.
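For instance, a minimal baseline-anomaly check (all numbers below are made up for illustration) might flag any day whose failed-login count sits more than three standard deviations from a historical baseline; real systems track many signals and adapt the baseline over time:

```python
import statistics

# Hypothetical baseline of daily failed-login counts (invented numbers).
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count, threshold=3.0):
    """Flag counts more than `threshold` standard deviations from the mean."""
    return abs(count - mean) / stdev > threshold

print(is_anomalous(14))  # a typical day -> False
print(is_anomalous(90))  # far outside the baseline -> True
```

The weakness the paragraph above describes shows up immediately: if the environment shifts (new app rollout, a merger, holiday traffic), the stored baseline goes stale and the model misfires until it is retrained.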


There is a reality which must first be addressed.  Predictive systems are best suited for environments which have already established a solid infrastructure and baseline capabilities.  The maturity state of most organizations has not yet evolved to a condition where an investment in predictive analytics is right for them.  You can’t run before you walk.  Many companies are still struggling with the basics of security and good hygiene (understanding their environment, closing the big attack vectors/vulnerabilities, effective training, regulatory compliance, data management, metrics, etc.).  For them, it is better to establish the basics before venturing into enhancement techniques.  But for those who are more advanced, capable, and stable, the next logical step may be to optimize the use of their security resources with predictive insights.  Although a small number of companies are ready and some are travelling down this path, I think over time Managed Security Service Providers (MSSPs) will lead the broader charge for widespread and cross-vertical market adoption. MSSPs are in a great position to both establish the basics and implement predictive models across the breadth of their clients.


When it comes to building and configuring predictive threat tools, which tap into vast amounts of data, many hold to the belief that data scientists should be leading the programs to understand and locate obscure but relevant indicators of threats.  I disagree.  Data scientists are important in manipulating data and programming the design for search parameters, but they are not experts in understanding what is meaningful and what the systems should be looking for.  As such, they tend to get mired in correlation-causation circular assumptions.  What can emerge are trends which are statistically interesting, yet do not actually have relevance or are in some cases misleading.  As an example, most law enforcement agencies do NOT use pure data-correlation methods for crime prediction, as these can lead to ‘profiling’ and then self-fulfilling prophecies.  The models they use are carefully defined by crime experts, not data scientists.  Non-experts simply lack the knowledge of what to look for and why it might be important.  It is really the experienced security or law-enforcement professional who knows what to consider, and who therefore should lead the configuration aspects of the design.  With the security expert’s insights and the data scientist’s ability to manipulate data, the right analytical search structures can be established.  So it must be a partnership between those who know what to look for (the expert) and those who can manipulate the tools to find it (the data scientist).


Expert systems can be tremendously valuable, but also a huge sink of time and resources.  Most successful models do their best when analyzing simple environments with a reasonable number of factors and a high degree of overall stability.  The models for international politics, asymmetric warfare attacks, serial-killer profiling, etc. are far from precise.  But being able to predict computer security issues is incredibly valuable, and it appears attainable.  Although much work and learning have yet to be accomplished, the data and processing are there to support the exercise.  I think the cybersecurity domain might be a very good environment for such systems to eventually thrive, delivering better risk management at scale for lower cost and improving the overall experience of their beneficiaries.



Twitter: @Matt_Rosenquist

Intel Network: My Previous Posts


Read more >

Talking Allowed: How Workplace Collaboration Is Better with Technology


Despite the fact that email is a part of daily life (it’s officially middle aged at 40+ years old), for many of us nothing beats the art of real human conversation. So it’s exciting to see how Intel vPro technology can transform the way we do business simply by giving many people what they want and need: the ability to have instant, natural discussions.

In the first of our Business Devices Webinar Series, we focused on the use of technology to increase workplace productivity. This concept, while not new, is now easily within reach thanks to ever more efficient conferencing solutions that include simple ways to connect, display, and collaborate. From wireless docking that makes incompatible wires and cumbersome dongles a thing of the past, to more secure, multifactor authentication on Wi-Fi for seamless login, to scalability across multiple devices and form factors, more robust communications solutions are suddenly possible.


Intel IT experts Corey Morris and Scott McWilliams shared how these improvements are transforming business across the enterprise. One great example of this productivity in action is at media and marketing services company Meredith Corporation. As the publisher of numerous magazines (including Better Homes and Gardens and Martha Stewart Living), and owner of numerous local TV stations across the United States, Meredith needed to keep its 4,500 employees connected, especially in remote locations. Dan Danes, SCCM manager for Meredith, said Intel vPro technology helped boost the company’s IT infrastructure while also reducing worker downtime.


In the 30-minute interactive chat following the presentation, intrigued webinar attendees peppered the speakers with questions. Here are a few highlights from that conversation:


Q: Is this an enterprise-only [technology]? Or will a small business be able to leverage this?

A: Both enterprise and small business can support vPro.


Q: Is there also a vPro [tool] available like TeamViewer, in which the user can press a Help button, get a code over a firewall, and connect?

A: There is MeshCentral, which is similar, but TeamViewer and LogMeIn do not have vPro OOB [out-of-band] capabilities.


Q: How do I get started? How do I contact a vPro expert?

A: You can go to to be connected with a vPro expert.

These interactive chats, an ongoing feature of our four-part webinar series, also offer an opportunity for each participant to win a cool prize just for asking a question. Congratulations to our first webinar winners: James Davis scored a new Dell Venue 8 Pro tablet and Joe Petzold is receiving SMS Audio BioSport smart earbuds with an integrated heart monitor.


Sound interesting? We hope you’ll join us for the second webinar, which will further explore how companies can reduce the total cost of ownership of a PC refresh via remote IT management. If you’ve already registered for the Business Devices Webinar Series, click on the link in the email reminder that you’ll receive a day or two before the event, and tune in October 15 at 10 a.m. PDT. If you want to register, you can do it here.


In the meantime, you can watch the Boost Business Productivity webinar here. Learn how you can boost your business productivity and get great offers at Or, if you want to try before you buy, check out the Battle Pack from Insights.

Read more >

Improving Brain Research Worldwide through the Intel® Modern Code Developer Challenge

One of the ways we take our stewardship of Moore’s Law seriously at Intel is by sponsoring contests and challenges for the developer community. With the help of partners, we offer developers incentives and platforms that spur the advancement of … Read more >

The post Improving Brain Research Worldwide through the Intel® Modern Code Developer Challenge appeared first on Intel Software and Services.

Read more >

Intel® Chip Chat Podcast Round-up: Third Quarter 2015

Throughout the third quarter of 2015 we had many excellent guests and topics, including the launch of the Intel® Xeon® processor E7 v3; the launch of the Intel® Xeon® Processor E3-1200 v4 Product Family with integrated Intel® Iris™ Pro Graphics; and several interviews from the OpenStack Summit in Vancouver, B.C., featuring Margaret Chiosi from AT&T, Ken Won from HP, and Curt Aubley from Intel. We also got to chat with Intel’s Das Kamhout about the new Intel Cloud for All initiative, discuss OpenStack in Latin America with KIO Networks’ Francisco Araya, talk big data and analytics with Intel’s Alan Ross, and meet many other great guests.


If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!


Intel® Chip Chat:


In this livecast from the Intel Developer Forum (IDF) in San Francisco, Mike Ferron-Jones, Director of Datacenter Platform Technology Marketing, and Greg Matson, Director of SSD Strategic Planning and Product Marketing at Intel, discuss the announcement of Intel® Optane™ technology based on Intel’s new 3D XPoint™ technology. They outline how 3D XPoint technology is an entirely new class of nonvolatile memory that will revolutionize storage, enabling high-speed, high-capacity data storage close to the processor. This technology will be made available as SSDs (solid state drives) under the Intel Optane technology name, as well as in a DIMM (dual in-line memory module) form factor, which will open up new possibilities for the types of workloads and applications that can be accelerated or taken to whole new levels of big-memory applications. Greg emphasizes that Intel will be making Optane SSDs available for servers, enthusiast clients, and laptops in 2016.


In this livecast from the Intel Developer Forum (IDF) in San Francisco, John Leung, Software & System Architect at Intel, and Jeff Autor, Distinguished Technologist with the Servers Business Unit at HP, stop by to discuss the release of Redfish 1.0. They highlight how on August 4th, 2015, the DMTF (Distributed Management Task Force, Inc.) announced the availability of Redfish version 1.0, an adopted and approved industry-standard interface which simplifies the management of scalable compute platforms and is extensible beyond compute platforms. John and Jeff emphasize how Redfish 1.0 is a great example of what can be accomplished when technology leaders along the supply chain truly listen to the requests and feedback of end users and come together to satisfy those requests in an open and broad manner.


Jim Blakley, Visual Cloud Computing General Manager at Intel, chats about how Intel is driving innovation in visual cloud computing. He talks about the International Broadcasting Conference (IBC) and announces the launch of the Intel® Visual Compute Accelerator, the new Intel® Xeon® E3-based PCIe add-in card that brings media and graphics capabilities into Intel® Xeon® processor E5-based servers. Jim outlines how the Intel Visual Compute Accelerator enables real-time transcoding, specifically targeting AVC and HEVC workloads, and reduces the amount of storage and network bandwidth needed to deliver the transcoded video streams. He also highlights several partners that will be demoing Intel technology or appearing in the Intel booth at IBC, including Thomson Video Networks, Envivio, Kontron, Artesyn, and Vantrix. To learn more, follow Jim on Twitter


In this livecast from the Intel Developer Forum (IDF) in San Francisco Das Kamhout, Principal Engineer and SDI Architect at Intel discusses the Intel Cloud for All Initiative and how Intel is working to enable tens of thousands of new clouds for a variety of usage models across the world. Das illuminates a concept he covered in his IDF session contrasting more traditional types of cloud infrastructure with a new model of cloud based upon the use of containers and the ability to run an application by scheduling processes across a data center. He explains how container based cloud architectures can create a highly efficient delivery of services within the data center by abstracting the infrastructure and allowing application developers to be more flexible. Das also highlights how Intel is investing in broad industry collaborations to create enterprise ready, easy to deploy SDI solutions. To learn more, follow Das on Twitter at


In this livecast from the Intel Developer Forum in San Francisco Curt Aubley, VP and CTO of Intel’s Data Center Group stops by to talk about some of the top trends that he sees in data center technology today. Curt emphasizes how the fundamental shift in capabilities in the data center is enabling businesses to create an incredible competitive differentiator when they take advantage of emerging technologies. He brings up how new technologies like Intel’s 3D XPoint™ are creating an amazing impact upon real time analytics and calls out how dynamic resource pooling is helping to drive a transformation in the network and enable the adoption of software defined networking (SDN) and network functions virtualization (NFV) to remove networking performance bottlenecks. Curt highlights many other data center technology trends from rack scale architecture (RSA) and intelligent orchestration to cutting edge security technologies like Intel® Cloud Integrity Technology.


Caroline Chan, Director of Wireless Access Strategy and Technology at Intel stops by to talk about the shift to 5G within the mobile industry and how the industry will need to solve more challenges than just making the cellular network faster to make this shift possible. She stresses that there needs to be a focus on an end to end system that will enable the communications and computing worlds to merge together to create a more efficient network and better business model overall. Caroline also discusses possible upcoming 5G related showcases that will happen in Asia within the next 3-7 years and how Intel is collaborating immensely with many initiatives in Europe, Asia, and around the world to help drive the innovation of 5G.


John Healy, General Manager of the Software Defined Networking Division within Intel’s Network Platforms Group stops by to chat about the current network transformation that is occurring and how open standards and software are integral to building the base of a new infrastructure that can keep pace with the insatiable demand being put on the network by end users. He illustrates how Intel is driving these open standards and open source solutions through involvement in initiatives like OpenStack*, OpenDaylight*, and the development of Intel Network Builders to create interoperability and ease of adoption for end-users. John also highlights the Intel® Open Network Platform and how it has been awarded the LightReading Leading Lights award for being the most Innovative network functions virtualization (NFV) product strategy technology.


Alan Ross, Senior Principal Engineer at Intel outlines how quickly the amount of data that enterprises deal with is scaling from millions to tens of billions and how gaining actionable insight from such unfathomable amounts of data is becoming increasingly challenging. He discusses how Intel is helping to develop analytics platform-as-a-service to better enable flexible adoption of new algorithms and applications that can expose data to end users allowing them to glean near real-time insights from such a constant flood of data. Alan also illustrates the incredible potential for advances in healthcare, disaster preparedness, and data security that can come from collecting and analyzing the growing expanse of big data.


Francisco Araya, Development and Operations Research & Development Lead at KIO Networks, stops by to talk about how KIO Networks has delivered one of the first public clouds in Latin America based on OpenStack. He mentions that when KIO Networks first started implementing OpenStack it took about 2 months to complete an installation, and now, thanks to the strong OpenStack ecosystem, it only takes about 3 hours for his team to complete an installation. Francisco emphasizes how the growing number of OpenStack offerings and provider use cases greatly increases ease and confidence when implementing OpenStack.


Rob Crooke, Senior VP and GM of NVM Solutions Group at Intel, discusses how Intel is breaking new ground in a type of memory technology that is going to help solve real computing problems and change the industry moving forward. This disruptive new technology is significantly denser and faster than DRAM and NAND technology. Rob outlines how this non-volatile memory will likely be utilized across many segments of the computing industry and have incredible effects on the speed, density, and cost of memory and storage moving into the future. To learn more, visit and search for ‘non-volatile memory’.


Das Kamhout, Principal Engineer and SDI Architect at Intel joins us to announce Intel’s launch of the Cloud for All initiative founded to accelerate cloud adoption and create tens of thousands of new clouds. He emphasizes how Intel is in a unique position to help align the industry towards delivery of easy to deploy cloud solutions based on standards based solutions optimized for enterprise capability. Das discusses that Cloud for All is a collaborative initiative involving many different companies including a new collaboration with Rackspace and ongoing work with companies including CoreOS, Docker, Mesosphere, Redapt, and Red Hat.


In this livecast from Big Telecom Sandra Rivera, Vice President and General Manager of the Network Platforms Group at Intel chats about the network transformation occurring within telecommunications and enterprise industries. She talks about how moving to an open industry standard solution base has created a shift in the industry paradigm from vertically integrated purpose built solutions supplied by one provider to a model where end users can choose best of breed modules from a number of different providers. This network transformation is providing a number of new business opportunities for many telecom equipment and networking equipment manufacturers. To learn more, follow Sandra on Twitter

Brian McCarson, Senior Principal Engineer and Senior IoT System Architect for the Internet of Things Group at Intel, chats about the amazing innovations happening within the Internet of Things (IoT) arena and the core technology from Intel that enables IoT to achieve its full potential. He emphasizes how important the security and accuracy of data are as the number of IoT devices grows to potentially 50 billion by 2020, and how Intel provides world-class security software capabilities and hardware-level security to help protect against the risks associated with deploying IoT solutions. Brian also describes the Intel IoT Platform, which is designed to promote security, scalability, and interoperability and creates a standard that allows customers to reduce time to market and increase trust when deploying IoT solutions. To learn more, visit


Bill Mannel, General Manager and Vice President at Hewlett-Packard, stops by to discuss the growing demand for high performance computing (HPC) solutions and the innovative use of HPC to manage big data. He highlights an alliance between Intel and HP that will accelerate HPC and big data solutions tailored to meet the latest needs and workloads of HPC customers, leading with customized vertical solutions. Experts from both companies will be working together to accelerate code modernization of customer workloads in verticals including life sciences, oil and gas, financial services, and more. To learn more, visit


In this livecast from the OpenStack Summit in Vancouver B.C. Das Kamhout, Principal Engineer and SDI Architect at Intel, stops by to chat about democratizing cloud computing and making some of the most complicated cloud solutions available to the masses. He outlines key changes occurring in cloud computing today like automation and hyperscale, highlighting how key technologies like OpenStack enable smaller cloud end users to operate in similar ways as some of the largest cloud using organizations. To learn more, follow Das on Twitter


In this livecast from the OpenStack Summit in Vancouver B.C., Mauri Whalen, VP & Director of Core System Development in the Open Source Technology Center at Intel, discusses how beneficial open source software innovation like OpenStack is and how the collaborative process helps produce the highest quality code and software possible. She also discusses the importance of initiatives like the Women of OpenStack and how supporting diversity within the open source community enables a better end product overall and ensures that all populations are represented in the creation of different solutions.


In this livecast from the OpenStack Summit in Vancouver B.C. Cathy Spence, Principal Engineer at Intel stops by to talk about Intel’s IT infrastructure move to the cloud and how their focus has evolved to center around agility and self service provisioning on demand services. She discusses how enterprise IT needs more applications that are designed for cloud to take advantage of private cloud implementations and more efficiently use public cloud solutions. Cathy also highlights Intel’s engagement with OpenStack, the Open Data Center Alliance, and other organizations that are driving best practices for cloud and advancing the industry as a whole. To learn more, follow Cathy on Twitter @cw_spence.


In this livecast from the OpenStack Summit in Vancouver B.C., Margaret Chiosi, Distinguished Network Architect at AT&T Labs, stops by to chat about how OpenStack is influencing the telecommunications industry and AT&T’s goals for transforming to a software-centric network. She also discusses the Open Platform for Network Functions Virtualization (OPNFV) project and the work being done to create a platform that is accepted industry wide to ensure consistency, innovation, and interoperability between different open source components.


In this livecast from the OpenStack Summit in Vancouver B.C. Ken Won, Director of Cloud Software Product Marketing at HP, chats about the HP Helion strategy that is helping customers shift from traditional to on demand infrastructure environments to drive down costs and deal with common compliance issues. He also describes how HP is heavily engaged in the OpenStack community to help drive the portability and standards for all different types of cloud environments making it easy for end users to shift resources and utilize the right infrastructure based on their application needs. To learn more, visit


In this livecast from the OpenStack Summit in Vancouver B.C. Curt Aubley, VP and CTO of Data Center Group at Intel stops by to talk about how OpenStack provides a foundational capability for cloud computing that allows customers to tailor and share common technologies to better address their specific needs. Curt discusses Intel® Cloud Integrity Technology and emphasizes how important it is to establish a foundation of trust to allow customers to easily move their workloads into the cloud. He highlights how standard approaches to security help facilitate flexibility and interoperability which in turn lowers levels of risk for everyone in the industry.


Jim Blakley, Visual Cloud Computing General Manager at Intel stops by to chat about the large growth in the use of cloud graphics and media processing applications and the increasing demands these applications are putting on the data center. He discusses the launch of the new Intel® Xeon® Processor E3-1200 v4 Product Family with integrated Intel® Iris™ Pro Graphics which provides up to 1.4x performance vs. the previous generation for video transcoding, as well as substantial improvement in overall media and graphics processing. These improvements not only benefit video quality and graphics rendering for end users, but also bring a better cost of ownership for data center managers by increasing density, throughput, and overall performance per rack. To learn more, visit and search for Iris™ Pro graphics, Intel® Xeon® Processor E3-1200 v4 Product Family, or Quick Sync Video.


Susan McNeice, Marketing Thought Leadership at Oracle Communications, stops by to chat about how OpenStack* Enhanced Platform Awareness (EPA), which is built on open source solutions and supported by Intel, is helping the industry rethink strategies for managing a telecommunications cloud. She also discusses how EPA is addressing the gap that exists today between orchestrating virtualized network function (VNF) activity, delivering services into the network, and the conversation with the processor platform. To learn more, visit


Lynn Comp, Director of the Market Development Organization for the Network Products Group at Intel, stops by to chat about the advances that have been made in network virtualization and flexible orchestration enabling applications to be spun up within a virtual machine in minutes instead of months. She outlines how Intel is driving network transformation to a software defined infrastructure (SDI) by enabling network orchestrators to more rapidly apply security protocols to virtual applications. Lynn also highlights how enterprises are already employing virtualized routers, firewalls, and other aspects of network functions virtualization (NFV) and that NFV is already a mainstream trend with lots of reference material and applications available for enterprises to utilize. To learn more, follow Lynn on Twitter @comp_lynn or visit


In this archive of a livecast from Mobile World Congress, Guy Shemesh, Senior Director of the CloudBand Business Unit at Alcatel-Lucent, stops by to talk about how the CloudBand* platform enables service providers to accelerate adoption of Network Functions Virtualization (NFV). Guy emphasizes how important it is to embrace the open source community in such a rapidly changing industry in order to ensure the ability to adapt to different market trends and capture additional value for customers. To learn more, visit


Vineeth Ram, VP of Product Marketing at HP Servers, chats about how HP is working to reimagine the server for the data-driven organization and the wide breadth of solutions that HP has to offer. He outlines how HP is focused on redefining compute and how they are leveraging the infrastructure to deliver significant business outcomes and drive new insights from big data for their customers. To learn more, visit


Jim McHugh, VP of UCS & Data Center Solutions Marketing at Cisco, stops by to talk about new possibilities that the launch of the Intel® Xeon® processor E7 v3 family will bring to Cisco’s Unified Computing System (UCS) in the big data and analytics arena. He emphasizes how new insights driven by big data can help businesses become intelligence-driven to create a perpetual and renewable competitive edge within their field. To learn more, visit


Ravi Pendekanti, Vice President of Server Solutions Marketing at Dell, stops by to talk about the launch of Dell’s PowerEdge R930* four-socket server that incorporates the new Intel® Xeon® processor E7 v3 family. Ravi discusses how the PowerEdge R930 will help enterprise customers migrate from RISC-based servers to more energy-efficient servers like the R930 that will deliver greater levels of performance for demanding mission critical workloads and applications. To learn more, visit


Scott Hawkins, the Executive Director of Marketing for the Enterprise Business Group at Lenovo, stops by to chat about how Lenovo is refreshing its high-end X6 portfolio to bring greater performance and security to its customers. He highlights how Lenovo’s X6 portfolio was truly enabled by the leadership collaboration between Intel and IBM and outlines how the launch of the Intel® Xeon® processor E7 v3 family incorporated into Lenovo solutions will bring end users the highest levels of processor and storage performance as well as memory capacity and resiliency. To learn more, visit


Lisa Spelman, General Manager of Marketing for the Datacenter Group at Intel, discusses the launch of the new Intel® Xeon® processor E7 v3 family and how it is driving significant performance improvements for mission critical applications. She highlights how the incredible 12 terabyte memory capacity of the Intel® Xeon® processor E7 v3 is a game changer for in-memory computing that will enable enterprises to capture new business insights through real-time analytics and decision making.


Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

Read more >

Why Remote Direct Memory Access Is Now Essential

Over the years, people have talked about the potential of remote direct memory access (RDMA) to greatly accelerate application performance by bypassing the CPU and enabling direct access to memory. But there was a notable roadblock in this route to low-latency networking: slow storage media.


More specifically, with the slow speeds of widely used spinning disk and the relatively high cost of DRAM, there wasn’t a compelling reason for application developers to use RDMA for general purpose, distributed storage. Storage was basically a bottleneck in the I/O pipeline, and that bottleneck had the effect of negating the need for RDMA.


Now fast forward to 2015 and the arrival of a new generation of lightning-fast non-volatile memory (NVM) technologies, such as the upcoming Intel® Optane™ technology based on 3D XPoint™ memory. These new technologies are going to obliterate the storage bottlenecks of the past.


Consider these metrics from a fact sheet (PDF) from Intel and Micron, the joint developers of 3D XPoint technology:


  • HDD latency is measured in milliseconds, NAND latency is measured in microseconds, and 3D XPoint technology latency is measured in nanoseconds (one-billionth of a second)

  • 3D XPoint technology is up to 1,000x faster than NAND

  • In the time it takes an HDD to sprint the length of a basketball court, NAND could finish a marathon, and 3D XPoint technology could nearly circle the globe.
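To put those orders of magnitude side by side, here is a small illustrative calculation. The representative latency figures are assumptions chosen only to match the units cited above (milliseconds, microseconds, nanoseconds); they are not measured values for any particular device.

```python
# Illustrative storage-latency comparison across media generations.
# The figures below are assumptions for illustration, not benchmarks.
LATENCY_NS = {
    "HDD (spinning disk)": 5_000_000,  # milliseconds range (~5 ms)
    "NAND flash":          50_000,     # tens of microseconds
    "3D XPoint":           500,        # hundreds of nanoseconds
}

def speedup(slower: str, faster: str) -> float:
    """How many times lower the faster medium's latency is."""
    return LATENCY_NS[slower] / LATENCY_NS[faster]

for name, ns in LATENCY_NS.items():
    print(f"{name:>20}: {ns:>12,} ns")
print(f"NAND vs HDD:       {speedup('HDD (spinning disk)', 'NAND flash'):,.0f}x")
print(f"3D XPoint vs NAND: {speedup('NAND flash', '3D XPoint'):,.0f}x")
```

With these assumed figures, each media generation buys roughly two orders of magnitude in latency, which is what turns the spotlight onto everything else in the I/O path.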


So how do we make use of these revolutionary storage innovations?


As a first step, we need to remove the bottlenecks in storage software that was written for the era of spinning disk. The assumptions about storage speeds and memory access built into legacy code no longer apply.


After that problem is fixed, we need to move on to the networking side of the equation. With the new generation of NVM technologies, storage performance has leapt ahead of networking performance—at least when using common networking technologies. This evolutionary change in storage creates the need for the speed of RDMA, which does network processing much more efficiently by enabling direct access to memory.
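A back-of-the-envelope latency budget makes the point: once media latency drops to microseconds, a conventional kernel network stack dominates the total time for a remote read. All figures below are illustrative assumptions, not measurements.

```python
# Rough latency budget for a remote read: one network round trip plus
# one media access. All numbers are illustrative assumptions.
STORAGE_US = {"spinning disk": 5000.0, "NVM (3D XPoint-class)": 1.0}
NETWORK_US = {"kernel TCP/IP": 50.0, "RDMA": 3.0}

def total_latency(storage: str, network: str) -> float:
    """Total microseconds for a remote read over the given stack."""
    return STORAGE_US[storage] + NETWORK_US[network]

for s in STORAGE_US:
    for n in NETWORK_US:
        t = total_latency(s, n)
        net_share = NETWORK_US[n] / t * 100
        print(f"{s:>22} over {n:<13}: {t:8.1f} us, network = {net_share:4.1f}%")
```

With spinning disk, the network is a rounding error; with NVM-class media, the kernel stack becomes almost the whole budget, which is exactly the gap RDMA closes.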


Removing the imbalance between NVM and RDMA isn’t an untested proposition. One big cloud service provider, Microsoft Azure, is already there. They prove the concept every day, scaling workloads out over distributed cores and exploiting RDMA to offload cycles related to network processing. RDMA is one of their keys to achieving low latency and high message rates in bandwidth-hungry cloud applications.


If you are attending the SNIA Storage Developer Conference in Santa Clara this week, you will have the opportunity to explore these topics at various levels in presentations from Intel and Microsoft, among others. To learn more about RDMA, check out my pre-conference presentation, where we will explore RDMA and Four Trends in the Modern Data Center, as well as presentations from Chet Douglas and Tom Talpey. I also recommend Bev Crair’s keynote on Next Generation Storage and Andy Rudoff’s talk exploring the Next Decade of NVM Programming.


Meanwhile, for a closer look at today’s new non-volatile memory technologies, including those based on 3D XPoint technology, visit

Read more >

Making Precision Health a Reality Requires a Personal, Data-Driven Approach

Healthcare reform is a hot topic, and for good reason. We have a healthcare system that lacks a personalized approach to solving the puzzle of today’s most invasive diseases. We have a system that is expensive, fragmented and largely inaccessible to our underserved communities. The question is, how do we fix it?


Make healthcare personal

We talk a lot about scaling patient engagement, but what does that mean and what are the benefits? It’s simple. An engaged and informed patient is more likely to own their health and proactively work with their doctor and various care teams. Two-way collaboration gives clinicians greater access to more actionable patient-generated data, making collaborative care possible while increasing the quality and accuracy of patient electronic health records (EHRs).


Precision requires diverse data

Combining patient, clinical, diagnostic and ‘omic data will give us a more diverse data set, changing the way we view health data and potential treatments. But analyzing such large and varied data sets will require new architectural approaches. We will need to collect and store patient data in central, secure repositories where we can. We will also need solutions that can accommodate the large volumes of genomic data that are not efficient to move from the hospitals that generate and store them. Next-generation high performance computing (HPC) platforms that enable researchers from across the country to conduct large-scale collaborative analytics on millions of people’s data, wherever it resides, within an open and secure trust model will be key.

On September 17, the Precision Medicine Initiative Working Group formed under the National Institutes of Health (NIH) made a very bold announcement that could change the future of medicine. A cohort of one million or more Americans will volunteer to have their various healthcare data incorporated into a precision medicine platform that will accelerate research across many areas of health and disease. Researchers will now have a huge pool of diverse data to help them discover and quantify factors that contribute to illness, and then test approaches that can preserve health and treat disease.
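The idea of analyzing data wherever it resides can be sketched as federated aggregation: each site computes a local summary, and only those summaries, never the raw records, travel to the central platform. The site names, record fields, and variants below are invented purely for illustration.

```python
# Hypothetical sketch of federated analytics across hospitals.
# Raw records stay on-site; only aggregate counts are pooled centrally.
from collections import Counter

def local_summary(records):
    """Run on-site: count carriers per variant without exporting rows."""
    return Counter(r["variant"] for r in records if r["carrier"])

def pooled_summary(site_summaries):
    """Run centrally: merge the per-site counts into one view."""
    total = Counter()
    for summary in site_summaries:
        total.update(summary)
    return total

# Invented example data standing in for on-site genomic records.
hospital_a = [
    {"variant": "BRCA1", "carrier": True},
    {"variant": "BRCA2", "carrier": False},
]
hospital_b = [
    {"variant": "BRCA1", "carrier": True},
    {"variant": "BRCA1", "carrier": True},
]

pooled = pooled_summary([local_summary(hospital_a), local_summary(hospital_b)])
print(pooled)
```

Real platforms would add privacy protections (consent, de-identification, secure channels) around this pattern; the sketch shows only the data-movement principle.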


Securing the ability for participants and institutions to efficiently access this broader dataset will be crucial. With imaging, genomic, and consumer generated data beginning to scale, we should start with commitments to and validation of interoperability standards from the outset, so we do not recreate the problems seen in traditional EHR data.


What questions do you have?


Learn more:


US Senate Committee on Health, Education, Labor and Pensions hearing

National Institutes of Health one million research cohort to help millions of Americans who suffer from disease



Read more >

Wearable Data From One Million Patients?

It’s great when two different parts of my life at Intel collide.


Last week I had the opportunity to chat with Andrew Lamkin, a colleague at Intel who has been working on a project to put the prototyping of new healthcare wearables in the hands of anyone with a 3-D printer and a desire to create a useful new device.


In this project, Andrew’s team published a 3-D model for a wristwatch bezel that can be fitted with an Intel Edison and one or more breakout boards with sensors. The Edison’s computing power, combined with its ability to communicate via WiFi and Bluetooth, makes it ideal for recording and transmitting a variety of signals from a user’s wrist. Data from accelerometer, temperature and a number of other sensors can be streamed from the device.
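A device like this would typically timestamp each reading and serialize it for transport. The sketch below shows that packaging step; the `read_*` functions are hypothetical stand-ins for real breakout-board drivers and just return dummy values.

```python
# Sketch of packaging wrist-sensor readings for streaming off a
# wearable. The read_* functions are hypothetical stand-ins for
# breakout-board drivers; the values are dummies, not real signals.
import json
import time

def read_accelerometer():
    return {"x": 0.01, "y": -0.02, "z": 0.98}  # g-forces (dummy values)

def read_temperature():
    return 36.6  # degrees Celsius (dummy value)

def sample():
    """One timestamped reading, serialized for WiFi/Bluetooth transport."""
    payload = {
        "ts": time.time(),
        "accel": read_accelerometer(),
        "temp_c": read_temperature(),
    }
    return json.dumps(payload)

message = sample()
print(message)
```

On a real device, a loop would push each serialized sample to a gateway or cloud endpoint; the transport itself is out of scope for this sketch.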


This is very thought-provoking for anyone interested in wearables and the data they produce…particularly if you recently attended the Working Group meeting for the President’s Precision Medicine Initiative, as I did on July 27 and 28. The Working Group is tasked with making recommendations to the President on what data should be recorded and made available for analysis in a national research cohort of one million patients to support the advancement of precision medicine. The topic of this working group session was “Mobile and Personal Technologies in Precision Medicine.”


The discussion covered a wide range of topics around the potential value of data from wearables, along with potential challenges and risks. Interesting use cases ranged from the measurement of environmental health factors to identification of stress and stress-relieving activities in college students. Of course, many challenges cropped up, and the question of whether a limited set of devices would be included in the initiative or whether the million patient cohort would be “BYOD” was left unresolved until the final report.

Dr. Francis Collins, the Director of the NIH, suggested that the NIH use some of its “prize-granting” funds to hold a bakeoff of wearable devices to decide what might be included in the design of the Million Patient Cohort.

After talking to Andrew about his Edison prototyping project, I became enamored with the idea of an army of device prototypers using his designs to build new and interesting wearables that might just end up as part of the Million Patient Cohort.


And as a data scientist, regardless of which devices are included, the thought of all the streaming data from one million patients gives me great optimism for the future of precision medicine in America.


What questions about wearables do you have?

Read more >

Scaling Software-Defined Storage in Retail

Recently I was afforded the opportunity to collaborate with the Kroger Co. on a case study regarding their usage of VMware and their Virtual SAN product.  Having spent many a day and night enjoying 4×4 subs and Krunchers Jalapeño (no more wimpy) chips during my days at Virginia Tech courtesy of the local Kroger supermarket, I was both nostalgic and intrigued.  Couple that with the fact that I am responsible for qualifying the Intel® Solid State Drives (SSDs) for use in Virtual SAN, it was really a no-brainer to participate.


One of the many eye-openers I learned from this experience was just how large an operation the Kroger Co. runs.  They are the largest grocery retailer in the United States, with over 400,000 employees spanning over 3,000 locations.  The company has been around since 1883, and had 2014 sales in excess of $108 billion. I spent roughly ten years of my career here at Intel in IT, and this was a great opportunity to gain insight, commiserate, and compare notes with another large company that surely has challenges I can relate to.

As it turns out, unsurprisingly, the Kroger Co. is heavily invested in virtualization, with tens of thousands of virtual machines deployed and internal cloud customers numbering in the thousands.  Their virtualized environment is powering critical lines of business, including manufacturing & distribution, pharmacies, and customer loyalty programs.

Managing the storage for this virtualized environment using a traditional storage architecture, with centralized storage backing the compute clusters, presented issues at this scale. To achieve desired performance targets, Kroger had to resort to all-flash fiber channel SAN implementations rather than hybrid (tiered) SAN implementations.  To be clear, these functioned, but were in direct opposition to the goal of reducing capital costs. This led Kroger to begin looking at Software-Defined Storage solutions as an alternative.  The tenets of their desired storage implementation revolved around the ability to scale quickly, provide consistent QoS and performance on par with existing SAN-based solutions, and reduce cost.  No small order, to be sure.

All-Flash Fiber Channel SAN performance, at about 1/5th the cost

Kroger evaluated multiple technologies, and eventually settled on Virtual SAN from VMware running in an all-flash configuration.  Here is where the other eye-opening findings came to light.  Kroger found that their building block solution for Virtual SAN, which includes the Intel® SSD Data Center Family for NVMe, offered IOPS performance within 8% of all-flash fiber channel SAN at about 1/5th the expense, illustrated by the chart below.

IOPS, Cost, and Data Center Footprint Comparison


This same solution also offered latency characteristics within 3% of all-flash fiber channel SAN, while using approximately 1/10th the footprint in their data centers.

Latency, Cost, and Data Center Footprint Comparison


Key Takeaways

For the Kroger Co., the benefits of their Virtual SAN-based solution are clear:

  • Hyper-converged: Virtual SAN yields a roughly 10x reduction in footprint
  • Performance: minimal delta of 8% compared to all-flash fiber channel SAN
  • Cost: approximately 20% of the alternative all-flash fiber channel SAN solution
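The takeaways above can be expressed as a simple normalized comparison against the all-flash fiber channel SAN baseline. The relative figures are the approximate numbers quoted in this post, not independent benchmarks.

```python
# Normalized comparison of the two options, baseline = all-flash FC SAN.
# Relative figures are the approximate values quoted in the post.
baseline = {"iops": 1.00, "cost": 1.00, "footprint": 1.00}
virtual_san = {
    "iops": 0.92,       # within 8% of the SAN's IOPS
    "cost": 0.20,       # about 1/5th the expense
    "footprint": 0.10,  # about 1/10th the data center footprint
}

for metric in baseline:
    ratio = virtual_san[metric] / baseline[metric]
    print(f"{metric:>9}: {ratio:.0%} of the all-flash FC SAN")
```

Framed this way, the trade is clear: give up roughly 8% of peak IOPS in exchange for roughly 80% lower cost and 90% less rack space.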


I wish we had solutions like this on the table during my days in IT; these are exciting times to witness.

Read more >

Students: Parallel Programming Contest – REALLY Great Prizes

We are running a contest for students with BIG prizes (read the exact rules for eligibility, etc.) through October 29, 2015.  The winners will have optimized the brain-simulation code we supply to registrants, and will be announced on November 15, … Read more >

The post Students: Parallel Programming Contest – REALLY Great Prizes appeared first on Intel Software and Services.

Read more >