Empowering Health: Are Avatars, Virtual Reality and Robots the future of Nursing?


I was delighted to be invited to speak at Microsoft’s Empowering Health event in Brussels, Belgium recently, which brought together some 200 thought-leaders from across the world to discuss health IT issues in a ‘Mobile First and Cloud First World’.


I was looking forward to hearing how some of the more progressive countries in Europe are utilising technology to deliver more personal, productive and predictive health to their citizens, so it was pleasing to hear examples from the Netherlands around patient portals and from Sweden, where virtual care rooms are helping to deliver a more efficient healthcare system through patient self-diagnosis. From these very real examples of today to discussions around the future of machine learning and robotics, the narratives were underpinned by the absolute need for clinical staff to have input into the technology solutions they will be asked to use, as early as possible.


Data: One Size Does Not Fit All

Some great statistics from Tom Lawry, Director of Worldwide Health Analytics at Microsoft, generated a real buzz in the room. Tom started his presentation by stating that ‘we spend a lot of money ONCE people are sick, while most money is spent on small numbers of people who are VERY sick.’ Clearly there are a lot of areas where technology is helping to move the needle from cure to prevention, while advances from all-in-one-day genome sequencing to personalised medicine are something we are working towards here at Intel as we look ahead to 2020. I was interested to hear examples from across the world of how healthcare providers are dealing with increasingly large amounts of data. Within the European Union there are very different takes on what data is classed as secure and what is not. For providers and vendors, this requires a keen eye on the latest legislation, but it’s clear that one size does not necessarily fit all.


Digital Education of Nurses

The breakout nursing session brought together a dedicated group of nurses with a real interest in how technology can, and will, help make nursing even better. We kicked off by discussing what level of digital education nurses have today, and what they need to equip them for the future. The consensus was that more needs to be done in helping nurses be prepared for the technology they’ll be asked to use, in essence making technology a core part of the nursing curriculum from day one.


The move towards distributed care generated some fantastic thoughts on how technology can help nurses working in the community – read my recent blog for more thoughts on that. We all agreed that access to healthcare is changing; it has to if we are to meet the demands of an ageing population. For example, millennials don’t necessarily think they need to see a medical practitioner in a hospital setting or a doctor’s surgery; they are happy to call a clinician on the phone or sit in a kiosk for a virtual consultation, the priority being quick and easy access.


Nurses Actively Championing Technology

I was particularly impressed by a new app showcased by Odense University Hospital called Talk2Care – in short, it enables patients in ICU to ‘talk’ to nurses using an icon-based dashboard on a mobile device. This new way of communicating, for patients who in some cases could previously only nod or shake their head, has been invaluable not only for nurses but for the patient’s family too. What really pleased me was that nurses were actively championing this technology, encouraging patients to utilise it to help nurses deliver a better care experience.


We closed with thoughts on how taking care into the community was being revolutionized by technology. We’ve got some great examples of the role Intel is playing in the advance towards more distributed care, from the use of Intel IoT Gateways to help the elderly live more independent lives at home through to the KU Wellness car which empowers nurses to take advanced care into the community using mobile devices.


Virtual Reality Nursing

After a short break we returned to the main auditorium, where I was pleased to be on stage with nurses from across the world. The future of the workforce was discussed in some detail, particularly how the nursing and wider healthcare community will manage the anticipated global shortage of nurses. Technology will go some way to alleviating this shortfall through improved workflows, but I like to think in a more visionary way: perhaps we will see the use of avatars, virtual reality and (thinking of discussions earlier in the day) robots. What’s clear is that nursing is changing in response to the move to distributed care; we need to skill not only nurses but other caregivers too, such as families, to make better use of the technology that is available today and tomorrow.



Advice to a Network Admin Seeking a Career in Cybersecurity

Even after nearly 25 years, I continue to be excited and passionate about security.  I enjoy discussing my experiences, opinions, and crazy ideas with the community.  I often respond to questions and comments on my blogs and on LinkedIn, as it is a great platform to share ideas and communicate with others in the industry.  Recently I responded to a Network Admin seeking a career in cybersecurity.  With their permission, I thought I would share a bit of the discussion, as it might be helpful to others.


Mr. Rosenquist – I have been in the Information Technology field as a network administrator for some 16 years and am looking to get into the Cyber Security field, but the opportunity for someone who lacks experience in this specialized field is quite difficult to come by. I too recognize the importance of education and believe it is critical to optimum performance in your field. What potential solutions would you recommend to break into this field?  Thank you for your time and expertise.


Glad to hear you want to join the ranks of cybersecurity professionals! The industry needs people like you. You have a number of things going for you. The market is hungry for talent and network administration is a great background for several areas of cybersecurity.


Depending on what you want to do, you can travel down several different paths. If you want to stay in the networking aspects, I would recommend either a certification from SANS (or another reputable training organization with recognizable certifications) or diving into becoming a certified expert for a particular firewall/gateway/VPN product (e.g., Palo Alto Networks, Cisco, Check Point, Intel/McAfee). The former will give you the necessary network security credentials to work on architecture, configuration, analysis, operations, policy generation, audit, and incident response. The latter are in very high demand and specialize in the deployment, configuration, operation, and maintenance of these specific products.  If you want to throw caution to the wind and explore areas outside of your networking experience, you can go for a university degree and/or security credentials. Both are better, but may not be necessary.


I recommend you work backwards. Find job postings for your ‘dream job’ and see what the requirements are. Make inquiries about preferred background and experience. This should give you insight into how best to fill out your academic foundation.  Hope this helps. – Matthew Rosenquist


The cybersecurity industry is in tremendous need of more people with greater diversity to fill the growing number of open positions.  Recent college graduates, new to the workforce, will play a role in satiating the need, but there remain significant opportunities across a wide range of roles.  Experienced professionals with a technical, investigative, audit, program management, military, and analysis background can pivot into the cybersecurity domain with reasonable effort.  This can be a great prospect for people who are seeking new challenges, very competitive compensation, and excellent growth paths.  The world needs people from a wide range of backgrounds, experiences, and skills to be a part of the next generation of cybersecurity professionals.



An open question to my peers; what advice would you give to workers in adjacent fields who are interested in the opportunities of cybersecurity?



Interested in more?  Follow me on Twitter (@Matt_Rosenquist) and LinkedIn to hear insights and what is going on in cybersecurity.


Momentum Builds for the ‘Cloudification’ of Storage



Here’s a prediction for 2016: The year ahead will bring the increasing “cloudification” of enterprise storage. And so will the years that follow—because cloud storage models offer the best hope for the enterprise to deal with unbounded data growth in a cost-effective manner.


In the context of storage, cloudification refers to the disaggregation of applications from the underlying storage infrastructure. Storage arrays that previously operated as silos dedicated to particular applications are treated as a single pool of virtualized storage that can be allocated to any application, anywhere, at any time, all in a cloud-like manner. Basically, cloudification takes today’s storage silos and turns them on their sides.


There are many benefits to this new approach that pools storage resources. In lots of ways, those benefits are similar to the benefits delivered by pools of virtualized servers and virtualized networking resources. For starters, cloudification of storage enables greater IT agility and easier management, because storage resources can now be allocated and managed via a central console. This eliminates the need to coordinate the work of teams of people to configure storage systems in order to deploy or scale an application. What used to take days or weeks can now be done in minutes.


And then there are the all-important financial benefits. A cloud approach to storage can greatly increase the utilization of the underlying storage infrastructure, deferring capital outlays and reducing operational costs.


This increased utilization becomes all the more important with ongoing data growth. The old model of continually adding storage arrays to keep pace with data growth and new data retention requirements is no longer sustainable. The costs are simply too high for all those new storage arrays and the data center floor space that they consume. We now have to do more to reclaim the value of the resources we already have in place.


Cloudification isn’t a new concept, of course. The giants of the cloud world—such as Google, Facebook, and Amazon Web Services—have taken this approach from their earliest days. It is one of their keys to delivering high-performance data services at a huge scale and a relatively low cost. What is new is the introduction of cloud storage in enterprise environments. As I noted in my blog on non-volatile memory technologies, today’s cloud service providers are, in effect, showing enterprises the path to more efficient data centers and increased IT agility.


Many vendors are stepping up to help enterprises make the move to on-premises cloud-style storage. Embodiments of the cloudification concept include Google’s GFS and its successor Colossus, the Hadoop Distributed File System (HDFS) used at Facebook, Microsoft’s Windows Azure Storage (WAS), Red Hat’s Ceph/RADOS (and GlusterFS), and Nutanix’s Distributed File System (NDFS), among many others.


The Technical View


At this point, I will walk through the architecture of a cloud storage environment, for the benefit of those who want the more technical view.


Regardless of the scale or vendor, most of the implementations share the same storage system architecture. That architecture has three main components: a name service, a two-tiered storage service, and a replicated log service. The architectural drill-down looks like this:


The “name service” is a directory of all the volume instances currently being managed. Volumes are logical data containers, each with a unique name—in other words a namespace of named-objects. A user of storage services attaches to their volume via a directory lookup that resolves the name to the actual data container.


This data container actually resides in a two-tier storage service. The frontend tier is optimized for memory. All requests submitted by end-users are handled by this tier: metadata lookups as well as servicing read requests out of cache and appending write operations to the log.


The backend tier of the storage service provides a device-based, stable store. The tier is composed of a set of device pools, each pool providing a different class of service. Simplistically, one can imagine this backend tier supporting two device pools. One pool provides high performance but has a relatively small amount of capacity. The second pool provides reduced performance but a huge amount of capacity.


Finally, it is important to tease out the frontend tier’s log facility as a distinct, third component, because this facility is key to supporting performant write requests while satisfying data availability and durability requirements.
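To make the three components above concrete, here is a minimal, purely illustrative Python sketch: a name service resolving volume names, a memory-optimized frontend with an append log, and a backend of two device pools with differing classes of service. None of the class or method names correspond to any actual product API.

```python
class NameService:
    """Directory of all volume instances currently being managed."""
    def __init__(self):
        self._volumes = {}

    def create_volume(self, name, container):
        self._volumes[name] = container

    def lookup(self, name):
        # Resolve a volume name to its actual data container.
        return self._volumes[name]


class TwoTierStore:
    """Memory-optimized frontend tier over two backend device pools."""
    def __init__(self):
        self.cache = {}       # frontend: serves reads out of memory
        self.log = []         # log facility: absorbs appended writes
        self.fast_pool = {}   # backend pool: high performance, small capacity
        self.bulk_pool = {}   # backend pool: lower performance, huge capacity

    def write(self, key, value):
        # Writes are appended to the log for durability, then cached;
        # a background process would later destage them to a device pool.
        self.log.append((key, value))
        self.cache[key] = value

    def read(self, key):
        # Serve from cache when possible, else fall back to the pools.
        if key in self.cache:
            return self.cache[key]
        if key in self.fast_pool:
            return self.fast_pool[key]
        return self.bulk_pool.get(key)


ns = NameService()
ns.create_volume("analytics-vol", TwoTierStore())
vol = ns.lookup("analytics-vol")   # directory lookup, as described above
vol.write("block-0", b"payload")
print(vol.read("block-0"))         # b'payload'
```

A real implementation would of course replicate the log across nodes and shard the name service, but the request flow follows this shape.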


In the weeks ahead, I will take up additional aspects of the cloudification of storage. In the meantime, you can learn about things Intel is doing to enable this new approach to storage at


Powerless or Power-Less?

DCD Magazine, contributed article


While every facet of data center management is changing at a rapid pace, operating budgets rarely keep up. Data volume doubles every 18 months and applications every two years; in contrast, operating budgets take eight years to double (IDC Directions, 2014).


IT has always been asked to do more with less, but the dynamic nature of the data center has been accelerating in recent years. Smart devices, big data, virtualization, and the cloud continue to change service delivery models and elevate the importance of flexibility, elasticity, and scalability.


Every facet of data center management, as a result, has been complicated by an incredibly rapid rate of change. Thousands of devices move on and off intranets. Fluid pools of compute resources are automatically allocated. Does this ultra-dynamic environment make it impossible for IT and facilities management teams to identify under-utilized and over-stressed resources?


If so, energy consumption in the data center will continue to skyrocket. And data centers already consume 10 percent of all energy produced around the globe, according to recent Natural Resources Defense Council reports.


Fortunately, IT is far from powerless even within these challenging data center conditions.


Discovering some secret weapons


Ironically, in today’s data centers consisting of software-defined resources, the secret weapon for curbing energy costs lies in the hardware. Rack and blade servers, switches, power distribution units, and many other data center devices provide a wealth of power and temperature information during operation. Data center scale and the diversity of the hardware make it too cumbersome to manually collect and apply this information, which has led to a growing ecosystem of energy management solution providers.


Data center managers, as a result, have many choices today. They can take advantage of a management console that integrates energy management, have an integrator add energy management middleware to an existing management console, or independently deploy an energy management middleware solution to gain the necessary capabilities.


Regardless of the deployment option, a holistic energy management solution allows IT and facilities teams to view, log, and analyze energy and temperature behaviors throughout the data center. Automatically collected and aggregated power and thermal data can drive graphical maps of each room in a data center, and data can be analyzed to identify trends and understand workloads and other variables.


Visibility and the ability to log energy information equip data center managers to answer basic questions about consumption, and to make better decisions relating to data center planning and optimization efforts.


Best-in-class energy management solutions take optimization to a higher level by combining automated monitoring and logging with real-time control capabilities. For example, thresholds can be set to cap power for certain servers or racks at appropriate times or when conditions warrant. Servers that are idle for longer than a specified time can be put into power-conserving sleep modes. Power can be allocated based on business priorities, or to extend the life of back-up power during an outage. Server clock rates can even be adjusted dynamically to lower power consumption without negatively impacting service levels or application performance.
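As a rough illustration of the control capabilities just described, the sketch below applies a power cap and an idle-sleep policy to a hypothetical rack. The `Server` class, the thresholds, and the assumed 25 percent savings from capping are all invented for the example; a real energy management solution would expose similar telemetry and knobs through its own API.

```python
from dataclasses import dataclass

RACK_POWER_CAP_W = 8000       # cap applied when total rack draw exceeds this
IDLE_SLEEP_THRESHOLD_S = 900  # idle time before a server is put to sleep

@dataclass
class Server:
    name: str
    power_w: float   # current measured power draw
    idle_s: float    # seconds the server has been idle
    capped: bool = False
    asleep: bool = False

def apply_policy(rack):
    """Cap the highest-drawing servers until the rack is under its limit,
    then put long-idle servers into a power-conserving sleep mode."""
    total = sum(s.power_w for s in rack)
    actions = []
    if total > RACK_POWER_CAP_W:
        for s in sorted(rack, key=lambda s: s.power_w, reverse=True):
            if total <= RACK_POWER_CAP_W:
                break
            saved = s.power_w * 0.25   # assume capping sheds ~25% of draw
            s.power_w -= saved
            s.capped = True
            total -= saved
            actions.append(f"cap {s.name}")
    for s in rack:
        if s.idle_s > IDLE_SLEEP_THRESHOLD_S and not s.asleep:
            s.asleep = True
            actions.append(f"sleep {s.name}")
    return actions

rack = [Server("db1", 4500, 10), Server("web1", 4200, 30),
        Server("batch1", 300, 1200)]
print(apply_policy(rack))   # ['cap db1', 'sleep batch1']
```

In production this loop would run continuously against live power and thermal telemetry, with priorities and thresholds set per business policy.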


Energy-conscious data centers take advantage of these capabilities to meet a broad range of operating objectives including accurate capacity planning, operating cost reduction, extending the life of data center equipment, and compliance with “green” initiatives.


Common uses and proven results


Customer deployments highlight several common motivations, and provide insights in terms of the types and scale of results that can be achieved with a holistic energy management solution and associated best practices.


  • Power monitoring. Identifying and understanding peak periods of power use motivate many companies to introduce an energy management solution. The insights gained have allowed customers to reduce usage by more than 15 percent during peak hours, and to reduce monthly data center utility bills even as demand for power during peak periods goes up. Power monitoring is also being applied to accurately charge co-location and other service users.

  • Increasing rack densities. Floor space is another limiting factor for scaling up many data centers. Without real-time information, static provisioning has traditionally relied upon power supply ratings or derated levels based on lab measurements. Real-time power monitoring typically proves that the actual power draw comes in much lower. With the addition of monitoring and power capping, data centers can more aggressively provision racks and drive up densities by 60 to more than 80 percent within the same power envelope.

  • Identifying idle or under-used servers. “Ghost” servers draw as much as half of the power used during peak workloads. Energy management solutions have shown that 10 to 15 percent of servers fall into this category at any point in time, and help data center managers better consolidate and virtualize to avoid this wasted energy and space.

  • Early identification of potential failures. Besides monitoring and automatically generating alerts for dangerous thermal hot spots, power monitoring and controls can extend UPS uptime by up to 15 percent and prolong business continuity by up to 25 percent during power outages.

  • Advanced thermal control. Real-time thermal data collection can drive intuitive heat maps of the data center without adding expensive thermal sensors. Thermal maps can be used to dramatically improve oversight and fine-grained monitoring (from floor level to device level). The maps also improve capacity planning, and help avoid under- and over-cooling. With the improved visibility and threshold setting, data center managers can also confidently increase ambient operating temperatures. Every one-degree increase translates to 5 to 10 percent savings in cooling costs.

  • Balancing power and performance. Trading off raw processor speed for smarter processor design has allowed data centers to decrease power by 15 to 25 percent with little or no impact on performance.
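To see why real-time measurement changes the provisioning math in the rack-density bullet above, consider this back-of-the-envelope calculation; all figures are illustrative, not measurements from any real deployment.

```python
# Rack provisioning: nameplate rating vs. measured draw with power capping.
rack_budget_w = 10000     # power available to the rack
nameplate_w = 650         # per-server power supply rating
measured_peak_w = 350     # actual peak draw observed via real-time monitoring
cap_margin = 1.10         # headroom enforced by power capping

# Static provisioning must assume every server draws its nameplate rating.
by_nameplate = rack_budget_w // nameplate_w
# Monitoring plus capping lets us provision against measured draw instead.
by_measurement = int(rack_budget_w // (measured_peak_w * cap_margin))

print(by_nameplate, by_measurement)   # 15 25
density_gain = (by_measurement - by_nameplate) / by_nameplate
print(f"{density_gain:.0%} more servers in the same power envelope")
```

With these example numbers the same rack holds roughly two-thirds more servers, which is consistent with the 60 to 80 percent density gains cited above.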

Time to get serious about power


Bottom line, data center hardware still matters. The constantly evolving software approaches for mapping resources to applications and services call for real-time, fine-grained monitoring of the hardware. Energy management solutions make it possible to introduce this monitoring, along with power and thermal knobs that put IT and facilities in control of energy resources that already account for the largest line item on the operating budget.


Software and middleware solutions that allow data center managers to keep their eyes on the hardware and the environmental conditions let automation move ahead full speed, safely, and affordably – without skyrocketing utility bills. Power-aware VM migration and job scheduling should be the standard practice in today’s power-hungry data centers.


Intel’s Secret Cybersecurity Advantage

Intel Corporation has a secret advantage to protect itself from cyber threats: a world-class Information Technology (IT) shop. The 2015-2016 Intel IT Annual Performance Report showcases the depth of security, operational efficiency, and innovation needed to deliver robust IT services for business value.


IT is typically a thankless job, relegated to the back office, data centers, network rooms, and call-center cubicles.  Although not a profit center, Intel IT has an important role in the daily business of a global technology innovation and manufacturing giant.  Cybersecurity is an integral part of IT’s job to keep Intel running. 




I spent many years working within Intel IT and security operations.  I can say with confidence it is one of the best IT shops in the industry.  The proof is in the report.


Intel IT has produced an annual report for many years to highlight their efforts to enable growth of the business, improve productivity of employees, manage costs, and protect the confidentiality of data and availability of critical systems.


This year is no different.  Intel has a massive network and digital net-worth to protect.  Intel presents a big target to attackers, both internal and external, and must defend itself with industry best practices.  In many cases it is involved in the development and proof-of-concept testing with the Intel Security solutions teams to vet products and request needed capabilities in response to new threats. 


Here is a quick rundown of how Intel IT security stands guard.  Security supports over 100,000 employees in 72 countries, at 153 sites.  They are charged with protecting networks, clouds, servers, storage, PCs, tablets, phones, and all the applications connecting them.  They must protect very sensitive silicon manufacturing, assembly, and test facilities, where robots, chemicals, and people make the magic of computer chips a reality.


Every day about 13 billion events get logged in the security tools.  These are critical to detect threats and attacks.  In the past year, 225 million pieces of malware were blocked from infecting Intel’s networks and computers.  Keeping systems patched and squashing vulnerabilities is a huge and constant job.  Over 12 million security events were addressed to close system vulnerabilities.  Intel IT security systems, people, and management are vigilant and focused in their role.  More importantly, they and the fellow employees they serve understand the value of their contribution, security policies, and continual awareness training.  Security is a tremendously big job, but when management, employees and security professionals work as a team, incredible results are possible.


The 2015-2016 Intel IT Annual Performance Report can be downloaded for free here:



Interested in more?  Follow me on Twitter (@Matt_Rosenquist) and LinkedIn to hear insights and what is going on in cybersecurity.


New Meaningful Use 3 Requirement: Inclusion of Patient Generated Health Data

In December, Centers for Medicare & Medicaid Services (CMS) announced final rules of Meaningful Use 3 (MU3)—the third and final iteration of the Meaningful Use Program. The principal goal of this incentive program is to ensure that electronic health records are being used by providers in a way that improves quality of care (e.g., used for e-prescribing, or for submission of clinical quality measures). MU requires providers to meet these criteria in order to receive incentive payments and avoid downward reimbursement adjustments.


As part of MU3, eligible providers will be required to integrate Patient Generated Health Data (PGHD) with clinical data in the EHR for at least 5 percent of the patient population. PGHD includes any data that is generated outside of the clinical setting. Examples include data captured by a device such as a smartphone, or self-reported data (e.g., diet, functional status, emotional well-being) that is manually recorded by the patient. The patient both captures and transfers that data to the provider.


Inclusion of PGHD in this third and final phase of Meaningful Use is exciting for several reasons.

First, this new rule has potential to incentivize providers to invest in the technology and infrastructure (e.g., data storage and security) that will support the integration and use of this data, which to date has not been systematically incorporated into routine patient care.


Second, this new rule coincides with the rapidly growing wearable device market and consumers’ use of these devices, which allow patients to capture their own health data outside of the clinic or hospital setting. Integrating these data points with clinical data, and allowing providers to use them at the point of care, will contribute to patient engagement, patient activation, and self-management.


Third, at the policy level, this is likely to drive interoperability and data security standards, which could have broader and positive implications for other types of healthcare data and analytics.


How should providers prepare?

This new ruling will go into effect in 2018, giving providers time to make changes to current EMRs and technology to support the transfer, use, and storage of this data.


At Intel, we are working to advance these goals through data security efforts, big data analytics, data storage capabilities, and wearable devices that promote and support PGHD.


One such initiative within Intel Health & Life Sciences involves Big Cloud Analytics and its COVALENCE Health Analytics Platform. The COVALENCE Health Analytics Platform is powered by Intel Xeon processor-based servers in the cloud, which ensures a secure, reliable, and scalable infrastructure. Big Cloud Analytics utilizes the Basis Peak watch, which provides 24×7 real-time heart rate monitoring, and supplies metrics for sleep patterns, steps taken, skin temperature, and perspiration. It collects readings on 50 biometric data points every 60 seconds and syncs the data securely with the Basis Cloud.  This allows insurance providers, healthcare institutions, and employers to securely use wearable device data to engage patients with event-triggered personalized messaging.


Biometric sensor data gathered from the device is also transmitted to the cloud or on-premise data storage and aggregated in the COVALENCE Health Analytics Platform. This platform transforms data into business intelligence and predictive analytics. It then generates wellness scores, bio-identity scores, and other metrics. Insights based on analysis of the data points and trends provide an early indication of potential health issues or lack of progress toward health goals.
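As a purely hypothetical illustration of how raw biometric readings might be distilled into a single wellness score, consider the toy model below. The metrics, targets, weights, and formula are invented for this sketch; the actual COVALENCE scoring models are proprietary.

```python
def wellness_score(readings, targets, weights):
    """Score from 0 to 100: weighted closeness of each metric to its target."""
    score = 0.0
    for metric, weight in weights.items():
        actual, target = readings[metric], targets[metric]
        # Closeness is 1.0 at the target, falling toward 0 as the reading
        # deviates by a full target's worth.
        closeness = max(0.0, 1.0 - abs(actual - target) / target)
        score += weight * closeness
    return round(100 * score / sum(weights.values()), 1)

# Example readings from a single sync window (invented values).
readings = {"resting_hr": 72, "sleep_hours": 6.0, "steps": 8000}
targets  = {"resting_hr": 60, "sleep_hours": 8.0, "steps": 10000}
weights  = {"resting_hr": 2.0, "sleep_hours": 1.5, "steps": 1.0}

print(wellness_score(readings, targets, weights))   # 78.3
```

Tracking such a score over time, rather than any single reading, is what allows trend-based early warnings of the kind described above.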


PGHD as part of routine care: opportunities and challenges

While PGHD will substantially increase the number of data points that can inform healthcare and lead to new insights, we recognize that operationalizing the transmission and use of PGHD will not happen instantly, nor effortlessly. Many questions remain as to how this data will be most effectively used by providers and patients. For example, what is relevant data?  How should providers communicate this to patients so that the appropriate data can be collected and transferred? How much data will providers want and need to obtain in order to make this data useful for patient care? How often will providers want to see this data? How might this influx of data affect staff or clinic workflows? From a user experience perspective, how will this data be best displayed so that providers and patients alike can act upon it? Perhaps further research, particularly ethnographic research that takes into account both the clinician and patient perspective, is needed if we are to use this data in a way that translates to better patient outcomes.


Does Your IT Enterprise Have an Internet of Things (IoT) Strategy?

The Internet of Things (IoT) – the technology world is abuzz about it. Today, more objects are connected to the Internet than there are people in the world, and the number of IoT objects is expected to grow to between 25 and 50 billion by 2020, depending on which analyst you read. Security cameras, household and business appliances, lighting fixtures, climate control systems, water treatment plants, cars, traffic lights, fetal heart monitors, and power plant controls, just to name a few—the opportunities to collect and use data are seemingly endless.


What does all this mean for enterprise IT?


As the Industry Engagement Manager in Intel IT EMEA (Europe, Middle East, and Africa), I’m working with my colleagues to implement IoT solutions that align to our IT IoT strategy.


Why does IT need an IoT strategy, you ask? Well, considering the growth of IoT solutions over time, business groups in many companies are looking for their own IoT point solutions to solve their problems. They may not come to IT for the solution. Many may implement differing solutions, such as sensors, gateways, protocols, and platforms. This is not the most cost-effective or efficient approach for Intel, for many reasons. Who will manage and support these point solutions over time? Who will verify that they are secure? Can the network handle the new data volume growth? If a solution “breaks,” will the business groups ask IT to fix it? If so, IT may inherit solutions that do not match the enterprise architecture and may therefore need to be redesigned, replaced, or both. These are all important questions, and there are many other reasons to have an IoT strategy, which I will discuss in a future blog. As IT professionals, we need to have a seat at the table when IoT solutions are being defined, to ensure the business gets it right the first time.


We recently published a white paper, “Integrating IoT Sensor Technology into the Enterprise,” describing the best practices we have developed relating to IoT. The paper shares the 14 best practices we use that enable us to successfully launch an IoT project. We hope that by sharing these best practices, we can help others to also successfully implement IoT solutions in their enterprise.


What we’ve learned is that once you’ve defined the process, IoT projects can be implemented in an efficient and streamlined fashion. Each step does require some effort to define its requirements, but once the steps are defined, they can be used in a repeatable manner. To summarize our defined best practices, here’s the flow:


Blog-Roadmap.pngPre-Explore the Technology and Concept

  • Best Practice #1: Build an IoT Team
  • Best Practice #2: Define the IoT System


Explore the Project Feasibility and Value

  • Best Practice #3: Determine the Business Value
  • Best Practice #4: Acquire Stakeholder Agreement and Funding


Plan and Scope the Project

  • Best Practice #5: Classify the Sensor Data
  • Best Practice #6: Design the Network Infrastructure and Choose IoT Devices
  • Best Practice #7: Review Environmental Conditions
  • Best Practice #8: Define Space and Electrical Power Needs


Develop and Deploy the IoT System

  • Best Practice #9: Secure the IoT Devices and Data
  • Best Practice #10: Align with Privacy and Corporate Governance Policies
  • Best Practice #11: Design for Scalability
  • Best Practice #12: Integrate and Manage the IoT Devices
  • Best Practice #13: Establish a Support Model
  • Best Practice #14: Plan the Resources


Using these best practices, we’ve done many IoT proofs of concept across our enterprise, using components of the Intel® IoT Platform and the Intel® Intelligent Systems Framework. Over time we are deploying more elements of the Intel IoT Platform in our environment. We currently use many aspects of the platform, as do other companies, which are turning to Intel for advice on how best to implement their IoT solutions. For example, Siemens has adopted Intel’s IoT hardware and software stack for its smart parking initiative.


Our mission is to standardize on an end-to-end Intel IoT Platform-based solution that meets the wide and varied IoT needs of Intel, a global organization. Intel IT wants to transform the business by providing the IoT “plumbing” – that is, the platform building blocks – that enable Intel’s business groups to easily deploy IoT solutions when they need them.


Examples of IoT technology enabled by Intel include the Intel® Quark™ microcontroller D2000, Intel Gateways featuring Intel® Quark™ or Intel® Atom™ processors, and Intel® Security (McAfee) solutions integrated within our Wind River OS (Linux* or RTOS-VxWorks*). Wind River, which is a wholly owned subsidiary of Intel, also has an edge management solution for centrally managing edge devices, known as Helix* Device Cloud.


The IoT projects that we’ve done so far have shown great promise and have resulted in significant ROI. Fully integrating IoT into the enterprise isn’t an overnight project; it’s a continual journey and a significant change in how business is done. But putting the building blocks in place now will make the journey shorter and easier, and will enable Intel to fully realize the business value of IoT. You can learn more about our other IoT projects by reviewing our recently published 2015-2016 Intel IT Annual Performance Report.


I’d be interested to hear what other enterprise IT professionals are doing with IoT. Please share your thoughts and experiences by leaving a comment below – I look forward to the conversation!

Read more >

Criminals are Getting Excited for Tax Filing Season

Cyber criminals are plotting to take advantage of tax season by fraudulently impersonating consumers and scamming Americans.  For the citizens of the United States, tax season is upon us, when we diligently file our annual tax returns with the US Internal Revenue Service (IRS).  The problem is, in this digital age of electronically filed forms, the checks and balances that protect against fraud have not satisfactorily kept pace.


Tax ID Fraud is a Terrible Problem

Cyber criminals are taking advantage of weak identity validation controls to commit tax fraud. Tax identity theft happens when someone files a fake tax return using your personal information, resulting in a refund that goes to them, not you.  They use your name and Social Security Number with fictitious data, such as a different employer and address, to get a tax refund from the IRS.  The IRS, not knowing better, accepts the information and is compelled to issue the refund in a timely manner, else it must pay interest.  So the common practice is to accept the information at face value and issue the refund to the submitter.  Thieves have the funds placed on a pre-paid debit card or obtain a refund check they quickly cash.  If things are later found to be incorrect, the IRS may move to resolve the problem, but in most cases the criminal is long gone.  The real citizen is then left with a rejection notice stating a filing has already taken place when they file their legitimate tax forms.  It can take a very long time to correct the matter, over a year to receive an earned refund, and many frustrating hours navigating the convoluted process.


Attackers are committing a great deal of fraud, and both the IRS and the Federal Trade Commission (FTC) are concerned as the problem swells every year.  Tax or wage ID theft complaints more than doubled, from 109,000 in 2014 to over 221,000 in 2015.  In the US, ID theft is on the rise: the FTC received over 490,000 consumer complaints, a 47% increase over 2014, with the biggest contributor to the rise being tax refund fraud.  The Bureau of Justice Statistics estimates 17.6 million Americans were victims of identity theft in 2014, about 7% of the US population aged 16 or older.


Most of the IRS efforts to date have been around prevention.  For 2016, the IRS and FTC have rolled out consumer education and incident reporting sites.  Tax identity theft, which can include other forms of tax fraud, has been the most common form of identity theft reported to the Federal Trade Commission (FTC) for the past several years.  The IRS prides itself on a quick turnaround for processing electronic filings and issuing a refund, targeting around 10 days.  Within that process is a set of filtering algorithms, improved every year, to identify fraudulent tax submissions.  In 2015 the IRS flagged about 5 million suspicious returns, protecting $11 billion.


Recently, the federal government targeted South Florida, one of the nation’s hot spots for ID fraud, and issued a Geographic Targeting Order (GTO) requiring check-cashing companies to take extra steps to verify customers’ identification before cashing income tax refund checks.  For refund checks over $1,000, customers must provide valid government-issued identification, and the check-cashing company must take a digital picture of the customer and obtain a clear thumbprint for the transaction to proceed.  Extreme measures, to be sure, but targeted specifically at two counties to stem the flow of tax fraud.


Best practices to protect yourself from Tax ID Fraud:

  1. File your taxes as early as possible.  Sadly, it is a race: the first submission, whether yours or a fraudster’s, will likely be the return accepted by the IRS.  So get your tax return in as fast as possible, and file electronically if you don’t already, to expedite the process.
  2. Protect your Social Security Number (SSN).  Nowadays, many different organizations from healthcare to utilities may ask for your SSN.  Challenge them and verify how they will use and protect the information.  For every company that has your SSN, the chance of it being lost in a data breach goes up.  Many companies use the SSN as a unique identifier or as part of a verification process, but are open to using a different number if asked.  So ask!
  3. Check your credit report.  Unusual activity can be an indicator of trouble, so get a copy and look for activity you did not initiate.  By law, these reports are free at least once a year.  Go to the FTC site for more information on ordering your free annual report.
  4. Report ID theft quickly, if it occurs.  Visit the federal government’s one-stop resource to help you report and recover from identity theft, where you can report identity theft and get step-by-step advice, sample letters, and your FTC Identity Theft Affidavit. These resources will help you fix problems caused by the theft.  If your SSN has been compromised, contact the IRS ID Theft Protection Specialized Unit at 800-908-4490.
  5. Consider getting an Identity Protection PIN (form 14039).  An IP PIN is a six-digit number assigned to eligible taxpayers to help prevent the misuse of their SSN on fraudulent federal income tax returns. It is important to note you currently can’t opt out once you get an IP PIN. You must use the IP PIN to confirm your identity on all federal tax returns moving forward.


Be wary of IRS scams

This time of year, IRS scams are rampant.  Sometimes they come in the form of a phone call, while others arrive via email.  Beware of engagements which state you owe money to the IRS and demand immediate payment.  The IRS initiates contact by mail, not by phone or email.  The IRS will never: 1) call to demand immediate payment, nor call about taxes owed without first having mailed you a bill; 2) demand that you pay taxes without giving you the opportunity to question or appeal the amount they say you owe; 3) require you to use a specific payment method for your taxes, such as a prepaid debit card; 4) ask for credit or debit card numbers over the phone; or 5) threaten to bring in local police or other law-enforcement groups to have you arrested for not paying.  If you receive these IRS imposter scams, report them to the FTC and to the Treasury Inspector General for Tax Administration (TIGTA) online or at 800-366-4484.



Be prepared and informed

Tax season is upon us and the criminals are busy with fraud and scams.  Be aware and move to protect your tax return.  Early efforts can save you from a long year of frustration.


More information about tax identity theft is available from the FTC and the IRS.




Interested in more?  Follow me on Twitter (@Matt_Rosenquist) and LinkedIn to hear insights and what is going on in cybersecurity.  To see a full listing of blogs, videos, presentations and other thoughts, go to the collection of My Previous Posts

Read more >

SSD as a system memory? Yes, with ScaleMP’s technology.

I hope you’re as excited as I am, looking forward to seeing the first 3D XPoint™ based products in the market. Intel® Optane® SSDs have already been publicly demonstrated at IDF’15 and Oracle OpenWorld 2015. Not every performance detail has been disclosed (keep in mind these were prototypes), but some key benchmarks, especially small random I/O at low queue depth, were shown. This brings the SSD closer to memory than ever. But how close? Can we actually use it as an extension of system memory? Short answer: yes, we can. There are different ways to do so, ranging from simple swapping/paging, to application changes to use mmap()/dinmap()/SSDAlloc(), to some very special products like the ScaleMP technology discussed below.


You may have heard of ScaleMP from the fame of its SMP virtualization technology, which allows one to turn a cluster of x86 systems into a single SMP system. ScaleMP’s software, vSMP Foundation, runs below the OS layer and handles all the cache coherency and remote I/O over the cluster fabric transparently to the OS.  That allows the OS and applications to utilize the entire cluster’s resources (compute, memory and I/O) for a single application.

Well, ScaleMP has introduced new extensions. Just as its “Memory over Fabric” extension uses algorithms to optimize access patterns and yield magnificent performance, it now also enables you to use NVM as if it were DRAM. As simple and transparent as it sounds! vSMP Foundation requires NVMe-based SSDs and supports only the Intel® SSD Data Center Family for PCIe.



For the examples below, consider a dual-socket system in early 2016. Using commodity DRAM you could reach 768 GB of DRAM (24 x 32GB DDR4 DIMMs). The memory subsystem alone would cost ~$6,000 (32GB DIMMs retail online for about $250 these days).  With ScaleMP we are targeting two key use cases for Storage Class Memory (SCM) used as main system memory:


1. Replacing most of the DRAM – using ScaleMP’s technology, you could reduce DRAM to 128GB, using 4 x 32GB DDR4 DIMMs only, and use 2 x Intel® SSD DC P3700 of 400GB each. The benefits?


a. CAPEX saving as the hybrid memory (DRAM+NVM) cost is lower by at least 33%.

b. An OPEX saving of 96 Watts (and similar savings in cooling)

(20 x 6W per DIMM vs. 2 x 12W per 400GB NVMe)

c. Performance in the 75% ~ 80% of DRAM performance range for demanding workloads such as multi-tenant DBMS running TPC-C.
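A quick back-of-envelope check of use case 1, using the DIMM figures from the text and an assumed early-2016 street price for the 400GB P3700 (the $1,200 figure is my assumption, not from the original analysis):

```python
# Back-of-envelope check of the use-case 1 numbers (early-2016 figures).
DIMM_COST, DIMM_WATTS = 250, 6   # 32GB DDR4 DIMM, per the text
NVME_WATTS = 12                  # per 400GB Intel SSD DC P3700, per the text
NVME_COST = 1200                 # ASSUMPTION: street price of a 400GB P3700

dram_only_cost = 24 * DIMM_COST  # $6,000 for 768GB of DRAM
hybrid_cost = 4 * DIMM_COST + 2 * NVME_COST

capex_saving = 1 - hybrid_cost / dram_only_cost
power_saving = 20 * DIMM_WATTS - 2 * NVME_WATTS  # 20 DIMMs removed, 2 NVMe added

print(f"CAPEX saving: {capex_saving:.0%}")  # exceeds 33% under the assumed SSD price
print(f"Power saving: {power_saving} W")    # 96 W, matching the text
```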


2. Expanding on top of DRAM – using ScaleMP’s technology, you could easily increase the total system memory of the dual-socket server to ~8 TB


a. To reach 8TB of RAM using only DRAM, one would need one of the highest-end servers that can support 192 DIMMs, populated with 128 DIMMs of 32GB and 64 DIMMs of 64GB.  Such servers are power-hungry and require lots of rack space.

The alternative, using the dual-socket system described above, would simply require adding 4 NVMe devices of 2TB each, saving over 50% of the memory cost and rack space.

b. On the OPEX side, the difference is dazzling.  A high-end system would require 1,152W just for its 192 DIMMs, and the alternative would require ~ 75% less power.  I’ll skip describing the additional advantage of improved server density and datacenter standardization.

c. This setup allows the user to run 10x the number of memory demanding workloads on a single server, with the overall throughput being marginally affected.

d. This allows the user to run massive in-memory DBMS in the most economical manner.
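The OPEX claim in use case 2 can be sanity-checked the same way; the per-DIMM wattage comes from the text, while the 24-DIMM hybrid configuration and ~25 W per 2TB NVMe device are my assumptions:

```python
# Rough OPEX comparison for the ~8TB configurations (DIMM power per the text).
DIMM_WATTS = 6
high_end_dimm_power = 192 * DIMM_WATTS  # 1,152 W, matching the text

# ASSUMPTIONS: the hybrid box keeps 24 DIMMs (768GB DRAM) and adds
# 4 x 2TB NVMe devices at ~25 W each.
hybrid_power = 24 * DIMM_WATTS + 4 * 25  # 244 W

reduction = 1 - hybrid_power / high_end_dimm_power
print(f"{reduction:.0%} less memory power")  # roughly the "~75% less" of the text
```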


By this point, I am sure you are wondering: “the $$$ savings look great, but what about performance?”.  Well, performance test results using Intel® SSD DC P3700 are fresh from the oven.  First, some details of the benchmark and configurations used:

The selected benchmark was an OLTP load: 5 instances of the MySQL DBMS (Percona distribution) concurrently running the TPC-C benchmark, each instance using 25 warehouses with 128 connections, totaling 330GB of memory (all data loaded into main memory) plus 160GB of buffer cache.

• Warmup – TPC-C runs for a period of 6,000 seconds.

• Measurement – TPC-C runs for a period of 7,200 seconds.


The hardware used was a dual-socket E5-v3 system, with one of two configurations:

• DRAM-only: 512 GB RAM (DDR4) – baseline server configuration (no ScaleMP software used for this setup)

• Hybrid DRAM-NVM: using same server, but keeping only 64GB RAM (DDR4), and adding 2 x Intel DC P3700 NVMe SSDs to provide the missing 448GB to the system memory.  ScaleMP’s software was used to make the system look the same as the above to the OS.


When running the Linux command ‘free’, the result was the same on both configurations (see below).  Clearly the ScaleMP software did its job, hiding from the OS the fact that it is using a hybrid DRAM-NVM memory subsystem.

[root@s2600wt-0 ~]# free -h

              total        used        free      shared  buff/cache   available

Mem:           503G        3.4G        316G        9.6M        184G        499G



Now, for the benchmark results.  We summed the results of the 5 instances of TPC-C, which are measured in tpmC:

• For the “DRAM-only” configuration we got 217,757

• For the hybrid DRAM-NVM we got: 166,782


In other words, the Intel® SSD DC P3700 used as a memory replacement reached 75%~80% of DRAM performance (76.6% to be precise)!  Keep in mind that the number may vary from one application to another, but TPC-C is representative of a basic datacenter workload and a good reference point.
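The percentage is easy to verify from the two tpmC totals:

```python
# Verifying the quoted ratio from the two tpmC results.
dram_only = 217_757
hybrid    = 166_782

ratio = hybrid / dram_only
print(f"{ratio:.1%}")  # 76.6% - the hybrid DRAM-NVM setup vs. pure DRAM
```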

The pricing and performance info above is valid for early 2016, and based only on the Intel® SSD Data Center Family for PCIe. Upcoming Intel® Optane® SSDs, based on 3D XPoint™ technology, will likely enable Intel and ScaleMP to push performance even closer to DRAM.


If Intel and ScaleMP deliver on the promise of improved performance with Optane SSDs, they will arguably eliminate the border between main memory and Storage Class Memory (SCM).  It will allow SCM to be used for OS and application memory transparently, without any code changes.  While Intel Optane SSDs will reduce the latency to storage, ScaleMP software already makes it byte-addressable from the application perspective and uses smart caching technology to reduce the average latency to values very close to overall DRAM performance. The TCO story looks great even considering licensing for the vSMP software, which is not covered here at all; I’ll direct you to ScaleMP’s web site for the details.

If your application is limited by the amount of DRAM in a box, now we can easily say that the sky is the limit for that application!



Andrey Kudryavtsev, Intel Corp.

Benzi Galili

Read more >

In IoT we trust — or do we?


The expanding circle of distrust fueled by the Internet of Things (IoT)


I whip out my device to make a payment at the checkout counter of our local hardware store. And I pause. Should I let the point-of-sale system extract all my data? Can I trust this device? Do I have any control over who gets access to the data captured? We may get what we pay for, but adversaries can get what we pay with! We don’t usually encounter adversaries face-to-face; our primary interface remains these point-of-sale systems, web clicks and other devices that are part of the Internet of Things. Thanks to recent hacks of customer data at various retail chains, I am beginning to develop a sense of distrust in these very “things” and devices. Complementing the Circle of Trust I have developed over the years with close family and friends, IoT is injecting itself into a rapidly expanding Circle of Distrust.

IoT Trust.jpg


And I pay with cash instead — because You may get what you pay for but They can get what you pay with!


Daniel Miessler introduces the concepts of “personal daemons” and “universal daemonization” in this article on the Future of the Internet and the Internet of Things. The word “daemon” takes me back many years, to when I used to foray into the guts of the UNIX kernel, where daemons were programs that run continuously as background processes – e.g. a printing daemon. Interestingly enough, the word “daemon” semantically means a spirit, a supernatural being, a demi-god.


Miessler does refer to the IoT having its own daemons that perpetually interact with each other.


However, bugs like Shellshock can take control of the command shell, transforming computing “daemons” into mythological “demons”. Happy Halloween! The fact that such bugs can continue to penetrate the landscape of devices only accentuates my circle of distrust.


And then there are checkout devices that look and act like – of all creatures – a snake! The last thing that comes to mind when I think of a serpentine creature is trust!


I finish my purchases and come home. My neighbor’s dog (Poodle you say?) comes running up to me furiously wagging its fluffy tail. Now, here is a being that I trust — a lifelong member of my Circle of Trust. But, do I trust the device around its neck that sends out signals to control the dog’s movements? I am not so sure. Yet another member of the Circle of Distrust.


How about you? How do you contrast your level of trust in humans versus machines? What is growing faster – your Circle of Trust or your Circle of Distrust?


I trust you will let me know. Or can I?


Twitter: @NadhanEG

Read more >

Hyper-Convergence “For the Rest of Us”

Software-defined storage (SDS) and hyper-convergence provide a flexible and efficient alternative to traditional “one server, one application” Storage Area Network (SAN) and Network Attached Storage (NAS) configurations. Hyper-converged infrastructure is a growing trend. So what is hyper-convergence, and is it right for your business?


A scale-out, hyper-converged system allows compute, storage and network resources to be managed as a single integrated system through a common tool set. Virtual servers, virtual networks and virtual storage are converged within a single standard IA server, along with tools for management and security. The bottlenecks of a traditional system (where compute, storage and networking are all separate physical resources) essentially don’t exist with hyper-converged systems; everything is (virtually) “in the box”. Compute, storage and network capacity grow (“scale out”) by simply adding more hyper-converged systems as business needs demand. It’s a pay-as-you-grow approach to building capacity, and helps to better manage the cost of IT.


Hyper-converged scale-out systems differ from the older scale-up approach. In a scale-up system, compute capacity remains fixed as storage is added, while in a scale-out system, new compute nodes are added as the need for compute and storage grows. Scale-up systems have often been cost-prohibitive and often lack the random IO performance (IOPS) needed by virtualized workloads. The scale-out approach uses hardware resources more efficiently, as it moves the data closer to the processor. When scale-out is combined with solid-state drive (SSD) storage, it offers far lower latency, better throughput, and increased flexibility to grow with your business. Scale-out is commonly used for virtualized workloads, private cloud, databases, and many other business applications.
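The scale-up vs. scale-out distinction can be illustrated with a toy capacity model; the per-node figures below are made up for illustration, not a sizing guide:

```python
# Toy model: scale-up adds storage to a fixed head; scale-out adds whole nodes.
# Per-node figures are illustrative assumptions.
NODE = {"cores": 16, "tb": 8, "iops": 200_000}

def scale_out(nodes):
    """Every node added grows compute, storage, and IOPS together."""
    return {k: v * nodes for k, v in NODE.items()}

def scale_up(shelves):
    """Storage shelves grow capacity, but compute stays fixed at one head."""
    return {"cores": NODE["cores"], "tb": NODE["tb"] * shelves,
            "iops": NODE["iops"]}  # random IOPS bottlenecked at the single head

print(scale_out(4))  # 64 cores, 32 TB, 800k IOPS
print(scale_up(4))   # 16 cores, 32 TB, 200k IOPS: same capacity, same bottleneck
```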


So, with all of those benefits, why not spring for it and be done? Unfortunately, hyper-convergence has existed as a combination of disparate hardware and software, which comprise a “tool kit” to be assembled by technical staff. The skillset required is often beyond the capabilities of smaller businesses and the local IT and integration vendors who serve them. Many vendors offer complete, packaged “appliance” solutions … but the price tag is often high. So it’s been a bit of a “cool toys for big boys” story.


Now, StarWind Software has introduced a new hyper-converged appliance that brings the benefits of software defined scale-out storage and hyper-convergence to small business and remote-office/business-office installations. StarWind HCA, a turn-key hyper-converged appliance, delivers high performance compute and storage powered by Intel® SSD Data Center Family drives.


Intel® SSDs are designed to deliver fast, consistent performance for smooth data center operation. The architecture of Intel’s SSDs ensures that the entire read and write path, and the logical-block address (LBA) mapping, have data protection and error correction. Many enterprise workloads depend not only on reliable data, but on consistency in how quickly that data can be accessed.  Consistent latency, quality of service, and bandwidth, no matter what background activities are happening on the drive, are the basis of the Intel SSD Data Center Family.  Rigorous testing ensures a highly reliable SSD with consistent performance.


StarWind HCA brings enterprise-class features to your business: scale-out storage and hyper-convergence “for the rest of us”. This turn-key appliance – featuring Intel® SSD Data Center Family drives – eliminates scalability and performance bottlenecks and allows computing and storage capacity to grow with the business needs.

Read more >

Cybersecurity is Suffering Due to Human-Resource Challenges

The cybersecurity industry is in a state of disrepair. Human resource problems are growing, putting efforts to secure technology at risk due to insufficient staffing, skills, and diversity.


The need for talent is skyrocketing, but there aren’t enough qualified workers to meet current or future demand.  By 2017, hiring organizations may face upwards of 2 million unfilled security-related positions.  With supply low and demand high, prices rise quickly: security roles benefit on average from a $12,000 pay premium over other computer-related jobs, and job growth in the digital security field has been double that of other IT positions and twelve times the rate of the overall job market. As a consequence, hiring companies are becoming very creative in their attempts to attract talent.  Industry headhunting practices have grown more aggressive and prolific to meet the demand.  Companies must not only deal with the challenges of hiring; they must also maneuver carefully to retain the professionals they currently have.


Adding to the problem is a lack of diversity.  The industry needs greater inclusion of more diverse people who can infuse new ideas, innovation, and practices.  Without an expanding range of perspectives, the industry remains encumbered by traditional thinking, limited by the boundaries of homogeneous experiences, while the threats evolve and blossom in both size and depth of imagination.


Lastly, graduates lack consistency and applicability of skills.  Cybersecurity is a rapidly changing field, requiring students’ growth and knowledge to keep pace with relevant methods, technology, and practices.  The education system faces tremendous challenges in reliably preparing the next generation of cybersecurity professionals to protect the digital world we want to live in.


To correct the problem, the industry needs to attract a broader pool of students, including women and underrepresented minorities, to sufficiently meet demand and infuse varied perspectives into the workforce.  Academia must align education practices to deliver higher levels of consistency and timeliness of skills in high demand for a rapidly evolving employment landscape.  Only then will we achieve a sustainable position to create the future generations of cybersecurity professionals necessary to protect technology.



I recently spoke at the ICT Educator Conference and highlighted the workforce challenges, the need for more diversity, and how Intel is working to improve the academic pipeline.  One of the highlights I discussed was Intel’s $300 million investment in diversity.  It is a great example of how a corporation can make a difference in the hiring, progression, and retention of a diverse workforce, contribute to building a sustainable flow of talent, and directly support other organizations doing the same.  Finally, I discussed how academia is shifting to build a formal degree program for cyber-science related fields.  This will ease the frustrations of hiring organizations by improving the consistency of skills supported by applicants’ degrees.


There is much work to be done, but efforts to fix the workforce and talent issues are necessary for the benefit of everyone.  Teamwork between educators, government, and the business community is the only way we will overcome the human resource challenges impeding cybersecurity.



Interested in more?  Follow me on Twitter (@Matt_Rosenquist) and LinkedIn to hear insights and what is going on in cybersecurity.

Read more >

In a Twist, School Principals Teach the Value of Data Analytics


Elementary and secondary school principals must solve a challenging optimization problem. Faced with a deluge of applicants for teaching positions, demanding teaching environments, and very little time to spend on the applicant review process, school principals need a search algorithm with ranking analytics to help them find the right candidates. This is a classic data science problem.


Elsewhere, I have described the ideal data scientist, a balanced mix of math and statistics sophistication, programming chops, and business savvy: A rare combination, indeed. To solve the teacher applicant ranking problem, does every school in the country need to hire one of these “unicorn” data scientists to create a system to automatically identify the best teacher candidates?


I propose that the answer is “No!” and Big Data startup TeacherMatch agrees with me.  It is not a good use of resources for every school to hire a data scientist to help analyze teacher applications with advanced analytics, natural language processing and machine learning, yet the need to make teacher candidate selection more effective and efficient is huge.  The solution is to leverage the work of an expert who has already done that analysis.


TeacherMatch is such an expert. Based on a huge amount of historical data, TeacherMatch has developed a score for ranking teacher applicants, the EPI score, based upon a prediction of how likely a candidate is to be successful in the environment that the principal is looking to fill. Suddenly, it is nearly instantaneous to identify the top handful of candidates out of a list of potentially hundreds of applicants.
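The selection step itself is trivial once each applicant carries a score; here's a minimal sketch, with made-up names and EPI values (the real scoring model is TeacherMatch's own):

```python
import heapq

# Sketch of the selection step: given scored applicants, pulling the top
# handful is nearly instantaneous. The names and EPI values are made up.
applicants = [("Avery", 71.2), ("Blake", 88.5), ("Casey", 64.0),
              ("Drew", 92.1), ("Emery", 80.3)]

def top_candidates(scored, k=3):
    """Return the k highest-scoring applicants, best first."""
    return heapq.nlargest(k, scored, key=lambda a: a[1])

print(top_candidates(applicants))  # Drew, Blake, Emery
```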



I met Don Fraynd, CEO of TeacherMatch, last year when he joined a data science panel that I hosted. I was impressed with his deep understanding of the challenges of hiring good teachers and with his practical approach to analytics. He has created a big data analytics solution that any organization that needs to hire teachers can sensibly incorporate. See for yourself in this very interesting video about TeacherMatch.


Looking more broadly at the needs of all industry for more powerful analytics, the shortage of data scientists available to hire is a challenge. TeacherMatch’s model represents a real solution. In fact, I suspect that analytics-as-a-service will help drive a new era of advanced data analytics because it allows business users of analytics to leverage the output of a small number of data scientists who solve common problems across different organizations. In this regard, TeacherMatch represents the future of analytics.


Through the example of TeacherMatch, it appears that our principals and teachers are taking us to school on analytics.

Read more >

Cyber Threats are a Danger to Corporate Growth

Executives are beginning to understand that cyber-based threats are a potentially significant impediment to business success. The challenges extend far beyond the annoyance of rising security budgets.  A recent PwC survey showed 61% of CEOs believe cyber threats pose a danger to corporate growth.  Extrapolated across the business landscape, this will have sweeping ramifications for the future global economy; the overall impact is in the trillions of dollars, based upon World Economic Forum and Atlantic Council reports.


We have been talking about this for years, but it is time to better understand the long-term systemic outlook of cybersecurity instead of just looking at the firefight-of-the-day tactical symptoms.  As the C-suite becomes more security savvy, the number of executives expressing concern will climb.  The problem of escalating cybersecurity expenses will continue to worsen. 


Security costs include more than just products, services, and headcount.  Incident costs, loss of reputation and customer goodwill add to the problem.  Compliance, auditability, insurance, and litigation must also be accounted for. 


One area which may represent the biggest overall impact has remained largely absent from discussions: the cost of creating secure products.  For suppliers, manufacturers, service providers, and product owners, the world is changing, as products and services which connect to or control aspects of people’s lives must now be built and operated with security and safety in mind.  Expenditures include shifts to support secure architecture, design, testing, manufacturing, compliance, and operational sustainability across the product lifespan.  These additional charges raise the cost of doing business, delay product releases, inhibit innovation, and, most importantly, siphon assets from profit-oriented activities.


Say, for example, a company must increase production costs by 20% to meet security needs.  Where will that money come from?  Taking it from marketing will reduce sales.  Lowering the IT budget will put operations at risk and may shelve projects that support the organization’s profit centers.  Cutting product development can undermine competitiveness and lower customer satisfaction.  Reducing manufacturing budgets can delay delivery times and impact quality.  Few organizations have extra money sitting about.  Operating budgets are finite, and an additional set of expenditures that generates no new revenue puts a strain on plans for success.


The diversion of assets can limit the ability to seize market opportunities.  These “opportunity costs” can compound over time.  First to market is a coveted position; losing that race to a competitor because more security testing was required can be devastating, with long-term consequences.  How difficult would it be to regain an advantageous position?  The tradeoffs apply to human resources as well.  What if the plan to hire 20 new marketing and sales staff were cut in half because the organization needed to hire security engineers and software developers to validate products instead?  How many deals would not close, or new customers be serviced by competitors, because of the lack of field representatives?  That money could have been reinvested for future gains and expansion.  What about the IT project to improve customer service or support a new product launch, which has to be pushed out a year because budget is being reallocated to meet cybersecurity regulatory compliance or pay for breach insurance?  The result of all these actions is a siphoning of competitive momentum.  Opportunity costs are the sleeping giant that is rarely accounted for in the world of security.


For just about every company, cybersecurity issues can upset the balance and the strategic plans for business success.  Security costs can be unexpected and a severe impediment to growth, operations, customer satisfaction, and sales goals.  Executives are beginning to piece together the full cybersecurity picture.  Costs, risks of loss, impacts to products, timing into the market, missed opportunities, regulatory sanctions, customer satisfaction, and the goodwill of partners hang in the balance.  Cybersecurity is rapidly earning a place on the list of dangers to corporate growth.



Interested in more?  Follow me on Twitter (@Matt_Rosenquist) and LinkedIn to hear insights into what is going on in cybersecurity.


A 2016 Prediction: Prescriptive Analytics Will Take Flight


In May 2015, I wrote the first in a series of blog posts exploring the journey to software-defined infrastructure. Each blog in the series dives down into different stages of a maturity model that leads from where we are today in the typical enterprise data center to where we will be tomorrow and in the years beyond.  During that time, I also delved into the workloads that will run on the SDI platform.


As I noted in last month's post, traditional analytics leads first to reactive and then to predictive analytics, with prescriptive analytics as the ultimate destination.  This state is something of a nirvana for today's big data analytics, and it is the topic we will take up today.


Prescriptive analytics extends beyond the predictive stage by defining the actions necessary to achieve outcomes and the inter-relationship of the outcomes to the effects of each decision. It incorporates both structured and unstructured data and uses a combination of advanced analytics techniques and other scientific disciplines to help organizations predict, prescribe, and adapt to changes that occur.  Essentially, we’ve moved from, “why did this happen,” to, “what will happen,” and we’re now moving to, “how do we make this happen,” as an analytics methodology.


Prescriptive analytics allows an organization to extract even more value and insight from big data—way above what we are getting today. This highest-level of analytics brings together varied data sources in real time and makes adjustments to the data and decisions on behalf of an organization. Prescriptive analytics is inherently real-time—it is always triggering these adjustments based on new information.



Let’s take a few simple examples to make this story more tangible.


  • In the oil and gas industry, it can be used to enable natural gas price prediction and identify decision options—such as term locks and hedges against downside risk—based on an analysis of variables like supply, demand, weather, pipeline transmission, and gas production. It might also help decide when and where to harvest the energy, perhaps even spinning up and shutting down sources based on a variety of environmental and market conditions.


  • In healthcare, it can increase the effectiveness of clinical care for providers and enhance patient satisfaction based on various factors across stakeholders as a function of healthcare business process changes.  It could predict patient outcomes and help alleviate issues before they would normally even be recognized by medical professionals.


  • In the travel industry, it can be used to sort through factors like demand curves and purchase timings to set seat prices that will optimize profits without deterring sales.  Weather and market conditions could better shape pricing and fill unused seats and rooms while relieving pressure in peak seasons.


  • In the shipping industry, it can be used to analyze data streams from diverse sources to enable better routing decisions without the involvement of people. In practice, this could be as simple as a system that automatically reroutes packages from air to ground shipment when weather data indicates that severe storms are likely to close airports on the usual air route.
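The shipping example above can be sketched as a simple prescriptive rule; the function name, inputs, and thresholds are assumptions for illustration, not a real routing system:

```python
# Minimal sketch of a prescriptive rule from the shipping example:
# when forecast data suggests airport closures, prescribe a ground
# route instead of the default air route. Thresholds are invented.

def prescribe_route(storm_probability, hours_to_deadline):
    """Return the shipping mode to use, given a storm forecast and
    the remaining delivery window (both hypothetical inputs)."""
    if storm_probability > 0.7:
        # Air route likely disrupted: switch to ground only if the
        # delivery window allows the slower transit.
        return "ground" if hours_to_deadline >= 48 else "air"
    return "air"

print(prescribe_route(storm_probability=0.9, hours_to_deadline=72))  # ground
print(prescribe_route(storm_probability=0.2, hours_to_deadline=24))  # air
```

A production system would weigh many more variables, but the shape is the same: the analytics output is a decision, not just a forecast.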


I could go on and on with the examples, because every industry can capitalize on prescriptive analytics. The big takeaway here is that prescriptive analytics has the potential to turn enormous amounts of data into enormous business value—and do it all in real time.


With the impending rise of prescriptive analytics, we are entering the era in which machine learning, coupled with automation and advanced analytics, will allow computers to capture new insights from massive amounts of data in diverse datasets and use that data to make informed decisions on our behalf on an ongoing basis.


At Intel, we are quite excited about the potential of prescriptive analytics. That’s one of the reasons why we are a big backer of the open source Trusted Analytics Platform (TAP) initiative, which is designed to accelerate the creation of cloud-native applications driven by big data analytics. TAP is an extensible open source platform designed to allow data scientists and application developers to deploy solutions without having to worry about infrastructure procurement or platform setup, making analytics and the adoption of machine learning easier. To learn about TAP, visit


Blueprint: Tips for Avoiding a Data Center Blizzard

This article originally appeared on Converge Digest



We’re in the depths of winter and, yes, the snow can be delightful… until you have to move your car or walk a half block on icy streets. Inside the datacenter, the IT Wonderland might lack snowflakes, but everyday activities are even more challenging, year-round. Instead of snowdrifts and ice, tech teams are faced with mountains of data.



So what are the datacenter equivalents of snowplows, shovels, and hard physical labor? The right management tools and strategies are essential for clearing data paths and allowing information to move freely and without disruption.


This winter, Intel gives a shout-out to the unsung datacenter heroes, and offers some advice about how to effectively avoid being buried under an avalanche of data. The latest tools and datacenter management methodologies can help technology teams overcome the hazardous conditions that might otherwise freeze up business processes.


Tip #1: Take Inventory


Just as the winter holiday season puts a strain on family budgets, the current economic conditions continue to put budget pressures on the datacenter. Expectations, however, remain high. Management expects to see costs go down while users want service improvements. IT and datacenter managers are being asked to do more with less.


The budget pressures make it important to fully assess and utilize the in-place datacenter management resources. IT can start with the foundational server and PDU hardware in the datacenter. Modern equipment vendors build in features that facilitate very cost-effective monitoring and management. For example, servers can be polled to gather real-time temperature and power consumption readings.


Middleware solutions are available to take care of collecting, aggregating, displaying, and logging this information, and when combined with a management dashboard can give datacenter managers insights into the energy and temperature patterns under various workloads.


Since the energy and temperature data is already available at the hardware level, introducing the right tools to leverage that information is a practical step. It can pay for itself in the form of energy savings and the ability to spot problems, such as temperature spikes, so that proactive steps can be taken before equipment is damaged or services are interrupted.
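As a rough illustration of the spike-spotting step, here is a minimal sketch that scans polled readings for servers running hot; the sample data and the 45°C limit are invented, and a real deployment would pull readings from the servers' management interfaces:

```python
# Sketch of spotting temperature spikes in polled server readings.
# Each server maps to its recent temperature samples in Celsius.

def find_spikes(readings, limit_c=45.0):
    """Return server names whose latest reading exceeds the limit."""
    return [name for name, temps in readings.items()
            if temps and temps[-1] > limit_c]

polled = {
    "rack1-srv01": [38.5, 39.0, 39.2],
    "rack1-srv02": [40.1, 44.8, 47.3],   # trending toward trouble
    "rack2-srv01": [36.0, 36.2, 36.1],
}

print(find_spikes(polled))  # ['rack1-srv02']
```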


Tip #2: Replace Worn-Out Equipment


While a snow shovel can last for years, datacenter resources are continually being enhanced, changed, and updated. IT needs tools that allow teams to keep up with requests and to deploy and configure software efficiently, at a rapid pace.


Virtualization and cloud architectures, which evolved in response to the highly dynamic nature of the datacenter, have recently been applied to some of the most vital datacenter management tools. Traditional hardware keyboard, video, and mouse (KVM) solutions for remotely troubleshooting and supporting desktop systems are being replaced with all-software and virtualized KVM platforms. This means that datacenter managers can quickly resolve update issues and easily monitor software status across a large, dynamic infrastructure without having to continually manage and update KVM hardware.


Tip #3: Plan Ahead


It might not snow every day, even in Alaska or Antarctica. In the datacenter, however, data grows every day. A study by IDC, in fact, found that data is expected to double in size every two years, culminating in 44 zettabytes by 2020. An effective datacenter plan depends on accurate projections of data growth and of the server expansion required to support that growth.
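A doubling every two years is simple exponential growth, which makes the capacity math easy to sketch; the starting volume below is a hypothetical back-projection from IDC's 44 ZB figure, not a quoted statistic:

```python
# Capacity-planning sketch: project data volume under the
# "doubles every two years" assumption cited above.

def projected_size(current_zb, years, doubling_period=2):
    """Project data volume (in zettabytes) assuming a doubling
    every `doubling_period` years."""
    return current_zb * 2 ** (years / doubling_period)

# Working back from 44 ZB in 2020 with a two-year doubling,
# 2014 would hold roughly 44 / 2**3 = 5.5 ZB.
print(projected_size(5.5, years=6))  # 44.0
```

The same function, pointed at a single datacenter's measured growth rate rather than the global figure, gives a first-order estimate of required server expansion.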


The same tools that were previously mentioned for monitoring and analyzing energy and temperature patterns in the datacenter can help IT and datacenter architects better understand workload trends. Besides providing insights about growth trends, the tools promote a holistic approach for lowering the overall power budget for the datacenter and enable datacenter teams to operate within defined energy budget limits. Since many large datacenters already operate near the limits of the local utility companies, energy management has become mission critical for any fast-growing datacenter.


Tip #4: Stay Cool


Holiday shopping can be a budget buster, and the credit card bills can be quite a shock in January. In the datacenter, rising energy costs and green initiatives similarly strain energy budgets. Seasonal demands, which peak in both summer and the depths of winter, can mean more short-term outages and big storms that can force operations over to a disaster recovery site.


With the right energy management tools, datacenter and facilities teams can come together to maximize the overall energy efficiency of the datacenter and its environmental conditioning solutions (humidity control, cooling, etc.). For example, holistic energy management solutions can identify ghost servers, those systems that are idle and yet still consuming power. Hot spots can be located and workloads shifted such that less cooling is required and equipment life is extended. The average datacenter experiences between 15 and 20 percent savings on overall energy costs with the introduction of an energy management solution.
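The ghost-server check can be sketched in a few lines; the fleet data and the idle-utilization threshold are invented for the example:

```python
# Sketch of the "ghost server" check described above: flag systems
# drawing power while showing negligible utilization.

def find_ghost_servers(servers, max_idle_util=0.05):
    """Return names of servers that draw power but sit essentially idle."""
    return [name for name, s in servers.items()
            if s["power_watts"] > 0 and s["cpu_util"] < max_idle_util]

fleet = {
    "web-01":  {"power_watts": 210, "cpu_util": 0.62},
    "old-app": {"power_watts": 180, "cpu_util": 0.01},  # candidate ghost
    "db-02":   {"power_watts": 350, "cpu_util": 0.40},
}

print(find_ghost_servers(fleet))  # ['old-app']
```

In practice the utilization window would span days or weeks, not a single sample, before a server is decommissioned or consolidated.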


Tip #5: Read the Signs of the Times


During a blizzard, the local authorities direct the snowplows, police, and rescue teams to keep everyone safe. Signs and flashing lights remind everyone of the rules. In the datacenter, the walls may not be plastered with the rules, but government regulations and compliance guidelines are woven into the vital day-to-day business processes.


Based on historical trends, regulations will continue to increase, and datacenter managers should not expect any decrease in required compliance-related efforts. Public awareness of energy resources and the environmental impact of energy exploration and production also encourages regulators.


Fortunately, the energy management tools and approaches that help improve efficiencies and lower costs also enable overall visibility and historical logging that supports audits and other compliance-related activities.

When “politically correct” behavior and cost savings go hand in hand, momentum builds quickly. This effect is both driving demand for and promoting great advances in energy management technology, which bodes well for datacenter managers since positive results always depend on having the right tools. And when it comes to IT Wonderlands, energy management can be the equivalent of the whole toolshed.


Embracing A More Connected Vision Of Healthcare

One of the most promising areas of innovation and transformation in healthcare today is the move to distributed care, achieved through the creation of patient-centered networks of intelligent, connected devices that span across the home, workplace, community and the mobile spaces in between. Data capture and analysis, and communication between the patient and their care team can all be enhanced and harnessed to deliver more effective healthcare to more people at lower cost.


Connected Care, Everywhere

In the home, this will be driven by new types of consumer medical devices and smart-home connectivity and features. In the workplace and the community, new mobile devices and services including kiosks will be available. And for persistent real-time data and connectivity, new purpose-built and general purpose devices will fill in critical gaps.


Community Care Impact

In the home, sensors are transforming the way we care for the elderly, helping them stay more independent and spend longer at home, thus improving general well-being and reducing costs to the provider. Mimocare’s sensor solution is a great example of just how the Internet of Things can help us move the focus towards prevention rather than cure.


For community nurses this kind of distributed care is a win-win: they're alerted remotely to patients showing abnormal signs earlier, enabling speedier intervention, and appropriate care is delivered more quickly, while the need for unnecessary monitoring visits is reduced.


Patient-Centered Connectivity

I’d highly recommend reading a recent blog on the use of the Intel® RealSense™ 3D Camera by GPC, which can help clinicians in a hospital setting make better-informed decisions in the area of wound care management. It’s an exciting development, as wound care management accounts for high spend by most care providers; in the UK, for example, the NHS spends some £3 billion per year in this area.


RealSense™ is available across a range of mobile devices today so I see a future where patients are able to play a greater role in their wound care management in the home setting by recording the healing progress of wounds using the 3D camera and sharing the results with clinicians. This is undoubtedly more convenient for patients and more efficient for clinicians and providers.


Balancing the Demands of Modern Healthcare

These patient-centered networks of intelligent, connected devices generate significant volumes of data, which healthcare providers can analyze to help balance the demands of an ageing population against increased pressure on costs.  Underlying this shift to distributed care is patients' preference to stay, and be clinically managed, at home.  The tools are available today, so let's embrace a more connected vision of healthcare in which we deliver even better care to patients.



How End-To-End Network Transformation Fuels the Digital Service Economy

To see the challenge facing the network infrastructure industry, I have to look no farther than the Apple Watch I wear on my wrist.


That new device is a symbol of the change that is challenging the telecommunications industry. This wearable technology is an example of the leading edge of the next phase of the digital service economy, where information technology becomes the basis of innovation, services and new business models.


I had the opportunity to share a view on the end-to-end network transformation needed to support the digital service economy recently with an audience of communications and cloud service providers during my keynote speech at the Big Telecom Event.


These service providers are seeking to transform their network infrastructure to meet customer demand for information that can help grow their businesses, enhance productivity and enrich their day-to-day lives.  Compelling new services are being innovated at cloud pace, and the underlying network infrastructure must be agile, scalable, and dynamic to support these new services.


The operator’s challenge is that the current network architecture is anchored in purpose-built, fixed-function equipment that cannot be used for anything other than the function for which it was originally designed.  The dynamic nature of the telecommunications industry means that the infrastructure must be more responsive to changing market needs. The challenge of continuing to build out network capacity to meet customer requirements, in a way that is more flexible and cost-effective, is what is driving the commitment by service providers and the industry to transform these networks to a different architectural paradigm, one anchored in innovation from the data center industry.


Network operators have worked with Intel to find ways to leverage server, cloud, and virtualization technologies to build networks that cost less to deploy, giving consumers and business users a great experience, while easing and lowering their cost of deployment and operation.


Transformation starts with reimagining the network


This transformation starts with reimagining what the network can do and how it can be redesigned for new devices and applications, even including those that have not yet been invented. Intel is working with the industry to reimagine the network using Network Functions Virtualization (NFV) and Software Defined Networking (SDN).


For example, the evolution of the wireless access network from macro base stations to a heterogeneous network, or “HetNet,” using a mix of macro-cell and small-cell base stations, together with the addition of mobile edge computing (MEC), will dramatically improve network efficiency by providing more efficient use of spectrum and new radio-aware service capabilities.  This transformation will intelligently couple mobile devices to the access network for greater innovation and an improved ability to scale capacity and improve coverage.


In wireline access, virtual customer premises equipment moves service provisioning intelligence from the home or business to the provider edge to accelerate delivery of new services and to optimize operating expenses. And NFV and SDN are also being deployed in the wireless core and in cloud and enterprise data center networks.


This network transformation also makes possible new Internet of Things (IoT) services and revenue streams. As virtualized compute capabilities are added to every network node, operators have the opportunity to add sensing points throughout the network and tiered analytics to dynamically meet the needs of any IoT application.


One example of IoT innovation is safety cameras in “smart city” applications. With IoT, cities can deploy surveillance video cameras to collect video and process it at the edge to detect patterns that would indicate a security issue. When an issue occurs, the edge node can signal the camera to switch to high-resolution mode, flag an alert and divert the video stream to a central command center in the cloud. With smart cities, safety personnel efficiency and citizen safety are improved, all enabled by an efficient underlying network infrastructure.
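A minimal sketch of that edge-node logic might look like the following; the event and action names are assumptions for illustration, not an actual smart-city API:

```python
# Sketch of the edge-node logic in the smart-city example: on a
# detected security pattern, switch the camera to high resolution,
# raise an alert, and divert the stream to the command center.

def handle_frame_event(event, camera_state):
    """React to an analysis event from the edge video pipeline,
    returning the list of actions taken."""
    actions = []
    if event == "security_pattern_detected":
        if camera_state["mode"] != "high_res":
            camera_state["mode"] = "high_res"
            actions.append("switch_to_high_res")
        actions.append("alert_command_center")
        actions.append("divert_stream_to_cloud")
    return actions

state = {"mode": "low_res"}
print(handle_frame_event("security_pattern_detected", state))
print(state["mode"])  # high_res
```

The key point is where this runs: pattern detection and the mode switch happen at the edge, and only the flagged high-resolution stream travels to the cloud.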


NFV and SDN deployment has begun in earnest, but broad-scale deployment will require even more innovation: standardized, commercial-grade solutions must be available; next-generation networks must be architected; and business processes must be transformed to consume this new paradigm. Intel is investing now to lead this transformation and is driving a four-pronged strategy anchored in technology leadership: support of industry consortia, delivery of open reference designs, collaboration on trials and deployments, and building an industry ecosystem.


The foundation of this strategy is Intel’s role as a technology innovator. Intel’s continued investment and development in manufacturing leadership, processor architecture, Ethernet controllers and switches, and optimized open source software provide a foundation for our network transformation strategy.


Open standards are critical to robust solutions, and Intel is engaged with all of the key consortia in this industry, including the European Telecommunications Standards Institute (ETSI), Open vSwitch, Open Daylight, OpenStack, and others. Most recently, we dedicated significant engineering and lab investments to the Open Platform for NFV’s (OPNFV) release of OPNFV Arno, the first carrier-grade, open source NFV platform.


The next step for these open source solutions is to be integrated with operating systems and other software into open reference software to provide an on-ramp for developers into NFV and SDN. That’s what Intel is doing with our Open Network Platform (ONP): a reference architecture that enables software developers to lower their development costs and shorten their time to market.  The innovations in ONP form the basis of many of our contributions back to the open source community. In the future, ONP will be based on OPNFV releases, enhanced by additional optimizations and proofs-of-concept in which we continue to invest.


We also are working to bring real-world solutions to market and are active in collaborating on trials and deployments and deeply investing in building an ecosystem that brings companies together to create interoperable solutions.


As just one example, my team is working with Cisco Systems on a service chaining proof of concept that demonstrates how Intel Ethernet 40GbE and 100GbE controllers, working with a Cisco UCS network, can provide service chaining using network service header (NSH).  This is one of dozens of PoCs that Intel has participated in this year alone, which collectively demonstrate the early momentum of NFV and SDN and their potential to transform service delivery.


A lot of our involvement in PoCs and trials comes from working with our ecosystem partners in the Intel Network Builders. I was very pleased to have had the opportunity to share the stage with Martin Bäckström and announce that Ericsson has joined Network Builders. Ericsson is an industry leader and innovator, and their presence in Network Builders demonstrates a commitment to a shared vision of end-to-end network transformation.


The companies in this ecosystem are passionate software and hardware vendors, and also end users, that work together to develop new solutions. There are more than 150 Network Builder members taking advantage of this program and driving forward with a shared vision to accelerate the availability of commercial grade solutions.


NFV and SDN are deploying now – but that is just the start of the end-to-end network transformation. There is still a great deal of technology and business innovation required to drive NFV and SDN to scale, and Intel will continue its commitment to drive this transformation.

I invited the BTE audience – and I invite you – to join us in this collaboration to create tomorrow’s user experiences and to lay the foundation for the next phase of the digital services economy.


Save Lives, Prevent Equipment Failures, and Gain Insights at the Edge

Internet of Things (IoT) technologies from Intel and SAP enable innovative solutions far from the data center


Can a supervisor on an oil rig know immediately when critical equipment fails? Can a retail store manager provide customers with an up-to-date, customized experience without waiting for back-end analysis from the parent company’s data center? A few years ago, the answer would have been a clear “no.” But today, real-time, actionable data at the edge is a reality.


Innovative technologies from Intel and SAP can enable automated responses and provide critical insights at remote locations. The unique joint solutions enable companies to dramatically improve worker safety, equipment reliability, and customer engagement, all without an infrastructure overhaul. For example, technicians on a remote, deep-sea oil rig can be equipped with sensors that detect each technician’s location, heart rate, and exposure to harmful gasses. Additionally, sensors powered by Intel Quark SoCs can be placed on equipment throughout the oil rig to monitor for leaks or fires. The collective data from these sensors is fed to an Intel IoT Gateway and processed to provide data visualization and a browser interface that is easily accessible from any device.


From any location on the rig with Wi-Fi access, supervisors can monitor worker health and safety data from an app running on a tablet device. In addition, automated alerts and alarms can signal when an employee is in danger or a critical malfunction has occurred. All of this processing can happen in real time, on-site, without depending on a reliable wide-area network (WAN) connection to a back-end server that might be hundreds or thousands of miles away.
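A gateway-side safety check along these lines can be sketched as follows; the thresholds and packet fields are hypothetical illustrations, not the actual Intel/SAP solution:

```python
# Sketch of the gateway-side safety check described above: evaluate
# each technician's sensor readings against alarm thresholds locally,
# before any WAN round trip. All limits and field names are invented.

GAS_PPM_LIMIT = 50       # hypothetical harmful-gas exposure limit
HEART_RATE_LIMIT = 150   # hypothetical sustained heart-rate limit

def check_worker(reading):
    """Return a list of alarms for one technician's sensor packet."""
    alarms = []
    if reading["gas_ppm"] > GAS_PPM_LIMIT:
        alarms.append("gas_exposure")
    if reading["heart_rate"] > HEART_RATE_LIMIT:
        alarms.append("heart_rate")
    return alarms

packet = {"worker": "T-117", "gas_ppm": 62, "heart_rate": 128}
print(check_worker(packet))  # ['gas_exposure']
```

Because the check runs on the gateway, the alarm fires even when the rig's WAN link is down; the same readings can be synchronized to the back end later for long-range analysis.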


When the WAN connection is available, the SAP Remote Data Sync service synchronizes data with SAP HANA running in the cloud or in the data center. This synchronization provides cloud-based reporting and back-end SAP integration for long-range analysis.




With IoT sensors, Intel IoT Gateway, and SAP software, businesses can improve safety and gain real-time insights right at the edge.


To learn more about the joint Intel and SAP solution at the edge, read the solution brief Business Intelligence at the Edge.
