Recent Blog Posts

In Their Own Words: Intel Intern Swapna Manohar Shares Her Story

Swapna Manohar is a Graduate Technical Intern with Intel’s Platform Engineering Group (PEG). She is currently pursuing her Master’s degree in VLSI Design and Embedded Systems from PES Institute of Technology, Bangalore. “If we spend enough time dreaming, then the dream might eventually become … Read more >

The post In Their Own Words: Intel Intern Swapna Manohar Shares Her Story appeared first on Jobs@Intel Blog.

Read more >

Intel and Chromat Reveal Technology’s Possibilities for Fashion on the Runway

Tonight at architectural sportswear designer Chromat’s Spring/Summer 2016 runway show at MADE Fashion Week, Intel and Chromat showed two responsive garments that transform shape based on the wearer’s body temperature, adrenaline or stress levels. The intelligence behind the movement of … Read more >

The post Intel and Chromat Reveal Technology’s Possibilities for Fashion on the Runway appeared first on Technology@Intel.


Power Raptor, Painted, and caterPILLar: IoT Concepts from the Summer Innovation Program

Our team at Intel gained a great insight this summer: If you want to introduce some bold new ideas into your organization, bring in some high schoolers and arm them with design thinking. In a previous blog post I wrote … Read more >

The post Power Raptor, Painted, and caterPILLar: IoT Concepts from the Summer Innovation Program appeared first on Intel Software and Services.


Genomic Sequencing is Coming to a Clinical Workflow Near You. Are You Ready?

Genome sequencing has moved from bench research into clinical care—and it’s producing medical miracles. Now, it’s time to make genome sequencing an everyday part of our clinical workflows. That day is closer than you might think.


Those were the messages James Lowey shared at HIMSS 2015 in Chicago. As VP of technology at TGen—the nonprofit Translational Genomics Research Institute—James is at the forefront of efforts to bring next-generation genomic sequencing into the clinical mainstream and use it to transform the way we diagnose, treat, and prevent illness.


At the HIMSS session, James described the broad range of areas of clinical interest for genomic data. He also discussed the compute infrastructure necessary to provide cost-effective performance and scalability for large-scale production processing of genomic data.


Recently, our healthcare team interviewed James to learn more about TGen’s strategy. In this case study, James tells us where he thinks we’re heading—and how fast we’re getting there. He also highlights social and policy issues that must be addressed, and points out the need for ongoing research to establish evidence-based clinical protocols.




I hope you’ll read the TGen case study and join the conversation. Is your organization incorporating genomic analysis into clinical workflows? If so, can you share any advice or best practices? If not, how close are you? What are your next steps? What’s holding you back? Let me know in the comments section. Or become a member of the Intel Health and Life Sciences Community.


Learn more about Intel® Health & Life Sciences.


Download the session handout from HIMSS 2015, Using Genomic Data to Make a Difference in Clinical Care.


Stay in touch: @IntelHealth, @hankinjoan


Infusing Fashion with Smart Technology at New York Fashion Week

As technology becomes more closely intertwined with our day-to-day lives, apparel, accessories and even our shopping experience, it’s exciting to watch the next iteration of the marriage between fashion and technology come to life at this year’s New York Fashion … Read more >

The post Infusing Fashion with Smart Technology at New York Fashion Week appeared first on Technology@Intel.


New Intel Visual Compute Accelerator Makes Its Debut

For service providers, the rapid momentum of video streaming is both a plus and a minus.  On the plus side, millions of consumers are now looking to service providers to deliver content they used to access through other channels. That’s all good news for the business model and the bottom line.


On the minus side, service providers now have to meet the growing demands of bandwidth-hungry video streams, including new 4K media streaming formats. As I mentioned in a recent blog post, Video Processing Doesn’t Have To Kill the Data Center, today’s 4K streams come with a mind-boggling 8 million pixels per frame. And if you think today’s video workloads are bad, just stay tuned for more. Within five years, video will consume 80 percent of the world’s Internet bandwidth.
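That 8-million-pixel figure checks out with simple arithmetic. A quick sketch (the 3840×2160 resolution, 8-bit 4:2:0 sampling, and 30 fps figures below are illustrative assumptions, not numbers from this post):

```python
# 4K UHD frame dimensions (3840 x 2160 is a common "4K" standard)
width, height = 3840, 2160
pixels_per_frame = width * height
print(pixels_per_frame)            # 8294400 -- roughly 8 million pixels per frame

# Raw (uncompressed) bandwidth at 8-bit 4:2:0 chroma sampling
# (12 bits per pixel) and 30 frames per second:
raw_bits_per_second = pixels_per_frame * 12 * 30
print(raw_bits_per_second / 1e9)   # ~2.99 Gbit/s before compression
```

The gap between that raw rate and what networks can deliver is exactly why efficient transcoding (HEVC/AVC) matters to service providers.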


While meeting today’s growing bandwidth demands, service providers simultaneously have to deal with an ever-larger range of end-user devices with wide variances in their bit rates and bandwidth requirements. When customers order up videos, service providers have to be poised to deliver the goods in many different ways, which forces them to store multiple copies of content—driving up storage costs.


At Intel, we are working to help service providers solve the challenges of the minus side of this equation so they can gain greater benefits from the plus side. To that end, we are rolling out a new processing solution that promises to accelerate video transcoding workloads while helping service providers contain their total cost of ownership.


This solution, announced today at the IBC 2015 conference in Amsterdam, is called the Intel® Visual Compute Accelerator. It’s an Intel® Xeon® processor E3 based media processing PCI Express* (PCIe*) add-in card that brings media and graphics capabilities into Intel® Xeon® processor E5 based servers. We’re talking about 4K Ultra High Definition (UHD) media processing capabilities.


A few specifics: The card contains three Intel Xeon processor E3 v4 CPUs, each of which includes the Intel® Iris™ Pro graphics P6300 GPU. Placing these CPUs on a Gen3 x16 PCIe card provides high throughput and low latency when moving data to and from the card.




The Intel Visual Compute Accelerator is designed for cloud and communications service providers implementing High Efficiency Video Coding (HEVC), the codec expected to be required for 4K/UHD video, as well as Advanced Video Coding (AVC) media processing solutions, whether in the cloud or in their networks.


We expect that the Intel Visual Compute Accelerator will provide customers with excellent TCO when measured as cost per watt per transcode. Having both a CPU and a GPU on the same chip (as compared to just a GPU) enables ISVs to build solutions that improve software quality while accelerating high-end media transcoding workloads.


If you happen to be at IBC 2015 this week, you can get a firsthand look at the power of the Intel Visual Compute Accelerator in the Intel booth – hall 4, stand B72. We are showing a media processing software solution from Vantrix*, one of our ISV partners, that is running inside a dual-socket Intel Xeon processor E5 based Intel® Server System with the Intel Visual Compute Accelerator card installed. The demonstration shows the Intel Visual Compute Accelerator transcoding using both the HEVC and AVC codecs at different bit rates intended for different devices and networks.


Vantrix is just one of several Intel partners who are building solutions around the Intel Visual Compute Accelerator. Other ISVs who have their solutions running on the Intel Visual Compute Accelerator include ATEME, Ittiam, Vanguard Video* and Haivision*—and you can expect more names to be added to this list soon.


Our hardware partners are also jumping on board. Dell, Supermicro, and Advantech* are among the OEMs that plan to integrate the Intel Visual Compute Accelerator into their server product lines.


The ecosystem support for the Intel VCA signals that industry demand for solutions to address media workloads is high. Intel is working to meet those needs with the Intel Xeon processor E3 v4 with integrated Intel Iris Pro graphics. Partners including HP, Supermicro, Kontron, and Quanta have all released Xeon processor E3 solutions for dense environments, while Artseyn* also has a PCI Express based accelerator add-in card similar to the Intel VCA. These Xeon processor E3 solutions all offer improved TCO and competitive performance across a variety of workloads.


To see the Intel Visual Compute Accelerator demo at IBC 2015, stop into the Intel booth, No. 4B72. Or to learn more about the card right now, visit


Government’s Shifting Role to Protect Citizens in the Digital World

Governments are having to catch up with the digital revolution to fulfill their role in providing for the common defense. The world is changing. Longstanding definitions of responsibilities, rules, and jurisdictions have not kept pace with the implementation of technology. One of the traditional roles of government is to defend its citizens and their property. Constitutions, laws, and courts define these roles and place boundaries limiting them. With the rapid onset of digital technology, people are communicating more and in new ways, creating massive amounts of information that is being collected and aggregated. Digital assets and data are themselves becoming valuable. Traditional policies and controls are not suited or sufficient to protect citizens’ information. Governments are reacting to address the gaps. This adaptation is pushing the boundaries of scope and, in some cases, redefining limitations and precedents derived from an analog era. Flexing to encompass the digital domain within the scope of protection is necessary to align with the expectations of the people.


Such change, however, is slow. One of the loudest criticisms concerns the speed with which governments can adapt to sufficiently protect their citizens. Realistically, it must be slow, as boundaries are tested and redrawn. Under representative rule, there exists a balance between the rights of the citizen and the powers of the government. Moving too quickly can violate this balance to the detriment of liberty and result in unpleasant outcomes. Move too slowly and masses become victimized, building outcry and dissatisfaction with the state of security. Bureaucracy is the gatekeeper that keeps the pendulum from swinging too fast.


The only thing that saves us from the bureaucracy is its inefficiency – Eugene McCarthy       


The writing is on the wall. Citizens expect government to play a more active role in protecting their digital assets and privacy. Governments are responding. Change is coming across the industry, and it will be fueled by litigation and eventually regulatory penalties. Every company, regardless of type, will need to pay much more attention to its cybersecurity.


There are regulatory standards and oversight roles being defined as part of the legal structure. Government agencies are claiming and asserting more powers to establish and enforce cybersecurity standards. Recently, the U.S. Court of Appeals for the Third Circuit upheld the U.S. Federal Trade Commission’s action against companies that had data breaches and reaffirmed the FTC’s authority to hold companies accountable for failing to safeguard consumer data. The judicial branch interpreted the law in a way that supports the FTC’s assertion of its role in the digital age.


Litigation precedents, which act as guiding frameworks, are also being challenged and adapted to influence responsibility and accountability for customer data. The long-term ramifications of potential misuse of digital assets and personal data are being considered and weighed toward the benefit of consumers. In a recent case, defendants argued to dismiss a class action but were unsuccessful, as the court cited a failure in the “duty to maintain adequate security” that justified allowing the action to continue. The defendant argued that the plaintiffs suffered no actual injury, but the court rejected those arguments, stating the loss of sensitive personal data was “…sufficient to establish a credible threat of real and immediate harm, or certainly impending injury.”


In a separate case, the Seventh Circuit and the Ninth Circuit concluded that victims have a legal right to file a lawsuit over the long-term consequences of a data breach.  In addition to reimbursement for fraudulent charges, the court said even those in the class-action lawsuit who did not experience near-term damages have a likelihood of fraud in the future.  The court stated “customers should not have to wait until hackers commit identity theft or credit-card fraud in order to give the class standing.”  Experts believe this shift in litigation precedent is likely to lead to an increase in data breach class actions in cases involving hacking.


This is the macro trend I see. Governments are stepping up to fill the void where protective oversight does not exist or where citizens are not empowered to hold accountable those who have been negligent in protecting their data. The digital realm has grown so rapidly, and encompasses citizens’ lives so deeply, that governments are accepting they need to adapt legal structures to protect their populace, but they are struggling with how to make it a reality. We will see more of this redefinition across governmental structures worldwide over the next several years as a legal path is forged and tempered.

Twitter: @Matt_Rosenquist

Intel Network: My Previous Posts



Top 10 Signs Your Users Are Mobile Ready


Whether you’re planning a project for a mobile business app or developing a mobile business intelligence (BI) strategy, it’s critical to gauge your users’ overall mobile readiness. Even though sales of mobile devices continue to increase, some mobile users show chronic use of PC-era habits.


Yes, the mobile-savvy millennial generation is taking the workforce by storm, but millennials don’t necessarily represent the largest portion of business users. Mobile-ready users, on the other hand, will display at least some of the following characteristics.


DISCLAIMER: All characters appearing in this blog post are fictitious. Any resemblance to real persons, living or dead, is purely coincidental.


10. They Own Smartphones and More

The limited screen real estate of the smartphone makes the tablet a better candidate for many business applications, especially in mobile BI. Mobile-ready users may therefore also own a tablet provided by their employer, along with accessories that improve usability, such as keyboards for data entry.


9. They Remember the Password to Unlock Their Screen or App


As funny as this may sound, it usually is a good test of whether the device is being used frequently. Many businesses use device management systems to prevent unauthorized access to enterprise apps and/or corporate data on mobile devices. Therefore, the password to unlock the screen won’t be the only password they will need to remember. Mobile-ready users employ methods to remember different passwords similar to those they use on their PCs.


8. They Use Their Devices as More than a Paperweight

Clearly the decision to purchase tablets or smartphones is a considerable investment for any business. Though mobile devices may be fun to watch movies on, using these devices to their maximum capacity results not only in higher returns on investment (ROIs), but also new opportunities for growth and profitability.


7. They Have Apps Other than Angry Birds Installed


Apps are a good indicator of basic usage. Whether the device is provided by the business or it’s part of a bring-your-own-device (BYOD) arrangement, there’s nothing wrong with having more personal apps installed than business apps. However, it’s important that the required business apps for the user’s role are installed, working correctly, and being used. Delivering these devices to users pre-configured or completing setup remotely will help considerably.


6. They Own Multiple Chargers (and Go Nowhere Without One)


Although mobile device batteries have improved significantly over the years, the more the device is used, the more quickly the battery will need a charge – especially for battery draining business apps (watching movies doesn’t count). A mobile-ready user who heavily depends on his/her device will typically have several chargers and have them placed in strategic locations such as the car, briefcase, or the office. If they stick to a single charger, as some do, they won’t travel anywhere without it.


5. They Meticulously Keep Their Apps Up-To-Date


This is yet another indicator of usage. Business people are very busy – especially road warriors – and may not have a chance to constantly keep an eye on updates, opting instead for the “Update All” option. However, if the device is not being used frequently, this is one of many neglected areas. As a result, an app may stop working because it’s an outdated version. The idea is not that users should update every hour (or many times a day), but that they do so at least once a week.


4. They Know How to Back Up Their Device

Although some businesses make the option of backing up corporate apps and data easier, many mobile users may be left on their own to deal with this task. It gets even more complicated in scenarios where the employee uses the device both for personal and business reasons. But the savvy user knows how to back up their data adequately.


3. They Can Afford to Come to Meetings with Only Their Tablet


This is, without a doubt, a good sign of an advanced mobile-ready user. To begin with, the number of days they may forget their mobile device at home will need to stay in single digits. They also come to meetings ready to take notes on their device and/or connect it to the projector. If it’s an online meeting, sharing their mobile device screen won’t be a limitation.


2. They Get Annoyed When They’re Asked to Use Their PCs


These mobile lovers compare PCs to manual typewriters and, simply put, they don’t like anything unless it’s available on a mobile device. They can’t understand why an app doesn’t exist or can’t be developed for it.


1. They Get Upset When People Give Them Paper Copy of Reports


For the users who have really “drunk the mobile Kool-Aid,” anything on paper represents old times and they don’t care much for nostalgia. They argue with equal fervor that paper is not only bad for business but also bad for the environment.


What other signs do you see mobile-ready users exhibit? (Please, no names or dirty jokes.)


This is the final blog in the Mobile BI Strategy Series. Click here to see the others! 


Connect with me on Twitter at @KaanTurnali and LinkedIn.


This story originally appeared on the SAP Analytics Blog.


Retiring the Term “Anti-Virus”

The term Anti-Virus (AV) is a misnomer, and largely misleading to those who follow the cybersecurity industry but are unaware of the history of this misused term. Over the years it has become an easy target for marketers to twist into a paper tiger in hopes of supporting arguments to sell their wares. It seems to be customary, whenever a vendor comes out with a new host anti-malware product, to claim “AV is dead” and that their product is superior to signature matching. Such practices are simply dated straw-man arguments, as those venerable anti-virus solutions have evolved in scope and methods, greatly expanded their capabilities, and do so much more than just AV.


“The report of my death was an exaggeration” – Mark Twain


I have been hearing “AV is dead” for years! I blogged about it in 2012, and it was already an old story, with origins dating back to at least 2006! The term “AV” was once relevant, but nowadays it is an artifact: a legacy term describing early products and their way of protecting endpoints from malicious code. The term has survived largely due to the marketing value of end-user recognition. People are familiar with the term “AV,” and it is easy to generalize vendors and products under this banner. But the technology and methods have dramatically changed, and solutions no longer exist as they once were. The term references quite old technology, from when host-based anti-malware emerged to detect and clean personal computers of viruses. Back then, most of the threats were viruses, a specific type of malicious code. Those viruses were eventually joined by trojans, bots, macros, worms, rootkits, RATs, click-jackers, keyloggers, malvertising, and other unsavory bits of code that could infect a device. Today we collectively call them ‘malware.’


Back when AV was a relevant term, the tools typically detected viruses by matching them to known samples. These signatures were periodically updated, and the AV tool would be run on a regular cadence to check the system for any matches. Nearly two decades ago, I can remember the weekly virus scan would consume so much of the system resources that the user could not do any work. Scans could take 30 minutes to several hours to complete, depending on the settings and system. Most people would start the scan and go to lunch, or initiate it on their workstation before going home for the evening. Yes, we all had desktops in those days! Neither very efficient nor user-friendly, but then again there were not too many actual viruses to contend with.


Yes, original AV software relied solely on static signatures and scheduled scans, but those days are long gone. Times have changed with the explosive growth, pervasiveness, and specialization of malware. Protection systems run continuously and can receive updates of the latest threats as often as needed throughout the day. Performance has improved and is unnoticeable by users most of the time. The sheer quantity of threats is mesmerizing. The total number of malware samples has steadily grown at a 150% annual rate, and over 400 million unique samples are now known to exist. As a result, security vendors have had to adapt to meet the growing challenge and complexities.
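A back-of-envelope projection shows why that growth rate overwhelms pure signature matching. The sketch below compounds the stated 150% annual rate (a multiply-by-2.5 each year) from today’s 400 million samples; the three-year horizon is an arbitrary assumption for illustration:

```python
# Project the malware corpus forward at a 150% annual growth rate,
# i.e., each year's total is 2.5x the previous year's.
samples = 400_000_000   # ~400 million known unique samples today
growth = 2.5            # 150% annual growth => x2.5 per year

for year in range(1, 4):
    samples = int(samples * growth)
    print(f"year {year}: {samples:,} samples")
# year 1: 1,000,000,000 samples
# year 2: 2,500,000,000 samples
# year 3: 6,250,000,000 samples
```

At billions of signatures, scanning every file against the full library stops being feasible, which is why vendors layer on heuristics, reputation, and cloud analysis as described below.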


Modern client-based anti-malware has evolved to include a number of different processes, tools, and techniques to identify harmful and unwanted activities. It would be unwieldy to rely solely on static signatures for all 400 million pieces of known malware and attempt to scan every file against the library. Computing would grind to a halt. Instead, current products in the industry leverage a host of different methods and resources to protect endpoints, finding a balance between efficacy, speed, cost, manageability, and user impact. They will continue to evolve as they always have over time (signature matching, polymorphism, heuristics, machine-learning attribute inspection, peer consensus, community reporting, cloud analysis, file reputation, sandboxing analysis, exploit detection, signature validation, whitelisting, etc.) to meet emerging challenges and customer expectations. The big players in the industry have the resources to stay at the forefront through organic innovation or acquisitions.


New players in the industry, the wonderful startups, are critically important as they spawn and infuse new ideas which will eventually either fizzle-out or prove their worth and find their way into bigger products as companies acquire the technology.  This is the history we have seen and the future we can predict, as even the newest capabilities will eventually be outmaneuvered by malware writers and someday also viewed with an eye of inadequacy. 


Nowadays, when people talk about AV, what they are really talking about is endpoint anti-malware, which is not going away. There was a push many years ago to abandon client-based anti-malware in favor of network-only controls. The argument was simple: malware and attackers had to go through the network, therefore a focus on filtering bad traffic would solve the problem. Droves of industry pundits, myself included, listed a number of reasons why this poorly conceived stratagem was doomed to fail. Which it did. At the time, those same “AV is dead” arguments were used in an attempt to convince the public and shift users. But the fundamentals of security don’t change due to marketing, and in the end, to be truly effective, a capability must exist on the endpoint to help protect it.


Even recently I see stories in the news talking about the death of AV and how some companies are abandoning AV altogether. In fact, as far as I can tell, they are not forsaking endpoint anti-malware but simply changing endpoint vendors. This may include a shift in the mix of techniques or technologies, but it is still focused on protecting the host from malicious code. Practically speaking, this is not really a huge deal. Change is part of adaptation and optimization, but the truth probably fails to get the desired headlines. Claiming a major transition or the death of a technology is far more attention-grabbing. I see this tactic as a marketing ploy by new product companies and news outlets vying for readers’ eyeballs. It is a pity, as many new innovative companies really have something to add to the market and can stand on their own merits without needing to misrepresent others. After all, the professional security community is working toward the same goal.


So I believe it is time to retire the “AV” terminology.  Instead, let’s be more specific and use host or network based anti-malware or just anti-malware for short.  This might limit the creativity of marketing folks who periodically dust off the “AV is Dead” stories for a few more views.  Shifting away from the “AV” terminology to more accurate depictions of modern anti-malware is really for the best, for everyone.


Sound off and let me know what you think.



Twitter: @Matt_Rosenquist

Intel Social Network: My Previous Posts



ICApp: Intel IT’s Private Cloud Platform as a Service Architecture


When we composed Intel IT’s original strategic plan for using cloud computing more than six years ago, we adopted a strategy of “Growing the Cloud from the Inside Out.” This means that Intel IT would develop an internal private cloud for many applications and eventually move more and more work out to public clouds. A recent Open Data Center Alliance (ODCA) survey shows that deploying applications to a private cloud is a top priority for many organizations – much higher than deploying to public clouds. Since that is an enterprise priority, what should a private cloud look like? In a paper published by the ODCA, Intel’s Cathy Spence unveils ICApp, Intel IT’s private cloud Platform as a Service (PaaS), and details the architecture made available to internal Intel application developers.


ICApp is built on top of a PaaS framework that is in turn based on a Software Defined Infrastructure. It is built on two open source projects: Cloud Foundry and Iron Foundry. The former is built for Linux and the latter is an extension for Windows, which allows a single platform to support multiple development languages. Users can interact with ICApp through a web site, a Command Line Interface (CLI), or a RESTful API. The architecture is shown in Figure 3 of the paper, included to the right.
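As a rough illustration of what the RESTful path might look like, the sketch below constructs (but does not send) a Cloud Foundry-style app deployment request. The host name, endpoint path, and payload fields are hypothetical assumptions modeled on Cloud Foundry conventions; ICApp’s actual internal API is not documented here.

```python
import json
from urllib import request

def build_deploy_request(api_host, app_name, buildpack, instances=2):
    """Construct (but do not send) a POST request describing a new app.

    All endpoint and field names are illustrative, not ICApp's real API.
    """
    payload = {
        "name": app_name,
        "buildpack": buildpack,   # e.g. a Linux buildpack via Cloud Foundry,
                                  # or a Windows one via Iron Foundry
        "instances": instances,
    }
    return request.Request(
        url=f"https://{api_host}/v2/apps",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_deploy_request("icapp.example.intel.com", "supplier-rating", "python_buildpack")
print(req.get_full_url())   # https://icapp.example.intel.com/v2/apps
```

In practice a developer would more likely use the CLI mentioned above, with the REST API serving automation and tooling.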


ICApp is now deployed inside Intel, and a number of applications have been built on top of it. These applications include a supplier rating system, a self-service application monitor, and a development repository. As one of the authors of the original strategy white paper, I find it very gratifying that the plan we originated is still, for the most part, being followed. Also, since I worked on PlanetLab, an influential predecessor of today’s cloud, I find that ICApp’s deployment platform web interface looks like one of PlanetLab’s application deployment tools. You can see that interface in the white paper, which I encourage people to look at for more detail.


The Shopper In Control: What Do Customers Want?

The shopper is now firmly in control. This is the central premise determining the direction of retail as we move into the future. Empowered with information that’s easily accessible on smartphones, tablets and other devices, shoppers can quickly find what they want at a price they know to be fair. If you don’t have it, it’s simple enough for them to give their business to your competitor instead. Retailers, well aware of who’s driving, are working hard to identify not only what shoppers want today but what they will want tomorrow. At Intel, we believe that advances in computing technology have a key role to play in helping retailers identify and satisfy consumers’ as yet unmet needs.


So, first things first. What DO shoppers want?

Increasingly, the answer is customization. And it’s worth pausing here to distinguish between customization and personalization. People often use those terms interchangeably, but they’re not the same thing. Here’s how I see the difference: “Customized” is when a customer controls how the product or service is changed. For example, when you go to a coffee shop and order a double soy latte, extra hot with chocolate sprinkles, what you get is a customized product. You, the customer, have determined the end result. “Personalized” is when the decisions about a product or service or experience are made based on knowledge that the retailer has about you. The retailer uses data to make decisions on your behalf. A prime example (forgive the pun) would be Amazon. When Amazon gives you recommendations, they’re giving you a personalized set of suggestions based on your purchase history. They have made the choices for you. This is a key distinction. With help from Big Data analytics, personalization of the shopping experience has begun in retail—we get personalized offers, reminders, and so forth. But what shoppers say they also want is more customized products.

A Cassandra Report survey1 of 15- to 35-year-olds conducted last year reported that 79% of those surveyed said they would like to buy customized products but are not able to get them today. This is a huge unmet need—one that we in retail are coming closer to being able to meet. Once we get to more widely available automated manufacturing and 3D printing, we will be able to deliver a lot more customized products. The likelihood is that we’re going to see a combination of customized products delivered with personalized retail experiences. Retailers who are ready for this transformation will win.

What’s Ahead?

A preview of what’s ahead can be found in Tokyo subway stations. There, vending machines with cameras inside look at the person standing in front of the machine, figure out that person’s gender and approximate age and, based on that data, highlight the product that the vending machine thinks the person is most likely to want to buy. Of course, the shopper still has a full choice—if they don’t want that product, they can choose something else—but the machine’s smart technology makes the buying process that much easier. There’s less searching, less waiting—less friction.

Shoppers want minimal friction

They don’t want to wait in lines. They don’t want to have to enter their information to buy online. Shoppers will be loyal to your brand until the moment that they find an alternative where there’s one less step or one less click required to do what they want to do. So, removing friction in the system has become a key focus, and technology has a critical role to play.

For examples of retailers who are successfully reducing friction and redefining retail value with customized products and personalized experiences, check back in this space in the coming weeks.



1 Gen Z: Winter/Spring 2015 Cassandra Report. (30 Mar 2015). Retrieved from


Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

* Other names and brands may be claimed as the property of others.

© 2015 Intel Corporation

Read more >

Healthcare Breaches from Insider Risks: Accidents or Workarounds?

In my last blog, Healthcare Breaches from Loss or Theft of Mobile Devices or Media, I looked at breaches resulting from the loss or theft of mobile devices containing sensitive patient data. In this blog I build on that with another very common type of breach: those resulting from healthcare employee accidents or workarounds. Here, a workaround is defined as a well-intended action an employee takes to get their job done, but one that is out of compliance with the healthcare organization's privacy and security policy and adds risk.


The Ponemon 2015 Cost of a Data Breach: United States study reveals that 19 percent of all breaches across industries, including healthcare, are caused by human error. A further 32 percent are caused by system glitches that include both IT and business process failures, in which human error can be a key contributing factor. The total average cost of a single data breach event is $6.53 million, or $398 per patient record (the highest across all industries).


In a previous blog, Is Your Healthcare Security Friendly?, I discussed how poor usability in healthcare solutions, or cumbersome security, can drive the use of workarounds. The problem is exacerbated by the many BYOD options and apps now available. These give well-intentioned healthcare workers amazing new tools to improve the quality and lower the cost of care, but the tools often were not designed for healthcare, add significant additional risk, and in the worst case lead to breaches.


An example of this type of breach is shown in the infographic below. The first failure is ineffective security awareness training for healthcare workers on how to avoid accidents and workarounds. The second failure occurs when a solution used by healthcare workers lacks usability, when security is too cumbersome (for example, too many logins), or when the healthcare IT department is perceived by healthcare workers as too slow or overly restrictive in enabling new technologies. A 2014 HIMSS Analytics study, Curbing Healthcare Workarounds: Driving Efficient Co-Worker Collaboration, reveals that 32 percent of workers use workarounds every day, and a further 25 percent use workarounds sometimes.


Keeping in mind that any one of these could result in a breach, this is a staggering finding; it highlights how common workarounds are and how significant the associated privacy and security risks are. The third failure leading to a breach in this example involves the healthcare worker using a BYOD device, such as a smartphone, with an app that has a cloud back end, in order to collaborate with a healthcare co-worker. For example, a healthcare worker might take a photo of a patient and attempt to share it with a colleague through a file transfer app. In step four, any data the healthcare worker puts into the app, or data collected by the app itself (such as location history), is sent to the app back end, or "side cloud," where in step five it is accessed by unauthorized individuals, leading to a breach.


[Infographic: David_Security sept.png]


Security is complex, and many safeguards are required to effectively mitigate this type of breach. Maturity models have achieved wide adoption and success in healthcare; for example, the HIMSS EMRAM (EMR Adoption Model) has been used by more than 5,300 provider organizations worldwide. Maturity models are a great way to simplify complexity and enable rapid assessment of where you are and what you need to do to improve.


In the infographic above, beneath the sequence of events leading to this type of breach, is a breach-focused maturity model that can be used to rapidly assess your security posture and determine next steps to further reduce residual risk. There are three levels in this maturity model: Baseline includes the orange capabilities, Enhanced adds the yellow capabilities, and Advanced adds the green capabilities. Only safeguards relevant to mitigating this type of breach are colored; the grayed-out blocks, while important in mitigating other types of breaches, do not play a significant role in mitigating breaches from insider accidents or workarounds. There are many risks in healthcare privacy and security, and this model focuses on breaches. Effective security requires a holistic approach spanning administrative, physical, and technical safeguards; this maturity model is focused mostly on the technical ones. Below I briefly review each of the safeguards relevant to this type of breach.


A baseline level of technical safeguards for basic mitigation of healthcare breaches from insider risks requires:


  • User Awareness Training: educates healthcare workers on how to be privacy and security savvy in delivering healthcare, on the risks of accidents and workarounds, and on viable, safer alternatives
  • Device Control: prevents the unauthorized use of removable media, for example USB sticks that workers may attempt to use to move sensitive patient data unsecured
  • Mobile Device Management: keeps mobile devices secure, including BYOD devices used by healthcare workers, addressing risks such as patient data loss or unauthorized access
  • Anti-Malware: detects and remediates malware infections of healthcare worker devices, including malware employees may accidentally encounter on infected websites or apps
  • DLP Discovery: discovers where sensitive patient data is at rest and how it moves over the network, a key first step in an ongoing inventory of the sensitive data you need to protect. This can be used to detect unsecured sensitive data, uncover the accidents or workarounds leading to it, and enable correction before a breach occurs
  • Vulnerability Management and Patching: involves proactively identifying vulnerabilities and patching them to close security holes before they can lead to a breach. This is particularly important for healthcare worker devices used to access the Internet, which are at risk of exposure to malware and attacks
  • Email Gateway: enables you to catch unsecured patient data attached to emails, and defends against malware attachments and phishing attacks
  • Web Gateway: can detect malware from healthcare workers' web browsing and defend against attempted drive-by downloads that may otherwise lead to data loss and breach
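To make the DLP Discovery idea above concrete, here is a minimal, illustrative sketch of a data-at-rest scan. Real DLP products use far richer detection methods; the two patterns here (a US SSN-like format and a hypothetical "MRN" medical record number format) are assumptions chosen purely for the demo.

```python
import re

# Toy detection patterns; production DLP uses validated, far broader rule sets.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # e.g. 123-45-6789
    "mrn": re.compile(r"\bMRN[- ]?\d{6,10}\b"),          # hypothetical record-number format
}

def scan_text(text):
    """Return a list of (pattern_name, matched_string) hits found in the text."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

# Example: scanning a note that should never sit unencrypted on a file share.
sample = "Patient MRN-1234567 has SSN 123-45-6789 on file"
print(scan_text(sample))
```

A real deployment would walk file shares and databases with scans like this, feeding the hits into the inventory of sensitive data described above so that unsecured copies can be corrected before a breach.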


An enhanced level of technical safeguards for further improved mitigation of risk of this type of healthcare breach requires addition of:


  • Secure Remote Administration: enables healthcare IT to efficiently, securely, and remotely administer endpoint devices so they are up to date with the latest patches and safeguards to defend against breaches from accidents and workarounds
  • Endpoint DLP: data loss prevention enforced on endpoint devices to monitor and address risky day-to-day end-user actions that can lead to accidents or be used in workarounds
  • Policy Based File Encryption: can automatically encrypt files containing sensitive healthcare data based on policy, protecting the confidentiality of those files even if they are put at risk in an accident or workaround
  • Network DLP Monitor / Capture: enables healthcare security teams to gather information about data usage patterns, enabling proactive risk identification and better decisions on how to mitigate
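The policy-based file encryption safeguard above hinges on a policy decision: which files get encrypted automatically. The sketch below shows that decision in isolation; the glob patterns and the pluggable `encrypt` callable are assumptions for illustration, and a real deployment would plug in a vetted encryption library and a centrally managed policy.

```python
import fnmatch

# Hypothetical policy: filename patterns considered to hold patient data.
SENSITIVE_GLOBS = ["*.phi", "patient_*.csv"]

def matches_policy(filename):
    """True if the filename matches any sensitive-data pattern in the policy."""
    return any(fnmatch.fnmatch(filename, glob) for glob in SENSITIVE_GLOBS)

def protect(filename, data, encrypt):
    """Encrypt data only if the file matches policy; return (data, was_encrypted)."""
    if matches_policy(filename):
        return encrypt(data), True
    return data, False

# Example with a stand-in "encryptor" (a real one would be an actual cipher).
protected, was_encrypted = protect("patient_2015.csv", b"record", lambda d: b"ENC:" + d)
```

Because the policy is evaluated automatically, a worker who copies a sensitive file in a workaround still gets confidentiality protection without having to remember to encrypt anything.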


An advanced level of security for further mitigation of risk of this type of breach adds:


  • Network DLP Prevention: ensures that sensitive healthcare data leaves the healthcare network only when appropriate, and helps defend against loss of sensitive healthcare information from accidents or workarounds
  • Digital Forensics: enables you to determine, in the event of an accident or workaround, whether a breach actually occurred and, if so, the nature of the breach and the exact scope of data compromised
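The three cumulative levels above lend themselves to a simple self-assessment. The sketch below captures them as a data structure with a helper that reports the highest fully deployed level; the level names and safeguard lists come from this post, while the dictionary layout and the `assess()` helper are assumptions for the sketch.

```python
# Breach-focused maturity model for insider accidents and workarounds,
# as described in this post. Each level is cumulative over the ones below it.
MATURITY_MODEL = {
    "Baseline": [
        "User Awareness Training", "Device Control", "Mobile Device Management",
        "Anti-Malware", "DLP Discovery", "Vulnerability Management and Patching",
        "Email Gateway", "Web Gateway",
    ],
    "Enhanced": [
        "Secure Remote Administration", "Endpoint DLP",
        "Policy Based File Encryption", "Network DLP Monitor / Capture",
    ],
    "Advanced": [
        "Network DLP Prevention", "Digital Forensics",
    ],
}

def assess(deployed):
    """Return the highest level whose safeguards (including all lower
    levels') are fully deployed, or None if even Baseline is incomplete."""
    achieved = None
    required = set()
    for level in ("Baseline", "Enhanced", "Advanced"):
        required |= set(MATURITY_MODEL[level])
        if required <= set(deployed):
            achieved = level
        else:
            break
    return achieved
```

Running `assess()` against an inventory of deployed safeguards gives a quick answer to "where are we?", and the first missing safeguard at the next level is a natural candidate for the following budget cycle.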


Healthcare security budgets are limited, and building security is an ongoing process. The maturity model approach discussed here can be used in a multi-year, incremental approach to improve breach security while staying within annual budget and resource constraints.


What questions on healthcare security do you have?

Read more >