5 Questions for Dr. Charles Macias, Texas Children’s Hospital


Dr. Charles Macias is the Chief Clinical Systems Integration Officer for Texas Children’s Hospital in Houston and a leading proponent of population health analytics. In his practice as an emergency room physician, Macias has seen first-hand the impact of population health and the potential it has to streamline workflows and improve outcomes. We recently sat down with him to discuss his views on population health analytics and where it is headed in the future.


Intel: What is your definition of population health analytics?


Macias: Population health analytics really refers to how an organization, or government, is addressing the healthcare issues of a population at large. While many people think of population health as an entire region, state, or country, there are variable definitions for how we could parse out a segment of a population. In my particular setting, for example, we serve the pediatric population up to age 21. Our definition of population health is really about what’s happening to children.


Intel: In another blog you told the story of a young asthma patient. How does that experience years ago compare to today in terms of analytics?


Macias: From a population health perspective, in 2004, when that story took place, population health really wasn’t about population health; it was about treating single patients. That was a paper-based world. We had to depend on published research to understand something about the populations, and when you depend only on the published evidence, you’re assuming that somewhere out in this periphery of research you’re going to be able to translate it down to a population that looks like your own. So, if that direct connection doesn’t exist, if your population is very different, you’re at odds with what you’re really going to know about how to treat your population. Today, the story is very different. Today, we have electronic medical records. Today, we have an electronic data warehouse. We can store data and information about our populations. What used to take me six months to find out now can take about 24 hours thanks to updates in our enterprise data warehouse. I have the answer at my fingertips.


Intel: Today in your practice, how do analytics impact your workflow?


Macias: Analytics today has a completely different impact than it did on clinicians five years ago, 10 years ago, and certainly 20 years ago. Number one, it’s given us the understanding that the 800,000 medical articles out there are essentially non-digestible bits of information. They can systematically be filtered into clinical standards that can be placed into the analytics and matched against the analytics to say this population parallels what this evidence is telling us and, therefore, this clinical standard should really interdigitate with that work and we should understand how that population fits in with that clinical standard. So, now we have the ability to use best practice alerts and health maintenance reminders, and to create long-term plans of care embedded directly within the medical record.


Intel: What’s your vision for the future of analytics?


Macias: My vision for analytics is in the world of decision support. It’s really about making clinicians’ workflow much smarter, quicker, and easier. We already know that when we start a day, we have so many patients to see. In my setting I know I’m going to be overwhelmed with a number of patients in the emergency department. If there are ways to translate the work that’s ongoing, the workflow within the EMR, to the kind of decision support that’s going to make prediction rules and strategies much easier, that’s going to identify the patients at risk for bad outcomes and link them to the right strategies that will help obviate the need for much more escalated care in the future. That’s a win/win. As we begin to place resources against the value that’s given, I see a lot better alignment with where our healthcare infrastructure supports those strategies.


Intel: How do you work with Health Catalyst to get the information you need?


Macias: The role that Health Catalyst has had in our data governance has been critical to evolving to where we are as an organization. We have learned from how we look at populations of care and how we look at our approaches to merging the science of care with operational care process teams. Predictive analytics comes from how we house data in our enterprise data warehouse. It really goes beyond the EMR’s capability of doing bedside analytics; it’s about the bigger picture of integrating all of those critical domains to effectively improve outcomes. It would not have been possible without our partnership with Health Catalyst.


Genomic Sequencing is Coming to a Clinical Workflow Near You. Are You Ready?

Genome sequencing has moved from bench research into clinical care—and it’s producing medical miracles. Now, it’s time to make genome sequencing an everyday part of our clinical workflows. That day is closer than you might think.


Those were the messages James Lowey shared at HIMSS 2015 in Chicago. As VP of technology at TGen—the nonprofit Translational Genomics Research Institute—James is at the forefront of efforts to bring next-generation genomic sequencing into the clinical mainstream and use it to transform the way we diagnose, treat, and prevent illness.


At the HIMSS session, James described the broad range of areas of clinical interest for genomic data. He also discussed the compute infrastructure necessary to provide cost-effective performance and scalability for large-scale production processing of genomic data.


Recently, our healthcare team interviewed James to learn more about TGen’s strategy. In this case study, James tells us where he thinks we’re heading—and how fast we’re getting there. He also highlights social and policy issues that must be addressed, and points out the need for ongoing research to establish evidence-based clinical protocols.




I hope you’ll read the TGen case study and join the conversation. Is your organization incorporating genomic analysis into clinical workflows? If so, can you share any advice or best practices? If not, how close are you? What are your next steps? What’s holding you back? Let me know in the comments section. Or become a member of the Intel Health and Life Sciences Community.


Learn more about Intel® Health & Life Sciences.


Download the session handout from HIMSS 2015, Using Genomic Data to Make a Difference in Clinical Care.


Stay in touch: @IntelHealth, @hankinjoan


New Intel Visual Compute Accelerator Makes Its Debut

For service providers, the rapid momentum of video streaming is both a plus and a minus.  On the plus side, millions of consumers are now looking to service providers to deliver content they used to access through other channels. That’s all good news for the business model and the bottom line.


On the minus side, service providers now have to meet the growing demands of bandwidth-hungry video streams, including new 4K media streaming formats. As I mentioned in a recent blog post, Video Processing Doesn’t Have To Kill the Data Center, today’s 4K streams come with a mind-boggling 8 million pixels per frame. And if you think today’s video workloads are bad, just stay tuned for more. Within five years, video will consume 80 percent of the world’s Internet bandwidth.
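The pixel figure above is easy to sanity-check. Here is a minimal back-of-the-envelope sketch, assuming 4K UHD resolution (3840 x 2160), 60 frames per second, and 24-bit color before compression and chroma subsampling; those parameters are illustrative choices, not figures from the post:

```python
# Arithmetic behind the "8 million pixels per frame" claim for 4K UHD,
# plus the raw (uncompressed) data rate such a stream would imply.

WIDTH, HEIGHT = 3840, 2160   # 4K UHD frame dimensions
FPS = 60                     # assumed frame rate
BITS_PER_PIXEL = 24          # assumed 8-bit RGB, pre-compression

pixels_per_frame = WIDTH * HEIGHT
raw_gbps = pixels_per_frame * BITS_PER_PIXEL * FPS / 1e9

print(f"Pixels per frame: {pixels_per_frame:,}")     # 8,294,400
print(f"Uncompressed rate: {raw_gbps:.1f} Gbit/s")   # ~11.9 Gbit/s
```

The roughly 12 Gbit/s raw rate is exactly why efficient transcoding (HEVC compresses this by two orders of magnitude) is the choke point for service providers.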


While meeting today’s growing bandwidth demands, service providers simultaneously have to deal with an ever-larger range of end-user devices with wide variances in their bit rates and bandwidth requirements. When customers order up videos, service providers have to be poised to deliver the goods in many different ways, which forces them to store multiple copies of content—driving up storage costs.


At Intel, we are working to help service providers solve the challenges of the minus side of this equation so they can gain greater benefits from the plus side. To that end, we are rolling out a new processing solution that promises to accelerate video transcoding workloads while helping service providers contain their total cost of ownership.


This solution, announced today at the IBC 2015 conference in Amsterdam, is called the Intel® Visual Compute Accelerator. It’s an Intel® Xeon® processor E3 based media processing PCI Express* (PCIe*) add-in card that brings media and graphics capabilities into Intel® Xeon® processor E5 based servers. We’re talking about 4K Ultra High Definition (UHD) media processing capabilities.


A few specifics: The card contains three Intel Xeon processor E3 v4 CPUs, each of which includes the Intel® Iris™ Pro graphics P6300 GPU. Placing these CPUs on a Gen3 x16 PCIe card provides high throughput and low latency when moving data to and from the card.
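That Gen3 x16 link implies a concrete bandwidth ceiling. A rough sketch, using the standard PCIe 3.0 parameters (8 GT/s per lane, 128b/130b line coding) rather than any Intel-published number for this particular card:

```python
# Theoretical one-way throughput of a PCIe Gen3 x16 link.

LANES = 16
GIGATRANSFERS_PER_S = 8.0        # PCIe Gen3 raw rate per lane
ENCODING_EFFICIENCY = 128 / 130  # Gen3 uses 128b/130b line encoding

# Each transfer carries one raw bit per lane; divide by 8 for bytes.
gb_per_s = LANES * GIGATRANSFERS_PER_S * ENCODING_EFFICIENCY / 8
print(f"Peak one-way bandwidth: {gb_per_s:.2f} GB/s")  # ~15.75 GB/s
```

In practice, protocol overhead reduces the usable figure, but roughly 16 GB/s each way is ample headroom for shuttling 4K frames between the host E5 server and the E3 CPUs on the card.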




The Intel Visual Compute Accelerator is designed for cloud and communications service providers implementing media processing solutions for High Efficiency Video Coding (HEVC), which is expected to be required for 4K/UHD video, and Advanced Video Coding (AVC), whether in the cloud or in their networks.


We expect that the Intel Visual Compute Accelerator will provide customers with excellent TCO when looking at cost per watt per transcode. Having both a CPU and a GPU on the same chip (as compared to just a GPU) enables ISVs to build solutions that improve software quality while accelerating high-end media transcoding workloads.


If you happen to be at IBC 2015 this week, you can get a firsthand look at the power of the Intel Visual Compute Accelerator in the Intel booth – hall 4, stand B72. We are showing a media processing software solution from Vantrix*, one of our ISV partners, that is running inside a dual-core Intel Xeon processor E5 based Intel® Server System with the Intel Visual Compute Accelerator card installed. The demonstration shows the Intel Visual Compute Accelerator transcoding using both the HEVC and AVC codecs at different bit-rates intended for different devices and networks.


Vantrix is just one of several Intel partners who are building solutions around the Intel Visual Compute Accelerator. Other ISVs who have their solutions running on the Intel Visual Compute Accelerator include ATEME, Ittiam, Vanguard Video* and Haivision*—and you can expect more names to be added to this list soon.


Our hardware partners are also jumping on board. Dell, Supermicro, and Advantech* are among the OEMs that plan to integrate the Intel Visual Compute Accelerator into their server product lines.


The ecosystem support for the Intel VCA signals that industry demand for solutions to address media workloads is high. Intel is working to meet those needs with the Intel Xeon processor E3 v4 with integrated Intel Iris Pro graphics. Partners including HP, Supermicro, Kontron, and Quanta have all released Xeon processor E3 solutions for dense environments, while Artseyn* also offers a PCI Express based accelerator add-in card similar to the Intel VCA. These Xeon processor E3 solutions all offer improved TCO and competitive performance across a variety of workloads.


To see the Intel Visual Compute Accelerator demo at IBC 2015, stop into the Intel booth, No. 4B72. Or to learn more about the card right now, visit


Government’s Shifting Role to Protect Citizens in the Digital World

Governments are having to catch up with the digital revolution to satisfy their role in providing for the common defense. The world is changing. Longstanding definitions of responsibilities, rules, and jurisdictions have not kept up with the implementation of technology. One of the traditional roles of government is to provide defense of its citizens and their property. Constitutions, laws, and courts define these roles and place boundaries limiting them. With the rapid onset of digital technology, people are communicating more and in new ways, creating massive amounts of information that is being collected and aggregated. Digital assets and data are themselves becoming valuable. Traditional policies and controls are not suited or sufficient to protect citizens’ information. Governments are reacting to address the gaps. This adaptation is pushing the boundaries of scope and in some cases redefining the limitations and precedents derived from an analog era. Flexing to encompass the digital domain within the scope of protection is necessary to align with the expectations of the people.


Such change, however, is slow. One of the loudest criticisms is the speed with which governments can adapt to sufficiently protect their citizens. Realistically, it must be slow, as boundaries are tested and redrawn. In representative rule, there exists a balance between the rights of the citizen and the powers of the government. Moving too quickly can violate this balance to the detriment of liberty and result in unpleasant outcomes. Move too slowly and masses become victimized, building outcry and dissatisfaction with the state of security. Bureaucracy is the gatekeeper that keeps the pendulum from swinging too fast.


The only thing that saves us from the bureaucracy is its inefficiency – Eugene McCarthy       


The writing is on the wall. Citizens expect government to play a more active role in protecting their digital assets and privacy. Governments are responding. Change is coming across the industry, and it will be fueled by litigation and eventually regulatory penalties. Every company, regardless of type, will need to pay much more attention to its cybersecurity.


There are regulatory standards and oversight roles being defined as part of the legal structure. Government agencies are claiming and asserting more powers to establish and enforce cybersecurity standards. Recently, the U.S. Court of Appeals for the Third Circuit upheld the U.S. Federal Trade Commission’s action against companies that had data breaches and reaffirmed the FTC’s authority to hold companies accountable for failing to safeguard consumer data. The judicial branch interpreted the law in a way that supports the FTC’s assertion of its role in the digital age.


Litigation precedents, which act as guiding frameworks, are also being challenged and adapted to influence responsibility and accountability for customer data. The long-term ramifications of potential misuse of digital assets and personal data are being considered and weighed toward the benefit of consumers. In a recent case, defendants argued to dismiss a class action but were unsuccessful, as the court cited a failure in the “duty to maintain adequate security” which justified the action to continue. The defendants argued that the plaintiffs suffered no actual injury, but the court rejected those arguments, stating the loss of sensitive personal data was “…sufficient to establish a credible threat of real and immediate harm, or certainly impending injury.”


In a separate case, the Seventh Circuit and the Ninth Circuit concluded that victims have a legal right to file a lawsuit over the long-term consequences of a data breach. In addition to reimbursement for fraudulent charges, the courts held that even those in the class-action lawsuit who did not experience near-term damages face a likelihood of fraud in the future. The court stated “customers should not have to wait until hackers commit identity theft or credit-card fraud in order to give the class standing.” Experts believe this shift in litigation precedent is likely to lead to an increase in data breach class actions in cases involving hacking.


This is the macro trend I see. Governments are stepping up to fill the void where protective oversight does not exist or citizens are not empowered to hold accountable those who have been negligent in protecting their data. The digital realm has grown so rapidly and encompasses citizens’ lives so deeply that governments are accepting they need to adapt legal structures to protect their populace, but they are struggling with how to make it a reality. We will see more of this redefinition across governmental structures worldwide over the next several years as a legal path is forged and tempered.

Twitter: @Matt_Rosenquist

Intel Network: My Previous Posts



Top 10 Signs Your Users Are Mobile Ready


Whether you’re planning a project for a mobile business app or developing a mobile business intelligence (BI) strategy, it’s critical to gauge your users’ overall mobile readiness. Even though sales of mobile devices continue to increase, some mobile users still cling to PC-era habits.


Yes, the mobile-savvy millennial generation is taking the workforce by storm, but they don’t necessarily represent the largest portion of business users. Mobile-ready users, on the other hand, will display at least some of the following characteristics.


DISCLAIMER: All characters appearing in this blog post are fictitious. Any resemblance to real persons, living or dead, is purely coincidental.


10. They Own Smartphones and More

The limited real estate on the smartphone makes the tablet a better candidate for many business applications, especially in mobile BI. Therefore, mobile-ready users may also own a tablet provided by their employer, as well as accessories that improve usability for tasks such as data entry.


9. They Remember the Password to Unlock Their Screen or App


As funny as this may sound, it usually is a good test of whether the device is being used frequently. Many businesses use device management systems to prevent unauthorized access to enterprise apps and/or corporate data on mobile devices. Therefore, the password to unlock the screen won’t be the only password they will need to remember. Mobile-ready users employ methods to remember different passwords similar to those they use on their PCs.


8. They Use Their Devices as More than a Paperweight

Clearly, the decision to purchase tablets or smartphones is a considerable investment for any business. Though mobile devices may be fun to watch movies on, using them to their maximum capacity results not only in higher returns on investment (ROI), but also in new opportunities for growth and profitability.


7. They Have Apps Other than Angry Birds Installed


Apps are a good indicator of the basic usage. Whether the device is provided by the business or it’s part of a bring-your-own-device (BYOD) arrangement, there’s nothing wrong with having more personal apps installed than business apps. However, it’s important that required business apps for the user’s role are installed, working correctly, and being used. Delivering these devices to users pre-configured or completing set up remotely will help considerably.


6. They Own Multiple Chargers (and Go Nowhere Without One)


Although mobile device batteries have improved significantly over the years, the more the device is used, the more quickly the battery will need a charge – especially with battery-draining business apps (watching movies doesn’t count). A mobile-ready user who heavily depends on his/her device will typically have several chargers placed in strategic locations such as the car, briefcase, or office. If they stick to a single charger, as some do, they won’t travel anywhere without it.


5. They Meticulously Keep Their Apps Up-To-Date


This is yet another indicator of usage. Business people are very busy – especially road warriors – and may not have a chance to constantly keep an eye on updates, opting instead for the “Update All” option. However, if the device is not being used frequently, this is one of many neglected areas. As a result, an app may not work because it’s an older version. The idea is not that users should update every hour (or many times a day), but that they do so at least once a week.


4. They Know How to Back Up Their Device

Although some businesses make the option of backing up corporate apps and data easier, many mobile users may be left on their own to deal with this task. It gets even more complicated in scenarios where the employee uses the device both for personal and business reasons. But the savvy user knows how to back up their data adequately.


3. They Can Afford to Come to Meetings with Only Their Tablet


This is, without a doubt, a good sign of an advanced mobile-ready user. To begin with, the number of days they may forget their mobile device at home will need to stay in single digits. They also come to meetings ready to take notes on their device and/or connect it to the projector. If it’s an online meeting, sharing their mobile device screen won’t be a limitation.


2. They Get Annoyed When They’re Asked to Use Their PCs


These mobile lovers compare PCs to manual typewriters and, simply put, they don’t like anything unless it’s available on a mobile device. They can’t understand why an app doesn’t exist or can’t be developed for it.


1. They Get Upset When People Give Them Paper Copy of Reports


For the users who have really “drunk the mobile Kool-Aid,” anything on paper represents old times and they don’t care much for nostalgia. They argue with equal fervor that paper is not only bad for business but also bad for the environment.


What other signs do you see mobile-ready users exhibit? (Please, no names or dirty jokes.)


This is the final blog in the Mobile BI Strategy Series. Click here to see the others! 


Connect with me on Twitter at @KaanTurnali and LinkedIn.


This story originally appeared on the SAP Analytics Blog.


Retiring the Term “Anti-Virus”

The term anti-virus, or AV, is a misnomer, and it is largely misleading to those who follow the cybersecurity industry but are unaware of the history of this misused term. Over the years it has become an easy target for marketers to twist into a paper tiger, in hopes of supporting arguments to sell their wares. It seems to be customary, whenever a vendor comes out with a new host anti-malware product, to claim “AV is dead” and that the new product is superior to signature matching. Such practices are simply dated straw-man arguments, as those venerable anti-virus solutions have evolved in scope and methods, greatly expanded their capabilities, and do so much more than just AV.


“The report of my death was an exaggeration” – Mark Twain


I have been hearing “AV is dead” for years! I blogged about it in 2012, and it was already an old story then, with origins dating back to at least 2006. The term “AV” was once relevant, but nowadays it is an artifact: a legacy term describing early products and their way of protecting endpoints from malicious code. The term has survived largely due to the marketing value of end-user recognition. People are familiar with the term “AV,” and it is easy to generalize vendors and products under this banner. But the technology and methods have dramatically changed, and solutions no longer exist as they once were. The term references quite old technology from when host-based anti-malware first emerged to detect viruses on personal computers and clean them. Back then, most of the threats were viruses, a specific type of malicious code. Those viruses were eventually joined by trojans, bots, macros, worms, rootkits, RATs, click-jackers, keyloggers, malvertising, and other unsavory bits of code that could infect a device. Today we collectively call them ‘malware’.


Back when AV was a relevant term, the tools typically detected viruses by matching them to known samples. These signatures were periodically updated, and the AV tool would be run on a regular cadence to check the system for any matches. Nearly two decades ago, I can remember the weekly virus scan consuming so much of the system’s resources that the user could not do any work. Scans could take 30 minutes to several hours to complete, depending on the settings and system. Most people would start the scan and go to lunch, or initiate it on their workstation before going home for the evening. Yes, we all had desktops in those days! Neither efficient nor user-friendly, but then again there were not too many actual viruses to contend with.


Yes, original AV software relied solely on static signatures and scheduled scans, but those days are long gone. Times have changed with the explosive growth, pervasiveness, and specialization of malware. Protection systems run continuously and can receive updates on the latest threats as often as needed throughout the day. Performance has improved to the point that users rarely notice it. The sheer quantity of threats is mesmerizing. The total number of malware samples has steadily grown at a 150 percent annual rate, and over 400 million unique samples are now known to exist. As a result, security vendors have had to adapt to meet the growing challenge and complexity.
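Growth like that compounds quickly. A small sketch of the arithmetic, assuming “a 150% annual rate” means the total multiplies by 2.5 each year, and assuming a hypothetical starting corpus of 10 million samples (both are my interpretive assumptions, not figures from the post):

```python
# How fast a malware corpus compounds at a given annual growth rate.

def years_to_reach(start, target, annual_growth=1.5):
    """Count full years of compounding until `start` reaches `target`.

    annual_growth=1.5 means the corpus grows by 150% per year,
    i.e., it multiplies by 2.5 annually.
    """
    factor, years, total = 1 + annual_growth, 0, start
    while total < target:
        total *= factor
        years += 1
    return years

# From a hypothetical 10 million samples to the ~400 million cited above:
print(years_to_reach(10e6, 400e6))  # 5 (years at 2.5x per year)
```

The point of the sketch is the shape of the curve, not the exact inputs: at this rate, signature libraries roughly double and a half every year, which is why scanning every file against every known sample stopped being viable.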


Modern client-based anti-malware has evolved to include a number of different processes, tools, and techniques to identify harmful and unwanted activities. It would be unwieldy to rely solely on static signatures of all 400 million pieces of known malware and attempt to scan every file against the library; computing would grind to a halt. Instead, current products in the industry leverage a host of different methods and resources to protect endpoints, finding a balance between efficacy, speed, cost, manageability, and user impact. They will continue to evolve as they always have (signature matching, polymorphism, heuristics, machine-learning attribute inspection, peer consensus, community reporting, cloud analysis, file reputation, sandbox analysis, exploit detection, signature validation, whitelisting, etc.) to meet emerging challenges and customer expectations. The big players in the industry have the resources to stay at the forefront through organic innovation or acquisitions.


New players in the industry, the wonderful startups, are critically important, as they spawn and infuse new ideas which will eventually either fizzle out or prove their worth and find their way into bigger products as companies acquire the technology. This is the history we have seen and the future we can predict, as even the newest capabilities will eventually be outmaneuvered by malware writers and someday also be viewed with an eye of inadequacy.


Nowadays, when people talk about AV, what they are really talking about is endpoint anti-malware, which is not going away. There was a push many years ago to actually abandon client-based anti-malware in favor of network-only controls. The argument was simple: malware and attackers had to go through the network, therefore a focus on filtering bad traffic would solve the problem. Droves of industry pundits, myself included, listed a number of reasons why this poorly conceived stratagem was doomed to fail. Which it did. At the time, those same “AV is dead” arguments were used in an attempt to convince the public and shift users. But the fundamentals of security don’t change due to marketing, and in the end, to be truly effective, a capability must exist on the endpoint to help protect it.


Even recently I see stories in the news talking about the death of AV and how some companies are abandoning AV altogether. In fact, as far as I can tell, they are not forsaking endpoint anti-malware but simply changing endpoint vendors. This may include a shift in the mix of techniques or technologies, but it is still focused on protecting the host from malicious code. Practically speaking, this is not a huge deal. Change is part of adaptation and optimization, but the truth probably fails to get the desired headlines. Claiming a major transition or the death of a technology is far more attention-grabbing. I see this tactic as a marketing ploy by new product companies and news outlets vying for readers’ eyeballs. It is a pity, as many new innovative companies really have something to add to the market and can stand on their own merits without needing to misrepresent others. After all, the professional security community is working toward the same goal.


So I believe it is time to retire the “AV” terminology. Instead, let’s be more specific and use host-based or network-based anti-malware, or just anti-malware for short. This might limit the creativity of marketing folks who periodically dust off the “AV is dead” stories for a few more views, but shifting away from the “AV” terminology to more accurate depictions of modern anti-malware is really for the best, for everyone.


Sound off and let me know what you think.



Twitter: @Matt_Rosenquist

Intel Social Network: My Previous Posts



ICApp: Intel IT’s Private Cloud Platform as a Service Architecture


When we composed Intel IT’s original strategic plan for cloud computing more than six years ago, we adopted a strategy of “Growing the Cloud from the Inside Out.” This means that Intel IT would develop an internal private cloud for many applications and eventually move more and more work out to public clouds. A recent Open Data Center Alliance (ODCA) survey shows that deploying applications to a private cloud is a top priority for many organizations – much higher than deploying to public clouds. Since that is an enterprise priority, what should a private cloud look like? In a paper published by the ODCA, Intel’s Cathy Spence unveils ICApp, Intel IT’s private cloud Platform as a Service (PaaS), and details the architecture that is made available to internal Intel application developers.


ICApp is built on top of a PaaS framework that is in turn based on a Software Defined Infrastructure. It is built on two open source projects: Cloud Foundry and Iron Foundry. The former is built for Linux and the latter is an extension for Windows, which allows a single platform to support multiple development languages. Users can interact with ICApp through a web site, a Command Line Interface (CLI), or a RESTful API. The architecture is shown in Figure 3 of the paper, included to the right.
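To make the RESTful option concrete, here is a hypothetical sketch of a thin client for a Cloud Foundry-style API. The base URL, endpoint path, token, and payload shape are illustrative assumptions modeled loosely on public Cloud Foundry v2 conventions, not ICApp’s documented interface:

```python
# Hypothetical client sketch for driving a Cloud Foundry-style PaaS
# (such as ICApp) through its RESTful API. It builds the request rather
# than sending it, so the shape of the call is easy to inspect.
import json

class PaasClient:
    def __init__(self, base_url, token):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def _request(self, method, path, body=None):
        """Assemble (method, url, headers, payload) for an API call."""
        headers = {"Authorization": f"bearer {self.token}",
                   "Content-Type": "application/json"}
        payload = json.dumps(body) if body is not None else None
        return method, f"{self.base_url}{path}", headers, payload

    def create_app(self, name, buildpack):
        # POST /v2/apps mirrors the Cloud Foundry v2 style of app creation.
        return self._request("POST", "/v2/apps",
                             {"name": name, "buildpack": buildpack})

client = PaasClient("https://icapp.example.intel.com/api", "TOKEN")
method, url, headers, payload = client.create_app("supplier-rating", "python")
print(method, url)  # POST https://icapp.example.intel.com/api/v2/apps
```

In practice most developers would use the web UI or the `cf`-style CLI instead; the value of the REST path is automation, such as wiring deployments into a build pipeline.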


ICApp is now deployed inside Intel, and a number of applications have been built on top of it. These applications include a supplier rating system, a self-service application monitor, and a development repository. As one of the authors of the original strategy white paper, I find it very gratifying that the plan we originated is still largely being followed. Also, since I worked on PlanetLab, an influential predecessor of today’s cloud, I find that ICApp’s deployment platform web interface looks like one of PlanetLab’s application deployment tools. You can see that interface in the white paper, which I encourage people to look at for more detail.


Healthcare Breaches from Insider Risks: Accidents or Workarounds?

In my last blog, Healthcare Breaches from Loss or Theft of Mobile Devices or Media, I looked at breaches resulting from loss or theft of mobile devices containing sensitive patient data. In this blog, I build on that with another very common type of breach, one that results from healthcare employee accidents or workarounds. In this context, a workaround is defined as a well-intended action an employee takes to get their job done, but that is out of compliance with the privacy and security policy of the healthcare organization and adds risk.


The Ponemon 2015 Cost of a Data Breach: United States study reveals that 19 percent of all breaches across industries, including healthcare, are caused by human error. A further 32 percent are caused by system glitches that include both IT and business process failures, in which human error can be a key contributing factor. The total average cost of a single data breach event is $6.53 million, or $398 per patient record (the highest across all industries).
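The quoted figures make for a quick exposure estimator. A minimal sketch using the Ponemon numbers above; note that the implied record count is a derived figure of mine, not one the study reports directly:

```python
# Rough breach-cost arithmetic from the Ponemon 2015 figures cited above.

TOTAL_COST = 6_530_000   # average total cost of one breach event (USD)
COST_PER_RECORD = 398    # per patient record (highest across industries)

# Implied average number of records in a breach of average cost.
implied_records = TOTAL_COST / COST_PER_RECORD
print(f"Implied records per average breach: {implied_records:,.0f}")  # ~16,407

def breach_exposure(records, per_record=COST_PER_RECORD):
    """Estimate the total cost of breaching `records` patient records."""
    return records * per_record

print(f"Cost of a 50,000-record breach: ${breach_exposure(50_000):,}")
```

Even this crude linear model makes the stakes plain: a mid-sized hospital system holding hundreds of thousands of records carries eight-figure breach exposure.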


In a previous blog, Is Your Healthcare Security Friendly?, I discussed how lacking usability in healthcare solutions, or cumbersome security, can drive the use of workarounds. Workarounds are further exacerbated by the many BYOD options and apps now available, which give well-intentioned healthcare workers amazing new tools to improve the quality and lower the cost of care. But these tools often were not designed for healthcare; they add significant additional risk and in the worst case lead to breaches.


An example of this type of breach is shown in the infographic below. The first failure is ineffective security awareness training for healthcare workers on how to avoid accidents and workarounds. The second failure is a solution whose usability is lacking or whose security is too cumbersome (for example, too many logins), or a healthcare IT department that workers perceive as too slow or overly restrictive in enabling new technologies. A 2014 HIMSS Analytics study, Curbing Healthcare Workarounds: Driving Efficient Co-Worker Collaboration, reveals that 32 percent of workers use workarounds every day, and 25 percent use them sometimes.


Keeping in mind that any one of these could result in a breach, this is a staggering finding; it highlights how common workarounds are and how significant the associated privacy and security risks are. The third failure leading to breach in this example involves the healthcare worker using a BYOD device, such as a smartphone, with an app that has a cloud backend in order to collaborate with a co-worker. An example could be a healthcare worker taking a photo of a patient and attempting to use a file transfer app to share it with a colleague. In step four, any data the worker puts into the app, or data collected by the app itself such as location history, is sent to the app backend or “side cloud,” where in step five it is accessed by unauthorized individuals, leading to a breach.


[Infographic: breach sequence and breach-focused maturity model]


Security is complex, and there are many safeguards required to effectively mitigate this type of breach. Maturity models have achieved wide adoption and success in healthcare, for example the HIMSS EMRAM (EMR Adoption Model) has been used by 5300+ provider organizations worldwide. Maturity models are a great way to simplify complexity and enable rapid assessment of where you are and what you need to do to improve.


In the infographic above, beneath the sequence of events leading to this type of breach, is a breach-focused maturity model that can be used to rapidly assess your security posture and determine next steps to further reduce residual risk. There are three levels in this maturity model: Baseline includes the orange capabilities, Enhanced adds the yellow capabilities, and Advanced adds the green capabilities. Only safeguards relevant to mitigating this type of breach are colored. The grayed-out blocks, while important in mitigating other types of breaches, do not play a significant role in mitigating breaches from insider accidents or workarounds. There are many risks in healthcare privacy and security; this model is focused on breaches. A holistic approach is required for effective security, including administrative, physical, and technical safeguards, and this maturity model is focused mostly on technical safeguards. Below I briefly review each of the safeguards relevant to this type of breach.


A baseline level of technical safeguards for basic mitigation of healthcare breaches from insider risks requires:


  • User Awareness Training: educates healthcare workers on how to be privacy and security savvy in delivering healthcare, on the risks of accidents and workarounds, and on viable safer alternatives
  • Device Control: prevents the unauthorized use of removable media, for example USB sticks that workers may attempt to use to move sensitive patient data unsecured
  • Mobile Device Management: keeps mobile devices secure, including BYOD devices used by healthcare workers, addressing risks including patient data loss or unauthorized access
  • Anti-Malware: detects and remediates malware infections of healthcare worker devices, including malware employees may accidentally encounter on infected websites or apps
  • DLP Discovery: discovers where sensitive patient data is at rest and how it moves over the network, a key first step in an ongoing inventory of sensitive data you need to protect. This can be used to detect unsecured sensitive data and uncover accidents or workarounds leading to it, enabling correction before a breach
  • Vulnerability Management and Patching: involves proactively identifying vulnerabilities and patching them to close security holes before they can lead to a breach. This is particularly important with healthcare worker devices used to access the Internet and at risk of being exposed to malware and attacks
  • Email Gateway: enables you to catch unsecured patient data attached to emails, and defends against malware attachments and phishing attacks
  • Web Gateway: can detect malware from healthcare workers’ web browsing and defend against attempted drive-by downloads that may otherwise lead to data loss and breach


An enhanced level of technical safeguards for further improved mitigation of risk of this type of healthcare breach requires addition of:


  • Secure Remote Administration: enables healthcare IT to efficiently, securely and remotely administer endpoint devices so they are up to date with the latest patches and safeguards to defend against breaches from accidents and workarounds
  • Endpoint DLP: Data Loss Prevention enforced on endpoint devices to monitor and address day-to-day end-user risky actions that can lead to accidents, or be used in workarounds
  • Policy Based File Encryption: can automatically encrypt files containing sensitive healthcare data based on policy and protect the confidentiality of those files even if put at risk in an accident or workaround
  • Network DLP Monitor / Capture: enables healthcare security to gather information about data usage patterns, enabling proactive risk identification, and better decisions on how to mitigate


An advanced level of security for further mitigation of risk of this type of breach adds:


  • Network DLP Prevention: ensures that sensitive healthcare data only leaves the healthcare network when appropriate, and helps defend against loss of sensitive healthcare information from accidents or workarounds
  • Digital Forensics: enables you to determine in the event of an accident or workaround whether a breach actually occurred, and if so the nature of the breach, and exact scope of data compromised
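The cumulative structure of the model (each level adds capabilities to the one below it) can be sketched as a small data model. The level and safeguard names come from the lists above; the assessment logic is my own illustrative addition, not something from the original post.

```python
# Illustrative encoding of the breach-focused maturity model described
# above. Levels are cumulative: Enhanced includes all of Baseline, and
# Advanced includes both lower levels.
MATURITY_MODEL = {
    "Baseline": [
        "User Awareness Training", "Device Control",
        "Mobile Device Management", "Anti-Malware", "DLP Discovery",
        "Vulnerability Management and Patching", "Email Gateway",
        "Web Gateway",
    ],
    "Enhanced": [
        "Secure Remote Administration", "Endpoint DLP",
        "Policy Based File Encryption", "Network DLP Monitor / Capture",
    ],
    "Advanced": [
        "Network DLP Prevention", "Digital Forensics",
    ],
}
LEVELS = ["Baseline", "Enhanced", "Advanced"]

def required_safeguards(level):
    """All safeguards required at `level`, including every lower level."""
    idx = LEVELS.index(level)
    return [s for lvl in LEVELS[: idx + 1] for s in MATURITY_MODEL[lvl]]

def assess(level, deployed):
    """Return the safeguards still missing to reach the target level."""
    deployed = set(deployed)
    return [s for s in required_safeguards(level) if s not in deployed]

# Example: an organization with only a few baseline controls in place
gaps = assess("Enhanced", {"Anti-Malware", "Email Gateway", "Web Gateway"})
print(f"{len(gaps)} safeguards missing for Enhanced")
```

A gap list like this maps naturally onto the multi-year, budget-constrained rollout discussed at the end of the post: close the Baseline gaps first, then budget the Enhanced and Advanced additions in later years.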


Healthcare security budgets are limited, and building security is an ongoing process. The maturity model approach discussed here can be used in a multi-year, incremental approach to improve breach security while keeping within annual budget and resource constraints.


What questions on healthcare security do you have?

Read more >

Better Together: Balanced System Performance Through Network Innovation

Dawn Moore, GM Networking Division


Data center application performance today depends on balanced system performance: a combination of CPU power, faster storage, and high-throughput networks. Upgrading just one of these elements will not maximize your data center performance.


This wasn’t always the case. In years past, some IT managers could postpone network upgrades because slow storage would limit overall system performance. But now, with much faster solid-state drives (SSDs), the performance bottleneck has shifted from the hard drive to the network.


This means that in today’s IT environment—with hyperscale data centers and virtualized servers—it’s crucial that upgrading to the latest technology, like faster SSDs or 10/40GbE, be viewed from a comprehensive systems viewpoint.


Certainly, upgrading to a server with a new Intel® Xeon® Processor E5-2600 v3 CPU will provide improved performance. Similarly, swapping out a hard drive for an SSD or upgrading from 1GbE to 10GbE will improve performance.


Two recent whitepapers highlight how maximum performance depends on the interconnected nature of these systems. If the entire system isn’t upgraded, then the data center doesn’t get the best return from a new server investment.


The first paper* discusses the improvements in raw performance that can be seen in a complete upgrade. For example, when an older server with SATA SSDs and a single 10GbE NIC was replaced with a new Intel® Xeon® processor E5-2695 v3 based server, a PCIe SSD, and four 10GbE ports, the new system delivered 54% more transactions per minute and 42.4% more throughput, as well as much faster response times in these tests.


What can be done with this raw performance increase? The other whitepaper** answers that question by researching the increase in the number of virtual machines supported by an upgraded system.


With SDN in the data center, data center managers can facilitate the ramp up of new virtual machines (VMs) automatically as user needs grow. In the case illustrated in this paper, it was the ability to automatically spin up a VM and a new instance of Microsoft Exchange to support new email users. With all of this automation, the last thing that’s needed is for the infrastructure to restrict that flexibility.


In this example, a Dell PowerEdge R720 server replaced an older Dell PowerEdge R710 server-storage solution. These new systems featured the latest Intel® Xeon® processor, new operating system, SSD storage and Intel® Ethernet CNA X520 (10GbE) adapters. When the tests were finished, the new system supported 4.5 times more VMs than the previous system.


What is interesting to me is that the researchers measured the performance increase for each part of the upgrade—which really illustrates the point that these upgrades need to be done comprehensively.


In this test, when the researchers upgraded just the CPU and the OS, they saw performance increase 275 percent. Not bad. But when they added the higher-performance SSDs to the new CPU and OS that resulted in a 325 percent improvement. And finally, when they added the new network adapters, overall VM density improvement climbed 450 percent compared to the original base system.
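Reading those percentages as VM density relative to the original system (my assumption, consistent with the 4.5x figure quoted earlier), the incremental contribution of each upgrade step works out as follows:

```python
# VM density relative to the original R710 baseline (= 1.0x), using the
# percentages quoted above. Interpreting "275 percent" etc. as multiples
# of the baseline is an assumption on my part, consistent with the
# 4.5x total improvement mentioned earlier in the post.
baseline = 1.00
cpu_os = 2.75     # after CPU + OS upgrade
plus_ssd = 3.25   # after adding higher-performance SSDs
plus_nic = 4.50   # after adding the new network adapters

steps = {
    "CPU + OS": cpu_os - baseline,
    "SSD": plus_ssd - cpu_os,
    "Network": plus_nic - plus_ssd,
}
for name, gain in steps.items():
    print(f"{name}: +{gain:.2f}x baseline")
```

Under this reading, the network adapters contribute the second-largest single jump (+1.25x), which is the paper's point: leaving the network out of the upgrade forfeits a substantial share of the gain.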


More details on both of these examples are available in the white papers referenced below.


When it’s time to invest in new servers, take a look at the rest of your system, which includes your Ethernet and storage sub-system, and think about the combination that will give you the best return on your investment.


*Boosting Your Storage Server Performance with the Intel Xeon Processor E5-2600 V3 Product Family


**Increase Density and Performance with Upgrades from Intel and Dell

Read more >

California to Establish a Cybersecurity Integration Center

Governor Brown of California signed an executive order (Order B-34-15) establishing a California Cybersecurity Integration Center (Cal-CSIC) to align and improve the posture and resilience of the state’s cybersecurity strategy.  The Cal-CSIC will coordinate across state agencies and include federal government partners.  It will create a Cyber Incident Response Team and secure mechanisms to properly share appropriate information.


California is a massive state with a huge economy, heavily dependent on technology.  Having a centralized capability to align and integrate resources is a fantastic concept, and I applaud all the work that had to occur to get the order to this point.  But the question remains: will the Cal-CSIC be a bureaucratic paper tiger, or will it have the necessary leadership, skills, and resources to forge a meaningful role in aligning a large and diverse team to prioritize and manage the state’s cyber risks?


As a Californian and a cybersecurity professional, I truly hope this organization can become the beacon that forges effective alliances among the security teams across the state.  Currently, separate organizations are working independently, without the benefit of strong coordination, to manage their cyber risks.  The challenges are immense and put the state at considerable risk.  Citizens of the state have high expectations.  California, a longtime bastion of technology innovation, has been a leader in securing citizens’ privacy, life-safety practices, and environmental protection.  Cybersecurity overlays and binds all these aspects and can contribute to the health, prosperity, and safety of every Californian. 


This team will need very strong leadership to get all these groups to work together effectively.  Otherwise it will become a detriment, adding unnecessary bureaucracy, without tangible benefits, to the groups trying to do the job independently.  If California gets this right, it will be a huge win.  If it gets it wrong, it will add to the problems and hobble all the current efforts underway. 


Governor Brown, move carefully, but with purpose.  I urge you to forgo political appointments or service-based promotions and instead get the right functional experts in place to make this a reality and protect California.   Find leaders with expert cybersecurity strategic insight, superb communication abilities, and the practical industry experience necessary to earn the respect of the cross-functional team and private-sector partners.  This will be a very tough job with ambitious goals, but if done properly, it has the potential to set California apart and showcase the state’s innovation and effectiveness in cybersecurity operations and internal governance as a standard for the nation and the world.




Twitter: @Matt_Rosenquist

Intel Network: All My Previous Blog Posts


Read more >

The Excitement is Building for Intel® Omni-Path Architecture

By Barry Davis, General Manager, High Performance Fabrics Operation at Intel



Intel Omni-Path Architecture (Intel OPA) is gearing up for release in Q4’15, which is just around the corner! As we get closer to our official release, things are getting real, and we’re providing more insight into the fabric for our customers and partners. In fact, more Intel Omni-Path architecture-level details were just presented on August 26th at Hot Interconnects. Before I talk about that presentation, I want to remind you that this summer at ISC ’15 in Germany, we disclosed the next level of detail and showcased the first public Intel OPA demo through the COSMOS supercomputer simulation.


For those who didn’t make it to Frankfurt, we talked about our evolutionary approach to building the next-generation fabric. We shared how we built upon key elements of Aries* interconnect and Intel® True Scale fabric technology while adding revolutionary features such as:


  • Traffic Flow Optimization: provides very fine-grained control of traffic flow and patterns by making priority decisions, so important data, like latency-sensitive MPI data, has an express path through the fabric and doesn’t get blocked by low-priority traffic. This results in improved performance for high-priority jobs and improved run-to-run consistency.


  • Packet Integrity Protection: catches and corrects all single- and multi-bit errors in the fabric without adding additional latency, unlike other error detection and correction technologies. Error detection and correction is extremely important in fabrics running at the speed and scale of Intel OPA.


  • Dynamic Lane Scaling: guarantees that a workload will gracefully continue to completion even if one or more lanes of a 4x link fail, rather than shutting down the entire link, as was the case with other high performance fabrics.


These features are a significant advancement because together they help deliver enhanced performance and scalability through higher MPI rates, lower latency, and higher bandwidth.  They also provide improved Quality of Service (QoS), resiliency, and reliability.  In total, these features are designed to support the next generation of data centers with unparalleled price/performance and capability.


At Hot Interconnects we provided even more detail. Our chief OPA software architect, Todd Rimmer, gave an in-depth presentation on the architectural details of our forthcoming fabric, delivering more insight into what makes Intel OPA a significant advancement in high performance fabric technology.  He covered the major wire-level protocol changes responsible for the features listed above, specifically the layer between Layer 1 and Layer 2, coined “Layer 1.5.”  This layer provides the Quality of Service (QoS) and fabric reliability features that will help deliver the performance, resiliency, and scale required for next-generation HPC deployments. Todd closed by keeping to his software roots, discussing how Intel is upping the ante on the software side with Intel OPA software improvements, including the next-generation MPI-optimized fabric communication library, Performance Scaled Messaging 2 (PSM2), and powerful new features for fabric management.


Check out the paper Todd presented for a deep dive into the details!


Stay tuned for more updates as the Intel® Omni-Path Architecture continues the run-up towards release in the 4th quarter of this year.


Take it easy

Read more >

Revisiting SaaS Security Controls at Intel

SaaS is not new. It has been used for both business and personal use for some time, and for a few years in its cloud form. So what security changes are required to use SaaS in the enterprise? What SaaS challenges is Intel IT encountering? Why now? In this blog I share some real-life SaaS experiences as a cloud and mobile security engineer at Intel, as well as my view of SaaS security.


Matching Strategy to the Current Environment

The emergence of new and large use cases triggered Intel IT to revisit our SaaS architecture and look for more security solutions in the SaaS cloud space. Previously at Intel, use cases for cloud-based SaaS were small and limited to a few users. But the new use cases involved thousands of users, mainstream apps such as data repositories and collaboration, and big business models such as CRM. These large use cases required us to reexamine our SaaS strategy, architecture, and controls to protect those mass deployments. As documented in our recent white paper, these controls center mainly on data protection, authentication and access control, and logs and alerts. We strive to enforce these controls without negatively impacting the user experience and the time to market of SaaS solutions. The paper also discusses how we manage shadow IT—users accessing SaaS services without IT awareness.


How We Handle Cloud Traffic Inspection

While the white paper summarizes our SaaS security controls, I’d like to delve a bit deeper into cloud inspection.


As is often the case, the right approach wasn’t immediately apparent. We needed to examine the advantages and disadvantages of the various choices – sometimes a complicated process. We investigated two ways we could inspect activity and data:

  • Cloud proxy. In this approach, we would pass all traffic through a cloud proxy, which inspects the traffic and can also encrypt specific fields, a valuable capability for controlling the traffic and information passed to the cloud provider. The downside is that directing traffic through the cloud proxy might cause performance issues in massive cloud implementations where the cloud provider has many points of presence around the globe. Cloud proxies can also impact application modules in cases where a reverse proxy is used.
  • Cloud provider APIs. This option uses the cloud provider’s APIs, an approach that allows inspection of user activity, data, and various attributes. The benefit of such an implementation is that it happens behind the scenes and doesn’t impact the user experience (because it is a “system-to-system” connection). But the downside of using APIs is that not all cloud providers offer the same set of APIs. Also, the use cases between SaaS providers can differ—requiring more time to fine-tune each implementation.
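The API-based approach can be sketched roughly as below. Everything here is hypothetical: the endpoint URL, field names, and action types are invented for illustration, since, as noted above, each provider exposes a different set of APIs.

```python
# Minimal sketch of API-based SaaS activity inspection. The endpoint,
# token, and event schema are hypothetical; real providers each expose
# different audit/event APIs, which is exactly the downside noted above.
import json
import urllib.request

AUDIT_URL = "https://api.example-saas.com/v1/audit/events"  # hypothetical
TOKEN = "REDACTED"  # service-to-service credential

# Actions an inspection policy might treat as risky (illustrative)
RISKY_ACTIONS = {"external_share", "bulk_download", "public_link_created"}

def fetch_events(url=AUDIT_URL):
    """Pull recent activity events from the provider's audit API."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["events"]

def flag_risky(events):
    """Return events worth an alert. This runs system-to-system, behind
    the scenes, so it has no impact on the end-user experience."""
    return [e for e in events if e.get("action") in RISKY_ACTIONS]

# Offline demonstration with canned events instead of a live API call:
sample = [
    {"user": "alice", "action": "login"},
    {"user": "bob", "action": "external_share", "file": "report.xlsx"},
]
print([e["user"] for e in flag_risky(sample)])
```

Note that this is detection after the fact, not inline enforcement; that asymmetry with the proxy approach is one reason the right answer depends on the use case, as the next paragraph concludes.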

We reached the conclusion that each solution needs to match the specific use case security requirements. Some SaaS implementations’ security requires more control, some less. Therefore we believe it is important to have a toolset where you can mix and match the needed security controls. And yes, you need to test it!



I’d like to hear from other IT professionals. How are you handling SaaS deployments? What controls have you implemented? What best-known methods have you developed, and what are some remaining pain points? I’d be happy to answer your questions and pass along our own SaaS security best practices. Please share your thoughts and insights with me – and your other IT colleagues on the IT Peer Network – by leaving a comment below. Join the conversation!

Read more >

Access Your Data Anytime, Anywhere With Intel ReadyMode Technology

Back in 1993, when the first 7200-RPM hard drives hit the market, I imagine people thought they could never fill up that jaw-dropping 2.1GB capacity. Of course, that was before the era of MP3s and digital photos, and any videos you had were on VHS or Beta (or possibly LaserDisc).


Today, desktop PCs like the ASUS K20CE* mini PC come with up to a 3TB SSD to accommodate users’ massive collections of HD videos, photos, eBooks, recorded TV, and other huge files. That’s terabytes! Some feature even more storage.


But how do you access these files away from home? You could use one of the many cloud services on the market; however, if you have lots of personal photos and videos, or large documents and files from work, you’ll quickly reach the cap on free capacity and have to start paying monthly subscription fees. Plus, you’d need to remember to upload to the cloud any files you might want to access later, and if you want to change services, you’ll have to move your files from one network to another, which can be a hassle, not to mention a security concern.


Access your data anytime, anywhere


A better option would be to take advantage of Intel ReadyMode Technology (Intel RMT) and third-party remote access software such as Splashtop, Teamviewer, or Microsoft Remote Desktop to turn your desktop PC into an always-available “personal cloud” that lets you access all of your files on your other devices, such as your smartphone or tablet.


“With RMT, your data is stored safely in your home computer so you don’t have to worry about people hacking into it. You can access it through remote log on or through VPN,” said Fred Huang, Product Manager, ASUS Desktop Division. “It’s a better way to access your personal files that exists today with ASUS systems running Intel RMT.”


Intel RMT replaces the traditional PC sleep state with a quiet, low-power, OS-active state that allows PCs to remain connected, up to date, and instantly available when not in use. Plus, it allows background applications, like remote access software, to run with the display off while consuming a fraction of the electricity the PC normally would when fully powered on.

For home PCs, this means you get the convenience of anytime, cloud-like access to your files without a cloud-service bill, as well as the ability to share them outside your own personal login.

“Cloud-based storage is usually more personal, so you might have a different account from your spouse or family member, but with a home hub PC, it can be one shared account that the whole family can access,” adds Huang.


For businesses, Intel RMT allows employees to use remote access to get to their work files from anywhere without the need for their desktops to remain fully awake and consuming power. Across a large enterprise, that kind of power savings really adds up.


Another business benefit: desktops with Intel RMT enable automatic backups and nightly system health checks to happen efficiently during off hours without waking the machines—saving power while protecting files and uptime.


The perfect home (and work) desktop


ASUS desktop PCs allow users to do everything from daily tasks to playing 4K Ultra HD video, with enhanced energy efficiency, better productivity, and powerful performance across all form factors. Other highlights include instant logins, voice activation, and instant sync and notifications.


And don’t forget about the gamers. RMT can help support game downloads and streaming sessions without wasting a lot of energy. Gamers can also choose to run updates and applications in the background 24/7, or overnight, and save time and energy by being connected to an energy-efficient smart home hub. Take a look at this recap video of the always available PC from IDF 2015 last month.



In addition to the ASUS K20 mentioned above, Intel RMT will also be featured in upcoming models of the ASUS M32AD* tower PC, the ASUS Zen AiO Z240IC* All-in-One, and the ASUS E510* mini PC.


Want to find out more about what Intel Ready Mode can do? Visit:

Read more >

Signed Malware Continues to Undermine Trust

The practice of using maliciously signed binaries continues to grow.  Digitally signing malware with legitimate credentials is an easy way to make victims believe what they are downloading, seeing, and installing is safe.  That is exactly what the malware writers want you to believe.  But it is not true.

2015 Q3 Total Malicious Signed Binaries.jpg

Through the use of stolen or counterfeit signing credentials, attackers can make their code appear trustworthy.  This tactic works very well and is becoming ever more popular as a mechanism to bypass typical security controls. 


The latest numbers from the Intel Security Group’s August 2015 McAfee Labs Threat Report reveal a steady climb in the total number of maliciously signed binaries spotted in use on the Internet.  It shows a disturbingly healthy growth rate, with total numbers approaching 20 million unique samples detected.


Although it takes extra effort to sign malware, it is worth it for the attackers.  No longer an exclusive tactic of state-sponsored offensive cyber campaigns, it is now being used by cyber-criminals and professional malware writers, and is becoming a widespread problem.  Signing allows malware to slip past network filters and security controls, and can be used in phishing campaigns.  This is a highly effective trust-based attack, leveraging the very security structures initially developed to reinforce confidence when accessing online content.  Signing code began as a way to thwart hackers from secretly injecting Trojans into applications and other malware masquerading as legitimate software.  The same practice is in place for verifying content and authors of messages, such as emails.  Hackers have found a way to twist this technology around for their benefit.  
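Because a valid signature only proves the signer held the key, not that the signer is trustworthy, one complementary control is verifying a file against an independently published known-good digest. A minimal sketch, with an illustrative (not real) hash value:

```python
# A valid digital signature proves the signer held the key -- not that
# the signer is trustworthy. Comparing a file's hash against an
# independently published known-good value is a complementary check.
# The digest below is illustrative, not a real file's hash.
import hashlib

KNOWN_GOOD = {
    "installer.exe": "illustrative-sha256-value",
}

def sha256_of(path):
    """Stream the file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_known_good(name, path):
    """True only if the file's digest equals the published value; a
    signed-but-tampered or re-signed binary will fail this check."""
    expected = KNOWN_GOOD.get(name)
    return expected is not None and sha256_of(path) == expected
```

This only helps when the reference hash is distributed over a channel the attacker doesn't control, which is why it supplements, rather than replaces, signature validation and the detective controls discussed below.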


The industry has known of this emerging problem for some time.  New tools and practices are being developed and employed, and detective and corrective controls are being integrated into host, data center, and network-based defenses.  But adoption is slow, which affords attackers a huge opportunity. 


The demand for stolen certificates is rising, driven partly by increasing usage and partly by the erosion effect of better security tools and practices, which reduce the window of time any misused signature remains valuable.  Malware writers want a steady stream of fresh, highly trusted credentials to exploit.  Hackers who breach networks are harvesting these valuable assets, and we are now seeing new malware with features to steal victims’ credentials.  “Sphinx,” a new variant of the hugely notorious Zeus malware family, is designed to let cybercriminals steal digital certificates.  The attacker community is quickly adapting to fulfill market needs.  


Maliciously signed malware is a significant and largely underestimated problem that undermines the structures of trust that computer and transaction systems rely upon.  Signed binaries are much more dangerous than garden-variety malware.  Until effective and pervasive security measures are in place, this problem will grow in size and severity.


Twitter: @Matt_Rosenquist


Read more >

Empowering the Next Wave of Innovation at the Intel IoT Ignition Lab, Israel

I feel very fortunate to be a part of the hugely exciting culture of innovation that is making its mark in Israel at the moment. The country has a reputation as fertile ground for start-up companies to flourish, but it’s also seeing a rapid pace of technological innovation. I recently returned to Israel after living abroad for a number of years, and the sheer scale of new development is amazing, even more so when you consider our relatively small population. Office blocks and research labs are shooting up, more and more high-end, high-value products are being manufactured, and investment and M&A activity are huge. To hear more about Intel’s role here, I spoke to Guy Bar-Ner, regional sales director for Intel Israel.


To put this growth into perspective: there are currently 74 Israeli companies listed on Nasdaq, one of the largest representations for a non US country. The national economy is strong and the high-tech industry is doing well. It’s a great time to be in business here.

Igniting Innovation

Guy said: “Being part of the Intel Sales and Marketing team based in Israel means I have lots of opportunities to get involved with some of the most exciting developments and play a role in helping drive the industry forward.


With a large (10,000-strong) presence, Intel Israel is in a strong position to help make a difference. We consolidated this position recently when we opened our IoT Ignition Lab in Tel Aviv. Our vision for the Lab is to provide local companies with the resources, space, and tools they need to get their Internet of Things (IoT) ideas off the ground. This is the first time we’ve been able to offer such dedicated support to companies large and small in the country, and after just two months of operation, it’s already showing promising results.


We offer companies that are innovating in the IoT space the opportunity to work with Intel’s technical experts to identify opportunities to develop their solutions on Intel® architecture, and then provide them with the resources to build or enhance their solutions, and a platform on which to showcase them to prospective customers through the Lab’s demo center.


The Lab focuses on four key pillars – Smart Cities, Smart Transportation, Smart Agriculture and Smart Home – but provides support and resources for any kind of IoT project that qualifies. At the moment, we’re working on a couple of exciting projects, including a Smart Cities solution from IPgallery, a Smart Transportation/Supply Chain solution from CartaSense and a personalized music solution from Sevenpop.


In addition to our work with local IoT companies, we’re using the IoT Ignition Labs to support Israel’s strong (and growing) maker/developer community. We have about 500 of these visionary folks just among the Intel Israel employees. They take part in many maker/developer hackathons and meet-up events during the year.  The size of the overall Israel maker/developer community is amazing, holding up to ten meet-ups on various technology-related topics per week in the greater Tel Aviv area alone. The ideas that this community comes up with are fantastic – in fact it was a team from Israel that won first place in the Intel® Edison Make It Pro Challenge last year.


We’re keen to support these innovators by offering access to Intel resources and products to help them build the must-have solutions of tomorrow. We’ve been running hackathons to give them a forum in which to work together and come up with new ideas, and hackathon winners are then welcomed into the Ignition Lab to work alongside Intel experts to develop their ideas into marketable solutions. In addition, the Intel Ingenuity Partner Program (IIPP) is a new program, now up and running, that works with a select few start-ups to help them build and market their Intel architecture-based solutions. The combination of the IIPP and the Intel IoT Ignition Lab is a fantastic way for start-ups to develop new and exciting solutions.



Engaging with the IoT Community


Meanwhile, we’re also taking the opportunity to drive further collaboration with the local community of start-ups and innovators at the upcoming DLD Innovation Festival, which is taking place in Tel Aviv in early September. For the first time, Intel will be taking part directly in this event, and we’ll be hosting a number of events and activities at the Intel Innovation building near the main entrance on September 8th and 9th – including


  • Speakers with new perspectives: Intel experts in areas such as IoT, wearables, video, media and connectivity will share their thoughts on a range of technology topics beyond Intel’s traditional business.
  • Express Connect: We’ll be offering a match-up service for conference attendees to meet with Intel leaders and topic experts by appointment for more tailored, in-depth discussions.
  • Showcase area: Some of the new and exciting Intel® technologies such as Intel® RealSense™ technology, Intel’s Wireless Connectivity, smart home and advanced analytics solutions will be on display as part of an ‘airport terminal of the future’ area.
  • Live hackathon: Members of Intel’s own developer community will run an IoT-themed hackathon event using Intel Edison to find the next IoT Ignition Labs project. This will be run in collaboration with the Open Interconnect Consortium (OIC) and will highlight how the OIC and Intel are collaborating to create a smarter world.


I invite everyone to come to the DLD event to experience Intel’s technology in action and engage with the people at Intel who are creating the future.


To continue the conversation on Twitter, please follow us at @IntelIoT 


Enabling Community and Patient-Centred Care Using Predictive Analytics

There’s a lot of talk about Big Data in healthcare right now, but for me the value of Big Data is not in the size of the data at all; the real value is in the analytics and what they can deliver to the patient. Healthcare reform is underpinned by a shift to value-based care, where identifying the best care, best treatment and best prognosis is driven by business intelligence and business analytics.


I want to share my thoughts on this in a little more detail from a presentation I gave at the NHS England Health and Care Innovation Expo in Manchester, where Intel and Oracle highlighted some of the great work happening around identifying healthcare needs using predictive analytics.


Opportunities for Data Use in Healthcare are Rich

Everywhere I look in healthcare there seems to be an abundance of data; for example, it’s estimated that the average hospital generates 665TB of data annually. But it’s not just the volume of data that presents challenges: the variety of data means that the opportunities for its use are rich, but they are often tempered by the fact that some 80 percent of that data is unstructured. Think of X-rays, CT and 3D MRI scans as just one area where technology has vastly improved the quality of service delivery – but with a consequent exponential growth in the resulting data.


Does more data really bring better care, though? I’d argue that it’s the analysis of data that holds the key to solving some of the big challenges faced by providers across the world, rather than how much data can be captured or accessed. With that in mind, Intel and Oracle are working to help providers integrate, store and analyse data in better ways to deliver improved patient outcomes, including:


  • Enabling early intervention and prevention
  • Providing care designed for the individual
  • Enhancing access to care for the underserved
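As a purely illustrative sketch of how analytics can enable early intervention, the toy model below computes a logistic risk score from hypothetical, made-up weights (it is not a clinically validated model) and flags patients whose score crosses a threshold for outreach:

```python
import math

# Hypothetical risk factors and weights, for illustration only;
# this is NOT a clinically validated model.
WEIGHTS = {"age_over_65": 1.2, "prior_admissions": 0.8, "chronic_conditions": 0.6}
INTERCEPT = -3.0

def readmission_risk(patient):
    """Logistic risk score in (0, 1) from the weighted factors above."""
    z = INTERCEPT + sum(w * patient.get(k, 0) for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

def flag_for_intervention(patients, threshold=0.5):
    """Return the IDs of patients whose predicted risk warrants early outreach."""
    return [p["id"] for p in patients if readmission_risk(p) >= threshold]

patients = [
    {"id": "A", "age_over_65": 1, "prior_admissions": 3, "chronic_conditions": 2},
    {"id": "B", "age_over_65": 0, "prior_admissions": 0, "chronic_conditions": 0},
]
print(flag_for_intervention(patients))  # ['A']
```

In a real deployment the weights would come from models trained on the provider’s own population data, which is exactly the shift away from relying solely on published evidence.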


Our approach to developing solutions in this area encompasses several layers of the Big Data stack. There’s the core technology, which covers CPUs, SSDs, flash, fabrics, networking and security. Then there’s the investment in the Big Data platform, which supports the proliferation of Hadoop by making it easier to deploy. Finally, but no less important, are the analytics tools and utilities, which help broaden analysis and accelerate application development.


Oracle and Project O-sarean Empower Citizens

I’d like to highlight a couple of great examples where data sharing is helping to deliver active patient management. Oracle has played a part in the successful Project O-sarean in the Basque Country where the regional public healthcare system covers some 2.1m inhabitants with 80 percent of patient interactions related to chronic diseases. It has been predicted that by 2020 healthcare expenditure would need to double if systems and processes did not change. The results of this new multi-channel health service, powered by voluminous amounts of data, are impressive and include:


  • Empowered citizens with access to Personal Health records
  • Active patient monitoring for those with chronic diseases
  • Health and drug advisory service providing evidence-based advice


The clinician benefits too as 11 acute hospitals, 4 chronic hospitals, 4 mental health hospitals, 1,850 GPs and 820 pharmacies are connected using Oracle solutions to collaborate through the sharing and analysis of patient data. This is a fantastic example of interoperability in healthcare. (Download a PDF from Oracle for more information on the Project O-sarean).


Intel helps Partners Deliver Predictive Analytics Innovations

Here at Intel we’ve been working with MimoCare to improve support for independent living with the Intel® Intelligent Gateway™. Through the use of sensors, MimoCare technology helps the elderly remain safe living independently in their homes for longer. The use of analytics to identify normal patterns of behavior and predict events means that alerts can be sent to family, friends and carers, while the consolidation of aggregated data can help wider clinical research too. Read more on the great work of MimoCare and Intel’s role in the Internet of Things in Healthcare here.
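To illustrate the kind of pattern-of-life analytics described above (this is a generic sketch, not MimoCare’s actual algorithm), a simple baseline-deviation check compares today’s sensor activity against a person’s own historical norm and raises an alert on sharp deviations:

```python
import statistics

def baseline(history):
    """Mean and sample standard deviation of past daily activity counts."""
    return statistics.mean(history), statistics.stdev(history)

def needs_alert(history, today, z_threshold=2.0):
    """Flag a day whose activity deviates sharply from the person's own norm."""
    mean, sd = baseline(history)
    if sd == 0:
        return today != mean
    return abs(today - mean) / sd > z_threshold

# Fourteen days of kitchen-sensor activations for one resident (illustrative data).
history = [42, 38, 45, 40, 44, 39, 41, 43, 40, 42, 38, 44, 41, 39]
print(needs_alert(history, today=12))  # unusually quiet day -> True
print(needs_alert(history, today=40))  # normal day -> False
```

The key design point is that each person is compared against their own baseline rather than a population average, which is what makes the alerts meaningful for carers.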


I think you’ll find a recent blog by my colleague, Malcolm Linington, interesting too – he takes a look at how GPC are innovating to help guide wound care specialists to deliver the most effective treatment plan possible, develop standardized assessment practices, enhance clinical decision-making and ultimately provide cost savings by streamlining wound care procedures.


I’m excited to share these stories with you as I feel we are only at the start of what is going to be a fantastic journey of using predictive analytics in healthcare. It would be great to hear about some of your examples so please do tweet us via @intelhealth or register and post a comment below.


Find Claire Medd RGN BSc (Hons) on LinkedIn.



10 Mobile BI Strategy Questions: System Integration


More and more, mobile devices are becoming tightly connected with the software that runs on them. But the true value of mobility can’t be realized until these devices take advantage of the necessary integration among the underlying systems.


The same principles hold true for mobile business intelligence (BI). Therefore, when you’re developing a mobile BI strategy, you need to capitalize on opportunities for system integration that can enhance your end product. Typically, system integration in mobile BI can be categorized into three options.



Option One: Standard Mobile Features Expand Capabilities


Whether the solution is built in-house or purchased, these features are considered standard because they use existing, well-known capabilities of mobile devices, such as e-mailing, sharing a link, or capturing a device screenshot. They provide ways to share mobile BI content and support collaboration without much investment by development teams.


A typical example is the ability to share report output with other users via e-mail with a simple tap of a button located on the report. This simple yet extremely powerful option allows users to act on insight immediately. Additional capabilities, such as annotating or sharing specific sections of a report, add precision and focus to the message being delivered or the content being shared. In custom-designed mobile BI solutions, the share-via-e-mail option can be further programmed to attach a copy of the report to an e-mail template, eliminating the need for the user to compose the e-mail message from scratch.


Taking advantage of dialing phone numbers or posting content to internal or external collaboration sites is another example. An account executive (AE) could run a mobile BI report that lists the top 10 customers, including their phone numbers. Then, when the AE taps on the phone number, the mobile device will automatically call the number.
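A sketch of how such a report might wire up tap-to-call: the snippet below generates HTML rows using the standard tel: URL scheme, which mobile browsers hand off to the dialer. The customer names and numbers are made up for illustration:

```python
from html import escape

# Illustrative rows; customer names and phone numbers are made up.
top_customers = [
    {"name": "Acme Corp", "phone": "+1-555-0100"},
    {"name": "Globex", "phone": "+1-555-0101"},
]

def render_report_rows(customers):
    """Emit HTML table rows in which tapping the phone number dials it,
    via the standard tel: URL scheme."""
    rows = []
    for c in customers:
        name, phone = escape(c["name"]), escape(c["phone"])
        rows.append(
            f'<tr><td>{name}</td>'
            f'<td><a href="tel:{phone}">{phone}</a></td></tr>'
        )
    return "\n".join(rows)

print(render_report_rows(top_customers))
```

Because tel: is handled natively by the device, this kind of feature costs the development team almost nothing, which is what makes it a “standard” capability.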



Option Two: Basic Integration with Other Systems Improves Productivity


A basic integration example is the ability to launch another mobile application from a mobile BI report. Unlike in Option One, this step requires the mobile BI report to pass the required input parameters to the target application. Looking at the same example of a top 10 customers report, the AE may need to review additional detail before making the phone call to the customer. The mobile BI report can be designed so that the customer account name is listed as a hotlink. When the AE taps the customer name, the CRM application is launched automatically and the account number is passed on, as well as the AE’s user credentials.
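A minimal sketch of such a deep link, assuming a hypothetical crmapp:// URL scheme registered by the CRM application (real apps define their own schemes, and would typically pass a session token rather than raw credentials):

```python
from urllib.parse import urlencode

def crm_deep_link(account_number, user):
    """Build a deep link that launches a (hypothetical) CRM app with the
    account pre-selected, so the user skips the manual search and login steps."""
    params = urlencode({"account": account_number, "user": user})
    return f"crmapp://account/open?{params}"

link = crm_deep_link("ACCT-00042", "jdoe")
print(link)  # crmapp://account/open?account=ACCT-00042&user=jdoe
```

The mobile BI report would render the customer name as a hotlink pointing at this URL; tapping it is the “handshake” that distinguishes Option Two from Option One.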


This type of integration can be considered basic because it provides automation for steps that the user could otherwise have performed manually: run the mobile BI report, copy or write down the customer account number, open the CRM app, log in to the system, and search for the account number. All of these manual steps can be considered “productivity leaks.” However, this type of integration differs from that described in Option One because there is a handshake between the two systems, which talk to each other. When using standard features, the report is attached to the e-mail message without any additional logic to check for anything else; hence, no handshake is required.



Option Three: Advanced Integration with Other Systems Offers Maximum Value


Of the three options, this is the most complicated one because it requires a “true” integration of the systems involved. This category includes those cases where the handshake among the systems involved (it could be more than two) may require execution of additional logic or tasks that the end user may not be able to perform manually (unlike those mentioned in Option Two).


Taking it a step further, the integration may require write-back capabilities and/or what-if scenarios that may be linked to specific business processes. For example, a sales manager may run a sales forecast report and have the capability of manually overwriting one of the forecast measures. This action would then trigger multiple updates to reflect the change, not only on the mobile BI report but also on the source system. To make things more interesting, the update may need to be real time, a requirement that will further complicate the design and implementation of the mobile BI solution.
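The write-back flow described above can be sketched as follows; ForecastStore and its subscriber hook are hypothetical stand-ins for a real BI platform’s write-back API:

```python
# A minimal write-back sketch: overriding a forecast measure updates an
# in-memory "source system" and notifies subscribers so that dependent
# reports can refresh. Both the store and the event hook are illustrative.
class ForecastStore:
    def __init__(self, forecasts):
        self.forecasts = dict(forecasts)
        self.subscribers = []

    def subscribe(self, callback):
        """Register a callback to be invoked whenever a measure changes."""
        self.subscribers.append(callback)

    def override(self, region, new_value):
        """Write the manual override back and propagate the change."""
        old = self.forecasts[region]
        self.forecasts[region] = new_value
        for notify in self.subscribers:
            notify(region, old, new_value)

store = ForecastStore({"EMEA": 1_200_000})
store.subscribe(lambda r, old, new: print(f"{r}: {old} -> {new}"))
store.override("EMEA", 1_350_000)  # prints: EMEA: 1200000 -> 1350000
```

A real-time requirement would replace the in-process callback with messaging or change-data-capture between systems, which is exactly what makes Option Three the most complicated to design and implement.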



Bottom Line: System Integration Improves the Overall Value


No matter what opportunities for system integration exist, you must find a way to capitalize on them without, of course, jeopardizing your deliverables. You need to weigh the benefits and costs for these opportunities against your scope, timeline, and budget. If mobile BI is going to provide a framework for faster, better-informed decision making that will drive growth and profitability, system integration can become another tool in your arsenal.


Think about it: how can we achieve productivity gains if we’re asking our users to do the heavy lifting for tasks that could be automated through system integration?


Where do you see the biggest opportunity for system integration in your mobile BI strategy?


Stay tuned for my next blog in the Mobile BI Strategy series.


Connect with me on Twitter (@KaanTurnali) and LinkedIn.


This story originally appeared on the SAP Analytics Blog.
