Recent Blog Posts

HIMSS 16 Focus: Impact of Technology on Patients

It seems like just yesterday that we were leaving Chicago and basking in the innovation on display at HIMSS 15. Actually, since the last show was in April and the biggest event in healthcare technology is now back in its usual calendar slot, we’re ready for the second HIMSS in less than 12 months.

 

This year, the healthcare technology community is headed to Las Vegas February 29 to March 3, 2016, to see what innovation will be on the healthcare horizon in 2016 and beyond. You should expect to see more conversations around how patients, and their user-generated data, play into healthcare going forward.

 

At Intel, we’re approaching HIMSS 16 with a critical eye on three areas that we feel are focal points for CMIOs: precision medicine, health IT and medical devices, and consumer health. All are patient-focused.

 

To learn more about these pillars, you’re invited to the Intel booth (#3032) to view the latest technology platforms that focus on the rise of patient engagement and consumer generated health data. We encourage you to stop by and take a guided tour, where you’ll see these demonstrations:

 

  • Precision Medicine: From genome sequencing to a targeted treatment plan, all in one day
  • Health IT and Medical Devices: Securely connecting patients, clinicians and their data for proactive healthcare wherever you are
  • Consumer Health: Engaging connections among people, their data and care community to empower health ownership

 

Outside of the Intel booth, you will find our technology in a number of HIMSS Kiosks that showcase real solutions available today:

 

  • Population Health Zone Kiosk #14099: Big Cloud Analytics will share how IoT analytics are helping to proactively improve healthcare
  • Connected Health Zone Kiosk #15208: Fujitsu will showcase a communication tool integrated with the EMR used at the National Cancer Center, a personal health record for pregnant women, and personal health records for dental solutions
  • Intel Security Kiosk and Cyber Security Challenge #9908: Come take the Security Breach Challenge and learn how to combat cybercrime through efficient breach detection and response

 

Finally, be sure to follow @IntelHealth on Twitter to stay up to date on all the happenings at the event. We’ll be live tweeting from the show floor and sharing pictures of new health IT products and services that we discover. We’ll also be giving away a Basis Peak watch every day during HIMSS through a Twitter contest, so be on the lookout for how you can win.

 

HIMSS is always a great event and we are looking forward to seeing you in Las Vegas.

 

What are you most looking forward to seeing at HIMSS16? Tweet us @IntelHealth.

Read more >

Meet the Outward Sympathizer – An Often-Overlooked Type of Insider Threat Agent

Based on reports in recent news, some forms of insider threat get a lot of attention. Just about everyone has heard of examples of damage caused by a disgruntled employee, workplace violence, or theft of intellectual property. But insider threat is actually much larger than those common examples. At Intel, we’ve been studying this situation and have documented our findings in a white paper we call the Insider Threat Field Guide. In this field guide, we discuss 13 distinct insider threat agent types and the insider events they are most likely to cause, providing a comprehensive approach to identifying the most likely insider threat vectors. We are sharing this guide so other companies can improve their security stance too.

 

For example, one threat agent type we identified is the “outward sympathizer.” Our identification of this character is unique in the industry—we were unable to find any published analysis of this type of insider threat. We define an outward sympathizer as a person who knowingly misuses the enterprise’s systems to attack others in support of a cause external to the enterprise.

 

As we developed the field guide, we characterized the outward sympathizer threat as follows:

  • An insider of any status who acts in a manner harmful to the enterprise when reacting to external triggering events.
  • Harm may occur incidentally (nonhostile) or intentionally (hostile) and may take any form, including violence.
  • Actions are most likely reactive and emotional, episodic rather than ongoing.
  • Triggering events can be of any scale, from personal to worldwide, and related to any cause.
  • Collusion is more likely to occur if the triggering event has wide applicability within the worker population, such as a regional conflict.
  • The probability of attack is directly proportional to the impact and intensity of the triggering event, and inversely proportional to the general morale and the security awareness of the employee population.
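
Read informally, that last point can be summarized as a rough proportionality. This is a heuristic for reasoning about relative risk, not a calibrated model from the field guide:

$$P(\text{attack}) \;\propto\; \frac{\text{impact} \times \text{intensity of the triggering event}}{\text{morale} \times \text{security awareness}}$$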

 

The outward sympathizer is a complex threat agent, and triggering events can vary widely. Perhaps there is conflict in a country where the insider’s family resides, or an environmental issue that the insider feels strongly about. It can be difficult to predict what will trigger an outward sympathizer attack because the reason for the attack may be entirely unique to the sympathizer and not obvious to others.

 

Outward sympathizer activity can occur at three escalating levels. Even the most benign level could potentially have devastating consequences for the enterprise.

  • Level 1 – Insider misuses company resources (nonhostile). In this scenario, the insider inappropriately uses company resources to independently support a cause, but the company itself is not attacked. For example, the insider is upset about something so he or she downloads hacker tools onto company servers and uses them to attack someone else. There is no intent to harm the enterprise; in fact the insider probably hopes the company never finds out about it and may assume that his or her identity is protected by firewalls from outside detection. In any case, the attacked entity may believe the enterprise itself is attacking them, and may retaliate in many ways.
  • Level 2 – Insider inappropriately discloses company information to directly support an external cause. The information may be posted publicly to embarrass the company, or may be directed to an activist organization to support their intelligence gathering. The actor may be a planted agent.
  • Level 3 – Insider directly attacks the company from the inside or enables an attack from the outside. The attack can take any form, including data theft, destruction of hardware or facilities, or internal violence or sabotage. The actor is most likely a disgruntled insider but may be a planted agent. Note that at this level, the line blurs between outward sympathizer and disgruntled insider. The important difference is that outward sympathizers are not triggered to action by something that happened to them personally but instead are upset about something external to the enterprise.

 

Enterprises should include outward sympathizers in their own insider threat models and plan for mitigation. Because this type of threat agent presents differently than most other characters, particularly at the benign level, it can be hard to detect—in fact, some of their methods may not be traceable back to the individual. The unique aspects of the outward sympathizer are motivation and timing, so the most effective mitigations will target those.


Research by CERT and others suggests that strong tone-from-the-top security messaging is an effective behavioral deterrent, especially for non-professional threat actors. In addition, we use the following techniques to help minimize the likelihood of outward sympathizer events:

    • Providing specific examples during annual security training
    • Training managers to detect and appropriately handle warning signs
    • In conflict regions, ensuring managers and HR communicate quickly and regularly about personal safety and any available corporate support

    The technical methods used by outward sympathizers are not unique as a class; they follow classic attack patterns. Technical controls are therefore environmental rather than specific to this threat agent. In particular, although it is common to monitor networks for incoming attacks, it is less common to monitor for outgoing attacks (a minimal sketch of that idea follows the list below). Other effective technical controls include the following:

    • Limiting access to least-privilege
    • Checking the internal environment for hacking tools such as Low Orbit Ion Cannon (LOIC)
    • Watching for misuse such as outgoing distributed denial-of-service (DDOS) attacks
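
    To make that last control concrete, here is a minimal sketch of what watching for outgoing attack traffic might look like. The flow records, field names, and threshold are hypothetical stand-ins for whatever your monitoring stack actually exports; this illustrates the idea rather than serving as a production detector.

```python
from collections import Counter

# Hypothetical outbound flow records as exported by a network monitor.
# Each record: (internal source host, destination IP, destination port).
outbound_flows = [
    ("lab-pc-07", "203.0.113.10", 80),
    ("lab-pc-07", "203.0.113.10", 80),
    ("build-srv-02", "198.51.100.5", 443),
    ("lab-pc-07", "203.0.113.10", 80),
]

# Illustrative threshold; a real baseline would come from historical traffic.
FLOWS_PER_TARGET_THRESHOLD = 3

def flag_possible_outgoing_attacks(flows, threshold):
    """Flag internal hosts sending an unusual volume of flows to one external target."""
    counts = Counter((src, dst) for src, dst, _port in flows)
    return [
        {"source": src, "target": dst, "flows": n}
        for (src, dst), n in counts.items()
        if n >= threshold
    ]

if __name__ == "__main__":
    for alert in flag_possible_outgoing_attacks(outbound_flows, FLOWS_PER_TARGET_THRESHOLD):
        print(f"Review {alert['source']}: {alert['flows']} flows to {alert['target']}")
```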

     

    Intel IT’s Insider Threat Field Guide, including our understanding of the outward sympathizer threat agent, is an innovative way of looking at the full scope of insider threats. I believe other security professionals can use the field guide to identify and prioritize insider threats, communicate the risk of these threats, and optimize the use of information security resources to develop an effective defense strategy. I encourage you to share your feedback on the field guide by leaving a comment below. In addition, if you are looking for more information about our other security solutions, check out the 2015-2016 Intel IT Annual Performance Report. We hope you will join the conversation!

    Read more >

    Why Purpose Matters In Mobile Analytics Design

    A rise in the use of mobile devices and applications has heightened the demand for organizations to elevate their plans to deliver mobile analytics solutions. However, designing mobile analytics solutions without understanding your audience and purpose can sometimes backfire.

     

    I frequently discover that in mobile analytics projects, understanding the purpose is where we take things for granted and fall short—not because we don’t have the right resources to understand it better, but because we tend to form the wrong assumptions. A better understanding of the “mobile purpose” is critical for success, and we need to go beyond just accepting the initial request at the outset of our engagements.

     

    The Merriam-Webster dictionary defines purpose as “the reason why something is done or used: the aim or intention of something.” Although the reasons for a mobile analytics project may appear obvious on the surface, a re-evaluation of the initial assumptions can often prove invaluable both for the design and the longevity of mobile projects.

     

    Here are a few points to keep in mind before you schedule your first meeting or lay down a single line of code.

     

    Confirm link to strategy

     

    I often talk about the importance of executive sponsorship. There’s no better person than the executive sponsor to provide guidance and validation. When it comes to technology projects (and mobile analytics is no different), our engagements need to be linked directly to our strategy. We must make sure that everything we do contributes to our overall business goal.

     

    Consider the relevance

     

    Is it relevant? It’s a simple question, yet we have a tendency to take it for granted and overlook its significance. It doesn’t matter whether we’re designing a strategy for mobile analytics or a simple mobile report—relevance matters.

     

    Moreover, it isn’t enough just to study its current application. We need to ask: Will it be relevant by the time we deliver? Even with rapid deployment solutions and agile project methodologies, there’s a risk that certain requirements may become irrelevant, either because the business processes that mobile analytics depends on change, or because the mobile analytics solution highlights gaps that require a redesign of those processes. In the end, what we do must be relevant both now and when we go live.

     

    Understand the context

     

    Understanding the context is crucial, because everything we do and design will be interpreted according to the context in which the mobile analytics project is managed or the mobile solutions are delivered. When we talk about context in mobile analytics, we mustn’t think only about the data consumed on the mobile device, but also how that data is consumed and why it was required in the first place.

     

    We’re also interested in going beyond the what to further examine the why and how. Why is this data or report relevant? How can I make it more relevant?

    Finding these answers requires that you get closer to current or potential customers (mobile users) by involving them actively in the process from day one. You need to closely observe their mobile interactions so you can validate your assumptions about the use cases and effectively identify gaps where they may exist.

     

    Bottom line: Focus on the business value

     

    Ultimately, it all boils down to this: What is the business value?

     

    Is it insight into operations so we can improve productivity? Is it cost savings through early detection and preventive actions? Is it increased sales as a result of identifying new opportunities?

     

    What we design and how we design will directly guide and influence many of these outcomes. If we have confirmed the link to strategy, considered the relevance, and understood the context, then we have all the right ingredients to effectively deliver business value.

     

    In the absence of these pieces, our value proposition won’t pass muster.

     

    Stay tuned for my next blog in the Mobile Analytics Design series.

     

    You may also like the Mobile BI Strategy series on IT Peer Network.

     

    Connect with me on Twitter @KaanTurnali, LinkedIn and here on the IT Peer Network.

     

      A version of this post was originally published on turnali.com and also appeared on the SAP Analytics Blog

    Read more >

    Empowering Health: Are Avatars, Virtual Reality and Robots the future of Nursing?


    I was delighted to be invited to speak at Microsoft’s Empowering Health event in Brussels, Belgium recently, which brought together some 200 thought-leaders from across the world to discuss health IT issues in a ‘Mobile First and Cloud First World’.

     

    I was looking forward to hearing about how some of the more progressive countries in Europe were utilising technology to deliver more personal, productive and predictive health to their citizens, so it was pleasing to hear examples from the Netherlands around patient portals and from Sweden, where virtual care rooms are helping to deliver a more efficient healthcare system through patient self-diagnosis. From these very real examples of today to discussions around the future of machine learning and robotics, the narratives were underpinned by the absolute need for clinical staff to have input into the technology solutions they will be asked to use as early as possible.

     

    Data: One Size Does Not Fit All

    Some great statistics from Tom Lawry, Director of Worldwide Health Analytics, Microsoft, generated a real buzz in the room. Tom started his presentation by stating that ‘we spend a lot of money ONCE people are sick, while most money is spent on small numbers of people who are VERY sick.’ Clearly there are a lot of areas where technology is helping to move the needle from cure to prevention while all-in-one-day genome sequencing to personalised medicine is something we are working towards here at Intel as we look ahead to 2020. I was interested to hear examples from across the world on how healthcare providers are dealing with increasingly large amounts of data. Within the European Union there are very different takes on what data is classed as secure and what is not. For providers and vendors, this requires a keen eye on the latest legislation, but it’s clear that it’s a case of one size does not necessarily fit all.

     

    Digital Education of Nurses

    The breakout nursing session brought together a dedicated group of nurses with a real interest in how technology can, and will, help make nursing even better. We kicked off by discussing what level of digital education nurses have today, and what they need to equip them for the future. The consensus was that more needs to be done in helping nurses be prepared for the technology they’ll be asked to use, in essence making technology a core part of the nursing curriculum from day one.


    The move towards distributed care generated some fantastic thoughts on how technology can help nurses working in the community – read my recent blog for more thoughts on that. We all agreed that access to healthcare is changing; it has to if we are to meet the demands of an ageing population. For example, millennials don’t necessarily think that they need to see a medical practitioner in a hospital setting or a doctor’s surgery; they are happy to call a clinician on the phone or sit in a kiosk for a virtual consultation, the priority being quick and easy access.

     

    Nurses Actively Championing Technology

    I was particularly impressed by a new app showcased by Odense University Hospital called Talk2Care – in short, it enables patients in ICU to ‘talk’ to nurses using an icon-based dashboard on a mobile device. This new way of communicating, for patients who in some cases would only be able to nod or shake their head, has been invaluable not only for nurses but for the patient’s family too. What really pleased me was that nurses were actively championing this technology, encouraging patients to utilise it to help nurses deliver a better care experience.

     

    We closed with thoughts on how taking care into the community was being revolutionized by technology. We’ve got some great examples of the role Intel is playing in the advance towards more distributed care, from the use of Intel IoT Gateways to help the elderly live more independent lives at home through to the KU Wellness car which empowers nurses to take advanced care into the community using mobile devices.

     

    Virtual Reality Nursing

    After a short break we returned to the main auditorium, where I was pleased to be on stage with nurses from across the world. The future of the workforce was discussed in some detail, particularly around how nursing and the wider healthcare community will manage the anticipated global shortage of nurses. Technology will go some way to alleviating this shortfall through improved workflows, but I like to think in a more visionary way: perhaps we will see the use of avatars, virtual reality and (thinking of discussions earlier in the day) robots. What’s clear is that nursing is changing in response to the move to distributed care, and we need to skill not only nurses but other caregivers too, such as families, to make better use of the technology that is available today and tomorrow.

     

    Read more >

    Advice to a Network Admin Seeking a Career in Cybersecurity

    Even after nearly 25 years, I continue to be excited and passionate about security. I enjoy discussing my experiences, opinions, and crazy ideas with the community. I often respond to questions and comments on my blogs and on LinkedIn, as it is a great platform to share ideas and communicate with others in the industry. Recently I responded to a network admin seeking a career in cybersecurity. With their permission, I thought I would share a bit of the discussion, as it might be helpful to others.

     

    Mr. Rosenquist – I have been in the Information Technology field as a network administrator for some 16 years and am looking to get into the Cyber Security field but the opportunity for someone that lacks experience in this specialized field is quite difficult. I too recognize the importance of education and believe it is critical to optimum performance in your field. What would your recommendation of suggested potential solutions be to break into this field?  Thank you for your time and expertise.


     

    Glad to hear you want to join the ranks of cybersecurity professionals! The industry needs people like you. You have a number of things going for you. The market is hungry for talent and network administration is a great background for several areas of cybersecurity.

     

    Depending on what you want to do, you can travel down several different paths. If you want to stay in the networking aspects, I would recommend either a certification from SANS (or another reputable training organization with recognizable certifications) or diving into becoming a certified expert for a particular firewall/gateway/VPN product (e.g., Palo Alto, Cisco, Check Point, Intel/McAfee). The former will give you the necessary network security credentials to work on architecture, configuration, analysis, operations, policy generation, audit, and incident response. The latter are in very high demand and specialize in the deployment, configuration, operation, and maintenance of those specific products. If you want to throw caution to the wind and explore areas outside of your networking experience, you can go for a university degree and/or security credentials. Both together are better, but may not be necessary.

     

    I recommend you work backwards. Find job postings for your ‘dream job’ and see what the requirements are. Make inquiries about preferred background and experience. This should give you insights into how best to fill out your academic foundation. Hope this helps. – Matthew Rosenquist

     

    The cybersecurity industry is in tremendous need of more people with greater diversity to fill the growing number of open positions.  Recent college graduates, new to the workforce, will play a role in satiating the need, but there remain significant opportunities across a wide range of roles.  Experienced professionals with a technical, investigative, audit, program management, military, and analysis background can pivot into the cybersecurity domain with reasonable effort.  This can be a great prospect for people who are seeking new challenges, very competitive compensation, and excellent growth paths.  The world needs people from a wide range of backgrounds, experiences, and skills to be a part of the next generation of cybersecurity professionals.

     

     

    An open question to my peers: what advice would you give to workers in adjacent fields who are interested in the opportunities of cybersecurity?

     

     

    Interested in more?  Follow me on Twitter (@Matt_Rosenquist) and LinkedIn to hear insights and what is going on in cybersecurity.

    Read more >

    Momentum Builds for the ‘Cloudification’ of Storage


     

    Here’s a prediction for 2016: The year ahead will bring the increasing “cloudification” of enterprise storage. And so will the years that follow—because cloud storage models offer the best hope for the enterprise to deal with unbounded data growth in a cost-effective manner.

     

    In the context of storage, cloudification refers to the disaggregation of applications from the underlying storage infrastructure. Storage arrays that previously operated as silos dedicated to particular applications are treated as a single pool of virtualized storage that can be allocated to any application, anywhere, at any time, all in a cloud-like manner. Basically, cloudification takes today’s storage silos and turns them on their sides.

     

    There are many benefits to this new approach that pools storage resources. In lots of ways, those benefits are similar to the benefits delivered by pools of virtualized servers and virtualized networking resources. For starters, cloudification of storage enables greater IT agility and easier management, because storage resources can now be allocated and managed via a central console. This eliminates the need to coordinate the work of teams of people to configure storage systems in order to deploy or scale an application. What used to take days or weeks can now be done in minutes.

     

    And then there are the all-important financial benefits. A cloud approach to storage can greatly increase the utilization of the underlying storage infrastructure, deferring capital outlays and reducing operational costs.

     

    This increased utilization becomes all the more important with ongoing data growth. The old model of continually adding storage arrays to keep pace with data growth and new data retention requirements is no longer sustainable. The costs are simply too high for all those new storage arrays and the data center floor space that they consume. We now have to do more to reclaim the value of the resources we already have in place.

     

    Cloudification isn’t a new concept, of course. The giants of the cloud world—such as Google, Facebook, and Amazon Web Services—have taken this approach from their earliest days. It is one of their keys to delivering high-performance data services at a huge scale and a relatively low cost. What is new is the introduction of cloud storage in enterprise environments. As I noted in my blog on non-volatile memory technologies, today’s cloud service providers are, in effect, showing enterprises the path to more efficient data centers and increased IT agility.

     

    Many vendors are stepping up to help enterprises make the move to on-premises cloud-style storage. Embodiments of the cloudification concept include Google’s GFS and its successor Colossus, Facebook’s HDFS, Microsoft’s Windows Azure Storage (WAS), Red Hat’s Ceph/Rados (and GlusterFS), Nutanix’s Distributed File System (NDFS), among many others.

     

    The Technical View

     

    At this point, I will walk through the architecture of a cloud storage environment, for the benefit of those who want the more technical view.

     

    Regardless of the scale or vendor, most of the implementations share the same storage system architecture. That architecture has three main components: a name service, a two-tiered storage service, and a replicated log service. The architectural drill-down looks like this:

     

    The “name service” is a directory of all the volume instances currently being managed. Volumes are logical data containers, each with a unique name—in other words, a namespace of named objects. A user of storage services attaches to their volume via a directory lookup that resolves the name to the actual data container.

     

    This data container actually resides in a two-tier storage service. The frontend tier is optimized for memory. All requests submitted by end-users are handled by this tier: metadata lookups as well as servicing read requests out of cache and appending write operations to the log.

     

    The backend tier of the storage service provides a device-based, stable store. The tier is composed of a set of device pools, each pool providing a different class of service. Simplistically, one can imagine this backend tier supporting two device pools. One pool provides high performance but has a relatively small amount of capacity. The second pool provides reduced performance but a huge amount of capacity.

     

    Finally, it is important to tease out the frontend tier’s log facility as a distinct, third component, because this facility is key to supporting performant write requests while satisfying data availability and durability requirements.
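
    To illustrate how these three components relate, here is a minimal sketch using simplified in-memory stand-ins. The class and method names are invented for this example; real systems such as Colossus or Ceph are of course far more involved, with replication, sharding, and failure handling omitted here.

```python
class NameService:
    """Directory mapping volume names to their data containers."""
    def __init__(self):
        self._volumes = {}

    def create_volume(self, name):
        self._volumes[name] = Volume(name)
        return self._volumes[name]

    def lookup(self, name):
        return self._volumes[name]  # resolve name -> data container


class Volume:
    """Two-tier storage with an append-only log stand-in for durability."""
    def __init__(self, name):
        self.name = name
        self.cache = {}     # frontend tier: memory-optimized reads
        self.log = []       # log facility: durable writes (would be replicated)
        self.backend = {}   # backend tier: device pools providing capacity

    def write(self, key, value):
        self.log.append((key, value))  # acknowledge once the log append is durable
        self.cache[key] = value

    def read(self, key):
        if key in self.cache:          # served by the frontend tier
            return self.cache[key]
        return self.backend.get(key)   # fall back to the capacity tier

    def flush(self):
        """Periodically destage logged writes to the backend device pools."""
        for key, value in self.log:
            self.backend[key] = value
        self.log.clear()


if __name__ == "__main__":
    names = NameService()
    vol = names.create_volume("tenant-a/vol0")
    vol.write("object-1", b"hello")
    print(names.lookup("tenant-a/vol0").read("object-1"))
```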

     

    In the weeks ahead, I will take up additional aspects of the cloudification of storage. In the meantime, you can learn about things Intel is doing to enable this new approach to storage at intel.com/storage.

    Read more >

    Intel Supports CONNECT for Health Making Chronic Disease Patients’ Lives Better

    By Alice Borrelli, Global Healthcare Director. Tomorrow, Senate and House champions for remote care will take an important step toward making the lives of chronic disease patients better through the introduction of the CONNECT for Health legislation. Working with providers … Read more >

    The post Intel Supports CONNECT for Health Making Chronic Disease Patients’ Lives Better appeared first on Policy@Intel.

    Read more >

    Powerless or Power-Less?

    DCD Magazine, contributed article

     

    While every facet of data center management is changing at a rapid pace, operating budgets rarely keep up. Data volume doubles every 18 months and applications every two years; in contrast, operating budgets take eight years to double (IDC Directions, 2014).

     

    IT has always been asked to do more with less, but the dynamic nature of the data center has been accelerating in recent years. Smart devices, big data, virtualization, and the cloud continue to change service delivery models and elevate the importance of flexibility, elasticity, and scalability.

     

    Every facet of data center management, as a result, has been complicated by an incredibly rapid rate of change. Thousands of devices move on and off intranets. Fluid pools of compute resources are automatically allocated. Does this ultra-dynamic environment make it impossible for IT and facilities management teams to identify under-utilized and over-stressed resources?

     

    If so, energy consumption in the data center will continue to skyrocket. And data centers already consume 10 percent of all energy produced around the globe, according to recent Natural Resources Defense Council reports.

     

    Fortunately, IT is far from powerless even within these challenging data center conditions.

     

    Discovering some secret weapons

     

    Ironically, in today’s data centers consisting of software-defined resources, the secret weapon for curbing energy costs lies in the hardware. Rack and blade servers, switches, power distribution units, and many other data center devices provide a wealth of power and temperature information during operation. Data center scale and the diversity of the hardware make it too cumbersome to manually collect and apply this information, which has led to a growing ecosystem of energy management solution providers.

     

    Data center managers, as a result, have many choices today. They can take advantage of a management console that integrates energy management, have an integrator add energy management middleware to an existing management console, or independently deploy an energy management middleware solution to gain the necessary capabilities.

     

    Regardless of the deployment option, a holistic energy management solution allows IT and facilities teams to view, log, and analyze energy and temperature behaviors throughout the data center. Automatically collected and aggregated power and thermal data can drive graphical maps of each room in a data center, and data can be analyzed to identify trends and understand workloads and other variables.

     

    Visibility and the ability to log energy information equip data center managers to answer basic questions about consumption and to make better decisions relating to data center planning and optimization efforts.

     

    Best-in-class energy management solutions take optimization to a higher level by combining automated monitoring and logging with real-time control capabilities. For example, thresholds can be set to cap power for certain servers or racks at appropriate times or when conditions warrant. Servers that are idle for longer than a specified time can be put into power-conserving sleep modes. Power can be allocated based on business priorities, or to extend the life of back-up power during an outage. Server clock rates can even be adjusted dynamically to lower power consumption without negatively impacting service levels or application performance.
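
    As a rough illustration of the control side, the sketch below walks a list of servers and decides whether to cap power or put an idle machine to sleep. The readings, thresholds, and actions are hypothetical; in practice these values would come from the energy management solution's own instrumentation and policies.

```python
from dataclasses import dataclass

@dataclass
class ServerReading:
    name: str
    power_watts: float   # current draw reported by the platform
    idle_minutes: int    # time since the last meaningful workload

# Illustrative policy values; real ones depend on the rack's power budget.
RACK_POWER_CAP_WATTS = 350.0
IDLE_SLEEP_THRESHOLD_MIN = 30

def plan_actions(readings):
    """Return (server, action) pairs for servers violating the policy."""
    actions = []
    for r in readings:
        if r.idle_minutes >= IDLE_SLEEP_THRESHOLD_MIN:
            actions.append((r.name, "enter low-power sleep state"))
        elif r.power_watts > RACK_POWER_CAP_WATTS:
            actions.append((r.name, f"apply power cap at {RACK_POWER_CAP_WATTS:.0f} W"))
    return actions

if __name__ == "__main__":
    sample = [
        ServerReading("rack3-node01", 410.0, 2),
        ServerReading("rack3-node02", 120.0, 75),
    ]
    for server, action in plan_actions(sample):
        print(f"{server}: {action}")
```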

     

    Energy-conscious data centers take advantage of these capabilities to meet a broad range of operating objectives including accurate capacity planning, operating cost reduction, extending the life of data center equipment, and compliance with “green” initiatives.

     

    Common uses and proven results

     

    Customer deployments highlight several common motivations, and provide insights in terms of the types and scale of results that can be achieved with a holistic energy management solution and associated best practices.

     

    • Power monitoring. Identifying and understanding peak periods of power use motivate many companies to introduce an energy management solution. The insights gained have allowed customers to reduce usage by more than 15 percent during peak hours, and to reduce monthly data center utility bills even as demands for power during peak periods goes up. Power monitoring is also being applied to accurately charge co-location and other service users.

    • Increasing rack densities. Floor space is another limiting factor for scaling up many data centers. Without real-time information, static provisioning has traditionally relied upon power supply ratings or derated levels based on lab measurements. Real-time power monitoring typically proves that the actual power draw comes in much lower. With the addition of monitoring and power capping, data centers can more aggressively provision racks and drive up densities by 60 to more than 80 percent within the same power envelope.

    • Identifying idle or under-used servers. “Ghost” servers draw as much as half of the power used during peak workloads. Energy management solutions have shown that 10 to 15 percent of servers fall into this category at any point in time, and help data center managers better consolidate and virtualize to avoid this wasted energy and space.

    • Early identification of potential failures. Besides monitoring and automatically generating alerts for dangerous thermal hot spots, power monitoring and controls can extend UPS uptime by up to 15 percent and prolong business continuity by up to 25 percent during power outages.

    • Advanced thermal control. Real-time thermal data collection can drive intuitive heat maps of the data center without adding expensive thermal sensors. Thermal maps can be used to dramatically improve oversight and fine-grained monitoring (from floor level to device level). The maps also improve capacity planning, and help avoid under- and over-cooling. With the improved visibility and threshold setting, data center managers can also confidently increase ambient operating temperatures. Every one-degree increase translates to 5 to 10 percent savings in cooling costs.

    • Balancing power and performance. Trading off raw processor speed for smarter processor design has allowed data centers to decrease power by 15 to 25 percent with little or no impact on performance.

    Time to get serious about power

     

    Bottom line, data center hardware still matters. The constantly evolving software approaches for mapping resources to applications and services call for real-time, fine-grained monitoring of the hardware. Energy management solutions make it possible to introduce this monitoring, along with power and thermal knobs that put IT and facilities in control of energy resources that already account for the largest line item on the operating budget.

     

    Software and middleware solutions that allow data center managers to keep their eyes on the hardware and the environmental conditions let automation move ahead full speed, safely, and affordably – without skyrocketing utility bills. Power-aware VM migration and job scheduling should be the standard practice in today’s power-hungry data centers.

    Read more >

    Intel’s Secret Cybersecurity Advantage

    Intel Corporation has a secret advantage to protect itself from cyber threats: a world-class Information Technology (IT) shop. The 2015-2016 Intel IT Annual Performance Report showcases the depth of security, operational efficiency, and innovation that deliver robust IT services for business value.

     

    IT is typically a thankless job, relegated to the back office, data centers, network rooms, and call-center cubicles.  Although not a profit center, Intel IT has an important role in the daily business of a global technology innovation and manufacturing giant.  Cybersecurity is an integral part of IT’s job to keep Intel running. 

     


     

    I spent many years working within Intel IT and security operations.  I can say with confidence it is one of the best IT shops in the industry.  The proof is in the report.

     

    Intel IT has produced an annual report for many years to highlight their efforts to enable growth of the business, improve productivity of employees, manage costs, and protect the confidentiality of data and availability of critical systems.

     

    This year is no different. Intel has a massive network and digital net worth to protect. Intel presents a big target to attackers, both internal and external, and must defend itself with industry best practices. In many cases, Intel IT is involved in development and proof-of-concept testing with the Intel Security solutions teams to vet products and request needed capabilities in response to new threats.

     

    Here is a quick rundown of how Intel IT security stands guard. Security supports over 100,000 employees in 72 countries, at 153 sites. They are charged with protecting networks, clouds, servers, storage, PCs, tablets, phones, and all the applications connecting them. They must protect very sensitive silicon manufacturing, assembly, and test facilities where robots, chemicals, and people are making the magic of computer chips a reality.

     

    Every day about 13 billion events get logged in the security tools.  These are critical to detect threats and attacks.  In the past year, 225 million pieces of malware were blocked from infecting Intel’s networks and computers.  Keeping systems patched and squashing vulnerabilities is a huge and constant job.  Over 12 million security events were addressed to close system vulnerabilities.  Intel IT security systems, people, and management are vigilant and focused in their role.  More importantly, they and the fellow employees they serve understand the value of their contribution, security policies, and continual awareness training.  Security is a tremendously big job, but when management, employees and security professionals work as a team, incredible results are possible.

     

    The 2015-2016 Intel IT Annual Performance Report can be downloaded for free here: http://www.intel.com/content/dam/www/public/us/en/documents/best-practices/intel-it-annual-performance-report-2015-16-paper.pdf

     

     

    Interested in more?  Follow me on Twitter (@Matt_Rosenquist) and LinkedIn to hear insights and what is going on in cybersecurity.

    Read more >

    New Meaningful Use 3 Requirement: Inclusion of Patient Generated Health Data

    In December, Centers for Medicare & Medicaid Services (CMS) announced final rules of Meaningful Use 3 (MU3)—the third and final iteration of the Meaningful Use Program. The principal goal of this incentive program is to ensure that electronic health records are being used by providers in a way that improves quality of care (e.g., used for e-prescribing, or for submission of clinical quality measures). MU requires providers to meet these criteria in order to receive incentive payments and avoid downward reimbursement adjustments.

     

    As part of MU3, eligible providers will be required to integrate Patient Generated Health Data (PGHD) with clinical data in the EHR for at least 5 percent of the patient population. PGHD includes any data that is generated outside of the clinical setting. Examples include data captured by a device such as a smart phone, or self-reported data (e.g., diet, functional status, emotional well-being) that is manually recorded by the patient. The patient both captures and transfers that data to the provider.
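
    As a simple illustration of what "capture and transfer" might look like, the sketch below packages a few hypothetical device and self-reported readings into a single record for submission to a provider's system. The field names and identifiers are invented for this example; a real integration would follow the EHR vendor's interface or a standards-based format such as FHIR.

```python
import json
from datetime import datetime, timezone

def build_pghd_record(patient_id, device_readings, self_reported):
    """Bundle device-captured and self-reported data into one PGHD payload."""
    return {
        "patient_id": patient_id,                       # hypothetical identifier
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "source": "patient",                            # generated outside the clinical setting
        "device_readings": device_readings,             # e.g., from a wearable or phone
        "self_reported": self_reported,                 # e.g., diet, mood, functional status
    }

if __name__ == "__main__":
    record = build_pghd_record(
        patient_id="example-12345",
        device_readings=[{"type": "heart_rate", "value": 72, "unit": "bpm"}],
        self_reported=[{"type": "emotional_wellbeing", "value": "good"}],
    )
    # In a real workflow this payload would be transmitted to the provider's EHR.
    print(json.dumps(record, indent=2))
```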

     

    Inclusion of PGHD in this third and final phase of Meaningful Use is exciting for several reasons.

    First, this new rule has potential to incentivize providers to invest in the technology and infrastructure (e.g., data storage and security) that will support the integration and use of this data, which to date has not been systematically incorporated into routine patient care.

     

    Second, this new rule coincides with the rapidly growing wearable device market and consumer use of these devices that allows patients to capture their own health data outside of the clinic or hospital setting. Integrating these data points with clinical data and allowing providers to use these data at the point of care will contribute to patient engagement, patient activation, and self-management.

     

    Third, at the policy level, this is likely to drive interoperability and data security standards, which could have broader and positive implications for other types of healthcare data and analytics.

     

    How should providers prepare?

    This new ruling will go into effect in 2018, giving providers time to make changes to current EMRs and technology to support the transfer, use, and storage of this data.

     

    At Intel, we are working to advance these goals through data security efforts, big data analytics, data storage capabilities, and wearable devices that promote and support PGHD.

     

    One such initiative within Intel Health & Life Sciences involves Big Cloud Analytics and its COVALENCE Health Analytics Platform. The COVALENCE Health Analytics Platform is powered by Intel Xeon processor-based servers in the cloud, which ensures a secure, reliable, and scalable infrastructure. Big Cloud Analytics utilizes the Basis Peak watch, which provides 24×7 real-time heart rate monitoring and supplies metrics for sleep patterns, steps taken, skin temperature, and perspiration. It collects readings on 50 biometric data points every 60 seconds and syncs the data securely with the Basis Cloud. This allows insurance providers, healthcare institutions, and employers to securely use wearable device data to engage patients with event-triggered personalized messaging.

     

    Biometric sensor data gathered from the device is also transmitted to the cloud or to on-premise data storage and aggregated in the COVALENCE Health Analytics Platform. This platform transforms the data into business intelligence and predictive analytics, generating wellness scores, bio-identity scores, and other metrics. Insights based on analysis of the data points and trends provide an early indication of potential health issues or lack of progress toward health goals.
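
    The trend analysis described above can be pictured with a very small example: compare recent readings against a personal baseline and flag sustained deviations. The metric, window, and tolerance here are hypothetical illustrations and are not how the COVALENCE platform actually computes its scores.

```python
from statistics import mean

def flag_trend_deviation(baseline, recent, tolerance=0.15):
    """Flag when the recent average drifts more than `tolerance` from the baseline."""
    recent_avg = mean(recent)
    drift = (recent_avg - baseline) / baseline
    return drift, abs(drift) > tolerance

if __name__ == "__main__":
    resting_hr_baseline = 62.0                  # beats per minute, personal baseline
    last_week = [64, 66, 71, 73, 75, 78, 80]    # hypothetical daily readings
    drift, flagged = flag_trend_deviation(resting_hr_baseline, last_week)
    if flagged:
        print(f"Resting heart rate drifted {drift:+.0%} from baseline: early indicator worth a look")
```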

     

    PGHD as part of routine care: opportunities and challenges

    While PGHD will substantially increase the number of data points that can inform healthcare and lead to new insights, we recognize that operationalizing the transmission and use of PGHD will not happen instantly, nor effortlessly. Many questions remain as to how this data will be most effectively used by providers and patients. For example, what is relevant data? How should providers communicate this to patients so that the appropriate data can be collected and transferred? How much data will providers want and need to obtain in order to make this data useful for patient care? How often will providers want to see this data? How might this influx of data affect staff or clinic workflows? From a user experience perspective, how will this data be best displayed so that providers and patients alike can act upon it? Perhaps further research, particularly ethnographic research that takes into account both the clinician and patient perspective, is needed if we are to use this data in a way that translates to better patient outcomes.

    Read more >

    Does Your IT Enterprise Have an Internet of Things (IoT) Strategy?

    The Internet of Things (IoT) – the technology world is abuzz about it. Today, more objects are connected to the Internet than there are people in the world, and the number of IoT objects is expected to grow to between 25 and 50 billion by 2020, depending on which analyst you read. Security cameras, household and business appliances, lighting fixtures, climate control systems, water treatment plants, cars, traffic lights, fetal heart monitors, and power plant controls, just to name a few—the opportunities to collect and use data are seemingly endless.

     

    What does all this mean for enterprise IT?

     

    As the Industry Engagement Manager in Intel IT EMEA (Europe, Middle East, and Africa), I’m working with my colleagues to implement IoT solutions that align to our IT IoT strategy.

     

    Why does IT need an IoT strategy, you ask? Well, considering the growth of IoT solutions over time, business groups in many companies are looking for their own IoT point solutions to solve their problems. They may not come to IT for the solution. Many may implement differing solutions, such as sensors, gateways, protocols, and platforms. This is not the most cost-effective or efficient approach for Intel, for many reasons. Who will manage and support these point solutions over time? Who will verify that they are secure? Can the network handle the new data volume growth? If a solution “breaks,” will the business groups ask IT to fix it? If so, IT may inherit solutions that do not match the enterprise architecture needs and therefore may need to be redesigned, replaced, or both. These are all important questions, and there are many other reasons to have an IoT strategy, which I will discuss in a future blog. As IT professionals, we need to have a seat at the table when IoT solutions are being defined, to ensure the business gets it right the first time.

     

    We recently published a white paper, “Integrating IoT Sensor Technology into the Enterprise,” describing the best practices we have developed relating to IoT. The paper shares the 14 best practices we use that enable us to successfully launch an IoT project. We hope that by sharing these best practices, we can help others to also successfully implement IoT solutions in their enterprise.

     

    What we’ve learned is that once you’ve defined the process, IoT projects can be implemented in an efficient and streamlined fashion. Each step does require some effort to define its requirements, but once the steps are defined, they can be used in a repeatable manner. To summarize our defined best practices, here’s the flow:

     

    Pre-Explore the Technology and Concept

    • Best Practice #1: Build an IoT Team
    • Best Practice #2: Define the IoT System

     

    Explore the Project Feasibility and Value

    • Best Practice #3: Determine the Business Value
    • Best Practice #4: Acquire Stakeholder Agreement and Funding

     

    Plan and Scope the Project

    • Best Practice #5: Classify the Sensor Data
    • Best Practice #6: Design the Network Infrastructure and Choose IoT Devices
    • Best Practice #7: Review Environmental Conditions
    • Best Practice #8: Define Space and Electrical Power Needs

     

    Develop and Deploy the IoT System

    • Best Practice #9: Secure the IoT Devices and Data
    • Best Practice #10: Align with Privacy and Corporate Governance Policies
    • Best Practice #11: Design for Scalability
    • Best Practice #12: Integrate and Manage the IoT Devices
    • Best Practice #13: Establish a Support Model
    • Best Practice #14: Plan the Resources

     

    Using these best practices, we’ve done many IoT proofs of concept across our enterprise, using components of the Intel® IoT Platform and the Intel® Intelligent Systems Framework. Over time we are adding elements of the Intel IoT Platform to our environment. We are currently using many aspects of the platform, and so are other companies, which are turning to Intel for advice on how best to implement their IoT solutions. For example, Siemens has adopted Intel’s IoT hardware and software stack for their smart parking initiative.
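
    To give a feel for that "plumbing," here is a minimal sketch of an edge node sending a classified sensor reading to a gateway ingestion endpoint. The endpoint URL, topic-style payload fields, and data-classification tag are assumptions for illustration only; the white paper does not prescribe a specific protocol, and a production deployment would add the security, privacy, and device-management layers covered in the best practices above (many deployments would also use MQTT or a vendor agent rather than plain HTTP).

```python
import json
import time
import urllib.request

# Hypothetical on-premises gateway ingestion endpoint.
GATEWAY_URL = "http://gateway.example.local:8080/ingest"

def read_temperature_c():
    """Stand-in for a real sensor driver."""
    return 21.7

def publish_reading():
    payload = {
        "sensor_id": "lab1-temp-04",
        "value_c": read_temperature_c(),
        "classification": "internal",   # per Best Practice #5: classify the sensor data
        "timestamp": time.time(),
    }
    request = urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return response.status

if __name__ == "__main__":
    print("gateway responded with HTTP", publish_reading())
```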

     

    Our mission is to standardize on an end-to-end Intel IoT Platform-based solution that meets the wide and varied IoT needs of Intel, a global organization. Intel IT wants to transform the business by providing the IoT “plumbing” – that is, the platform building blocks – that enable Intel’s business groups to easily deploy IoT solutions when they need them.

     

    Examples of IoT technology enabled by Intel include the Intel® Quark™ microcontroller D2000, Intel Gateways featuring Intel® Quark™ or Intel® Atom™ processors, and Intel® Security (McAfee) solutions integrated within our Wind River OS (Linux* or RTOS-VxWorks*). Wind River, which is a wholly owned subsidiary of Intel, also has an edge management solution for centrally managing edge devices, known as Helix* Device Cloud.

     

    The IoT projects that we’ve done so far have shown great promise, and have resulted in significant ROI. Fully integrating the IoT into the enterprise isn’t an overnight project – it’s a continual journey and a significant change in how business is done. But putting the building blocks in place now will make the journey shorter and easier, and will enable Intel to fully realize the business value of the IoT. You can learn more about our other IoT projects by reviewing our recently published 2015-2016 Intel IT Annual Performance Report.

     

    I’d be interested to hear what other enterprise IT professionals are doing with IoT. Please share your thoughts and experiences by leaving a comment below – I look forward to the conversation!

    Read more >