10 Mobile BI Strategy Questions: Enterprise Mobility

Is your mobile business intelligence (BI) strategy aligned with your organization’s enterprise mobility strategy? If you’re not sure what this means, you’re in big trouble. In its simplest form, enterprise mobility can be considered a framework to maximize the use of mobile devices, wireless networks, and all other related services in order to drive growth and profitability. However, it goes beyond just the mobile devices or the software that runs on them to include people and processes.


It goes without saying that enterprise mobility should exist in some shape or form before we can talk about mobile BI strategy, even if the mobile BI engagement happens to be the first pilot planned as a mobility project. Therefore, an enterprise mobility roadmap serves both as a prerequisite for mobile BI execution and as the foundation on which it relies.


When the development of a successful mobile BI strategy is closely aligned with the enterprise mobility strategy, the company benefits from the resulting cost savings, improvement in the execution of the mobile strategy, and increased value.


Alignment with Enterprise Mobility Results in Cost Savings


Although mobile BI will inherit most of its rules for data and reports from the underlying BI framework, many of the components that it relies on during execution will depend on the enterprise rules, or lack thereof. For example, the devices on which the mobile BI assets (reports) are consumed will be offered and supported as part of an enterprise mobility management system, including bring-your-own-device (BYOD) arrangements. Therefore, operating outside of these boundaries could not only be costly to the organization but could also raise legal and compliance concerns.


Whether the mobile BI solutions are built in-house or purchased, as with any other technology initiative it doesn’t make sense to reinvent the wheel. Existing contracts with software and hardware vendors could offer major cost savings. Moreover, fragmented approaches that deliver the same requirement or the same functionality separately for multiple groups are not a good use of scarce resources.


For example, forecast reports built for sales managers within the customer relationship management (CRM) system and forecast reports developed on the mobile BI platform may offer the same or similar functionality and content, resulting in confusion and duplicated effort.


Leveraging Enterprise Mobility Leads to Improved Execution


If you think about it, execution of the mobile BI strategy can be improved in all aspects if an enterprise mobility framework exists that can be leveraged. The organization’s technology and support infrastructure (two topics I will discuss later in this series) are the obvious ones worth noting. Consider this—how can you guarantee effective delivery of BI content when you roll out to thousands of users without a robust mobile device support infrastructure?


If we arm our sales force with mobile devices around the same time we plan to deliver our first set of mobile BI assets, we can’t expect flawless execution and increased adoption. What if the users have difficulty setting up their devices and have nowhere to turn for immediate and effective support?


Enterprise Mobility Provides Increased Value for Mobile BI Solutions


By aligning our mobile BI strategy with our organization’s enterprise mobility framework, we not only increase our chances of success but, most importantly, gain the opportunity to provide increased value beyond pretty reports with colorful charts and tables. This increased value means that we can deliver an end-to-end solution even though we may not be responsible for all of its components under the BI umbrella. Enterprise mobility components such as connectivity, device security, and device management contribute to a connected delivery system that mobile BI will share.


Bottom Line: Enterprise Mobility Plays an Important Role


Enterprise mobility will influence many of mobile BI’s success criteria. When we’re developing a mobile BI strategy, we need not only to stay in close alignment with the enterprise mobility strategy so we can take advantage of existing synergies, but also to consider the potential gaps that we may have to address if the roadmap does not provide timely solutions.


How do you see enterprise mobility influencing your mobile BI execution?


Stay tuned for my next blog in the Mobile BI Strategy series.


Connect with me on Twitter at @KaanTurnali and LinkedIn.


This story originally appeared on the SAP Analytics Blog.


Intel France Collaborates with Teratec to Open Big Data Lab

By Valère Dussaux

Here at Intel in France, we recently announced a collaboration with the European-based Teratec consortium to help unlock new insights into sustainable cities, precision agriculture and personalized medicine. These three themes are closely interlinked because each of them requires significant high performance computing power and big data analysis.


Providing Technology and Knowledge

The Teratec campus, located south of Paris, comprises more than 80 organisations from the world of commerce and academia. It’s a fantastic opportunity for us at Intel to provide our expertise, not only in the form of servers, networking solutions and big data analytics software, but also through the skills and knowledge of our data scientists, who will work closely with other scientists on the vast science and technology park.


The big data lab will be our principal lab for Europe and will initially be focused on proof-of-concept work, with our first project being in the area of precision agriculture. As techniques mature, we will bring what we learn into the personalized medicine arena, where a major focus is the analysis of merged clinical and genomic data that are currently stored in silos, as we seek to advance the processing of unstructured data.


We will also focus on the analysis of merged clinical data and open data such as weather, traffic and other publicly available data, in order to help healthcare organizations enhance resource allocation and to help health insurers and payers build sustainable healthcare systems.


Lab Makes Global Impact

You may be asking why Intel is opening a big data lab in France. Well, the work we will be undertaking at Teratec will benefit not only colleagues and partners in France and Europe, but the wider world too. The challenges we all face collectively around an ageing population and the movement of people towards big cities present unique problems, with healthcare very much towards the top of that list. And France presents a great environment for innovation, especially in the three focus areas, as the Government here is in the process of promulgating a set of laws that will really help build a data society.


I highly recommend taking time to read about some of the healthcare concepts drawn up by students on the Intel-sponsored Innovation Design and Engineering Master programme, run jointly by Imperial College and the Royal College of Art (RCA), in our ‘Future Health, Future Cities’ series of blogs. For sustainable cities, the work done at Teratec will allow us to predict trends and help mitigate the risks associated with the expectation that more than two-thirds of the world’s population will be living in big cities by 2050.


So far, research into solutions has been curtailed by both technical and knowledge constraints, but we look forward to overcoming these challenges with partners at Teratec in the coming years. We know there are significant breakthroughs to be made as we push towards providing personalized medicine at the bedside. Only then can we truly say we are forging ahead to build a future for healthcare that matches the future demands of our cities.





We’d love to keep in touch with you about the latest insight and trends in Health IT so please drop your details here to receive our quarterly newsletter.


New HP and Intel Alliance to Optimize HPC Workload Performance for Targeted Industries

HP and Intel are again joining forces to develop and deliver industry-specific solutions with targeted workload optimization and deep domain expertise to meet the unique needs of High Performance Computing (HPC) customers. These solutions will leverage Intel’s HPC scalable system framework and HP’s solution framework for HPC to take HPC mainstream.


HP systems innovation augments Intel’s chip capabilities with end-to-end systems integration, density optimization and energy efficiency built into each HP Apollo platform. HP’s solution framework for HPC optimizes workload performance for targeted vertical industries. HP offers clients Solutions Reference Architectures that deliver the ability to process, analyze and manage data while addressing complex requirements across a variety of industries, including Oil and Gas, Financial Services and Life Sciences. With HP HPC solutions, customers can address their need for HPC innovation with an infrastructure that delivers the right compute for the right workload at the right economics, every time.


In addition to combining Intel’s HPC scalable system framework with HP’s solution framework for HPC to develop HPC-optimized solutions, the HPC Alliance goes a step further by introducing a new Center of Excellence (CoE) specifically designed to spur customer innovation. This CoE combines deep vertical industry expertise and technological understanding with the appropriate tools, services and support, making it simple for customers to drive innovation with HPC. The service is open to all HPC customers, from academia to industry.


Today, in Grenoble, France, customers have access to HP and Intel engineers at the HP and Intel Solutions Center. Clients can conduct a proof of concept using the latest HP and Intel technologies. Furthermore, HP and Intel engineers stand ready to help customers modernize their codes to take advantage of new technologies, resulting in faster performance, improved efficiencies, and ultimately better business outcomes.



HP and Intel will make the HPC Alliance announcement at ISC’15 in Frankfurt, Germany July 12-16, 2015. To learn more, visit and search ‘high performance computing’.


Intel Rolls Out Enhanced Lustre* File System

For High Performance Computing (HPC) users who leverage open-source Lustre* software, a good file system for big data is now getting even better. That’s a key takeaway from announcements Intel is making this week at ISC15 in Frankfurt, Germany.


Building on its substantial contributions to the Lustre community, Intel is rolling out new features that will make the file system more scalable, easier to use, and more accessible to enterprise customers. These features, incorporated in Intel® Enterprise Edition for Lustre* 2.3, include support for Multiple Metadata Targets in the Intel® Manager for Lustre* GUI.


The Multiple Metadata Target feature allows Lustre metadata to be distributed across servers. Intel Enterprise Edition for Lustre 2.3 supports remote directories, which allow each metadata target to serve a discrete sub-directory within the file system name space. This enables the size of the Lustre namespace and metadata throughput to scale with demand and provide dedicated metadata servers for projects, departments, or specific workloads.
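To make the idea concrete, here is a toy sketch of that routing pattern in plain Python. This is not Lustre code and all names are hypothetical; it simply shows how a namespace can assign each remote sub-directory to its own metadata target so that metadata load spreads across servers.

```python
# Toy model (not Lustre code): each "remote directory" is served by one
# dedicated metadata target (MDT); everything else defaults to MDT 0.

class MetadataTarget:
    def __init__(self, index):
        self.index = index
        self.entries = {}            # path -> metadata record

    def create(self, path, meta):
        self.entries[path] = meta

class Namespace:
    def __init__(self, num_mdts):
        self.mdts = [MetadataTarget(i) for i in range(num_mdts)]
        self.dir_to_mdt = {}         # top-level dir -> MDT index

    def add_remote_dir(self, dirname, mdt_index):
        # Like giving /projects or /scratch a dedicated metadata server.
        self.dir_to_mdt[dirname] = mdt_index

    def create_file(self, path, meta):
        top = path.lstrip("/").split("/")[0]
        mdt = self.mdts[self.dir_to_mdt.get(top, 0)]
        mdt.create(path, meta)
        return mdt.index             # which MDT served the operation

ns = Namespace(num_mdts=3)
ns.add_remote_dir("projects", 1)
ns.add_remote_dir("scratch", 2)
print(ns.create_file("/projects/a.dat", {"uid": 1000}))  # 1
print(ns.create_file("/home/b.dat", {"uid": 1000}))      # 0
```

The payoff mirrors the feature described above: metadata throughput scales with the number of targets, and a busy project or department gets its own dedicated server.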


This latest Enterprise Edition for Lustre release supports clients running Red Hat Enterprise Linux (RHEL) 7.1 as well as client nodes running SUSE Linux Enterprise 12.


The announcements don’t stop there. Looking ahead a bit, Intel is preparing to roll out new security, disaster recovery, and enhanced support features in Intel® Cloud Edition for Lustre 1.2, which will arrive later this year. Here’s a quick look at these coming enhancements:


  • Enhanced security— The new version of Cloud Edition adds network encryption using IPSec to provide enhanced security. This feature can be automatically configured to ensure that the communication of important data is always secure within the file system, and when combined with EBS encryption (released in version 1.1.1 of Cloud Edition) provides a complete and robust end-to-end security solution for cloud-based I/O.
  • Disaster recovery—Existing support for EBS snapshots is being expanded to support the recovery of a complete file system. This feature enhances file system durability and increases the likelihood of recovering important data in the case of failure or data corruption.
  • Supportability enhancements—Cloud Edition supportability has been enhanced with the addition of client mounting tools, updates to instance and target naming, and added network testing tools. These changes provide a more robust framework for administrators to deploy, manage, and troubleshoot issues when running Cloud Edition.


Making adoption and use of Lustre easier for organizations is a key driver behind the Intel Manager for Lustre software. This management interface includes easy-to-use tools that provide a unified view of Lustre storage systems and simplify the installation, configuration, monitoring, and overall management of the software. Even better, the Intel Distribution includes an integrated adapter for Apache Hadoop*, which enables users to operate both Lustre and Apache Hadoop within a shared HPC infrastructure.


Enhancements to the Intel Distributions for Lustre software products are a reflection of Intel’s commitment to making HPC and big data solutions more accessible to both traditional HPC users and mainstream enterprises. This commitment to the HPC and big data space is also evident in Intel’s HPC scalable system framework. The framework, which leverages a collection of leading-edge technologies, enables balanced, power-efficient systems that can support both compute- and data-intensive workloads running on the latest Intel® Xeon processors and Intel® Xeon Phi™ coprocessors.


For a closer look at these topics, visit Intel Solutions for Lustre Software and Intel’s HPC Scalable System Framework.




Intel, the Intel logo, Xeon and Xeon Phi are trademarks of Intel Corporation in the United States and other countries.
* Other names and brands may be claimed as the property of others.


How eHarmony Makes Matches in the Cloud with Big Data Analytics

For millions of years, humans searched for the right person to love based on emotion, intuition, and a good bit of pure luck. Today, it’s much easier to find your soul mate using the power of big data analytics.


Analyzing Successful Relationships


This scientific approach to matchmaking is clearly successful. On average, 438 people in the United States get married every day because of eHarmony. That’s the equivalent of nearly four percent of new marriages.  

Navigating an Ocean of Data

To keep up with its fast-growing demand, eHarmony needed to boost its analytics capabilities and upgrade its cloud environment to support a new software framework. It also needed a solution that was scalable to keep up with tomorrow’s needs.

Robust Private Cloud Environment

eHarmony built a new private cloud environment that lets it process affinity matching and conduct machine learning research to help refine the matching process. The cloud is built on Cloudera CDH software—an Apache Hadoop software distribution that enables scalable storage and distributed computing while providing a user interface and a range of enterprise capabilities. 

The infrastructure also includes servers equipped with the Intel® Xeon® processor E5 v2 and E5 v3 families. eHarmony chose the Intel Xeon processors because they had the performance it needed plus large-scale memory capacity to support the memory-intensive cloud environment. eHarmony’s software developers also use Intel® Threading Building Blocks (Intel® TBB) to help optimize new code.  

More and More Accurate Results

This powerful new cloud environment can help eHarmony accommodate more complex analyses—and ultimately produce more personalized matches that improve the likelihood of relationship success. It can also analyze more data faster than before and deliver within overnight processing windows.  

Going forward, eHarmony is ready to handle the fast-growing volume and variety of user information it takes to match millions of users every day. 

You can take a look at the eHarmony solution here or read more about it here. To explore more technology success stories, visit and follow us on Twitter.



Security on the Frontlines of Healthcare


I recently had the privilege of interviewing Daniel Dura, CTO of Graphium Health, on the subject of security on the frontlines of healthcare, and a few key themes emerged that I want to highlight and elaborate on below.


Regulatory compliance is necessary but not sufficient for effective security and breach risk mitigation. To effectively secure healthcare organizations against breaches and other security risks, one needs to start by understanding the sensitive healthcare data at risk. Where is it at rest and how is it moving over the network (inventory), and how sensitive is it (classification)? These seem like simple questions, but in practice they are difficult to answer, especially with BYOD, apps, social media, consumer health, wearables, the Internet of Things, and more driving increased variety, volume, and velocity (near real-time) of sensitive healthcare data into healthcare organizations.


There are different types of breaches. Cybercrime breaches have hit the news recently, but many others are caused by loss or theft of mobile devices or media, insider risks such as accidents or workarounds, business associates or sub-contracted data processors, or malicious insiders either snooping records or committing fraud. Effective security requires avoiding distraction by the latest headlines, understanding the various types of breaches holistically, knowing which ones pose the greatest risks for your organization, and directing the limited budget and resources available for security where they do the most good in mitigating the most likely and impactful risks.
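One simple way to picture that prioritization exercise is a hypothetical expected-loss ranking, sketched below. The breach categories echo the ones above, but the likelihood and impact figures are invented for the example, not drawn from the interview.

```python
# Illustrative only: rank breach categories by expected loss
# (likelihood x impact) to decide where a limited budget does the
# most good. All figures are made up for the sketch.

risks = {
    "lost/stolen mobile device": {"likelihood": 0.30, "impact": 200_000},
    "insider workaround":        {"likelihood": 0.25, "impact": 150_000},
    "cybercrime intrusion":      {"likelihood": 0.10, "impact": 900_000},
    "business associate breach": {"likelihood": 0.15, "impact": 300_000},
}

def expected_loss(r):
    return r["likelihood"] * r["impact"]

ranked = sorted(risks, key=lambda k: expected_loss(risks[k]), reverse=True)
for name in ranked:
    print(f"{name}: expected loss {expected_loss(risks[name]):,.0f}")
```

Note how a headline-grabbing but low-likelihood category can still top the list on impact, while quieter categories such as lost devices remain close behind; the point is to weigh both factors rather than react to the news cycle.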


Usability is key. Healthcare workers have many more information technology tools now than 10 years ago, and if usability is lacking in healthcare solutions or security, it can directly drive workarounds, non-compliance with policy, and additional risks that can lead to breaches. The challenge is to provide security together with improved usability. Examples include software encryption with hardware acceleration, SSDs with built-in encryption, and multi-factor authentication that improves the usability of both the solution and its security.


Security is everyone’s job. Healthcare workers are increasingly targeted in spear phishing attacks. Effective mitigation of this type of risk requires a cultural shift so that security is not only the job of the security team but everyone’s job. Security awareness training needs to be on the job, gamified, continuous, and meaningful.


I’m curious what types of security concerns and risks are top of mind in your organization, what challenges you are seeing in addressing them, and your thoughts on how best to mitigate them.


Not Your Father’s Client Computing Environment

There’s an old joke about Model-Ts – you could get them in any color you wanted, as long as you wanted black. That’s sort of how enterprise client computing felt 15 years ago: Here’s your monitor, there’s your CPU tower with a bunch of cables. One size fits all. As Client Product Manager at Intel Corp., I can tell you that nothing could be further from the truth today. My job is to develop and execute our IT client computing strategy at Intel, including recommended refresh cycles, procurement, and platform offerings and management for Intel employees.


Just as cars now come in all colors, shapes, and sizes, client computing has evolved far beyond the traditional monitor and tower. Technology, how people use it, and the processes we implement to manage it have all undergone significant transformations. Intel IT has evolved from the “one size fits all” approach to client computing and now offers multiple technology choices so that employees can select a device that best suits their way of working and their job requirements.


The “PC fleet” at Intel is now the “client computing fleet” and encompasses many form factors. The mobile workforce movement ushered in laptops, and in recent years the consumerization of IT has sparked huge growth in the bring-your-own-device arena. Moore’s law continues to rule, enabling people to do more and more with smaller and smaller devices. At Intel, we’re seeing a continual rise in 2-in-1 and tablet usage for certain segments of the employee population.



But one of the most exciting areas of client computing at Intel is desktop computing. As described in a recent IT@Intel white paper, the familiar “desktop”-class PC continues to fill an important role, but desktop computing as a whole has morphed beyond the desk. New form factors are demonstrating their relevance to enterprise client computing. Here are a few examples of form factors we are putting to use at Intel:

  • Mini PCs. The Intel® NUC (Next Unit of Computing) is a good example of a mini PC – an energy-efficient, fully functioning small form factor PC. Some can literally fit in the palm of your hand.
  • All-in-one PCs. This form factor integrates the system’s internal components into the same case as the display, eliminating some connecting cables and allowing for a smaller footprint. Less clutter, touch capabilities, and desktop performance are just some of the advantages that AIOs offer.
  • Compute sticks. Continuing the Moore’s law phenomenon, compute sticks are PCs that can fit in your pocket, providing the capability to turn any HDMI* display (think TV, digital sign, whatever) into a PC, running either Windows* 8.1 with Bing* or Ubuntu* 14.04 LTS.


These “stationary computing devices” can be used in a variety of enterprise settings. Mini PCs can power digital signage and conference room collaboration. All-in-ones bring the power of touch to the desktop and are particularly useful in public settings such as kiosks and lobbies. Compute sticks combine the ease of mobility with the powerful computing capabilities associated with traditional desktop PCs. You can read more about these use cases in our recent white paper “The Relevance of Desktop Computing in a Mobile Enterprise.”


Traditional desktop PCs are not remaining static either. In particular, I’m interested in the increasing wireless capabilities of desktop PCs using PCI-Express* (PCIe*). A single PCIe lane can transfer 200 MB of traffic in each direction per second – a significant improvement over standard PCI connections.
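As a quick back-of-the-envelope check of that per-lane figure: the ~200 MB/s number corresponds to first-generation PCIe signaling (an assumption on my part; later PCIe generations are considerably faster per lane), and lanes aggregate, so a wider slot scales the total accordingly.

```python
# Sanity-check the quoted figure, assuming first-generation PCIe:
# 2.5 GT/s per lane with 8b/10b line coding gives 250 MB/s raw per
# direction; ~200 MB/s is a typical effective rate after protocol
# overhead.

gt_per_s = 2.5e9                        # transfers/s, one lane, one direction
raw_mb_s = gt_per_s * 8 / 10 / 8 / 1e6  # 8b/10b coding; bits -> megabytes
print(raw_mb_s)                         # 250.0 MB/s raw per lane

effective_mb_s = 200                    # figure quoted in the text
x16_effective = 16 * effective_mb_s     # lanes aggregate linearly
print(x16_effective)                    # 3200 MB/s per direction, x16 slot
```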


Intel IT is actively preparing for a future workplace that incorporates many form factors as well as many input methods, including touch, voice, sensor, and gesture. We are transitioning from the traditional IT model of end-device control and management to a device-independent, services-based model. We are also revisiting our client computing management practices, procurement processes, and other aspects of client management. I hope to address these topics in future blogs. In the meantime, I’d appreciate hearing from readers. How is client computing changing in your organization? How are you adapting to these changes? Please share your thoughts and insights with me – and other IT professionals – by leaving a comment below. Join our conversation on the IT Peer Network.


Turbo-Charging the Software Defined Infrastructure

Today, I’d like to take a peek at what’s around the corner, so to speak, and put the spotlight on a new and exciting area of development. We’ve spent some time in this blog series exploring Software Defined Infrastructure (SDI) and its role in the journey to the hybrid cloud. We’ve looked at what’s possible now and how organisations early to the game have started to use technologies like orchestration layers and telemetry to increase agility whilst driving time, cost and labour out of their data centres. But where’s it all going next?


One innovation that we’re just on the cusp of is server disaggregation and composable resources (catchy, huh?). As with much of the innovation I’ve spoken about during this blog series, this is about ensuring the datacentre infrastructure is architected to best serve the needs of the software applications that run upon it. Consider the Facebooks*, Googles* and Twitters* of the world – hyper-scale cloud service providers (CSPs), running hyper-scale workloads. In the traditional Enterprise, software architecture is often based on virtualisation – allocating one virtual machine (VM) to one application instance as demand requires. But, what happens when this software/hardware model simply isn’t practical?


This is the ‘hyper-scale’ challenge faced by many CSPs. When operating at hyper-scale, response times are achieved by distributing workloads over many thousands of server nodes concurrently, so a software architecture designed to run on a ‘compute grid’ is used to meet scale and flexibility demands. An example of this is the MapReduce algorithm, used to process terabytes of data across thousands of nodes.
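The MapReduce pattern itself is simple enough to sketch in a few lines of Python. This toy word count shows the three phases – map over input shards, shuffle by key, reduce each key independently – which is exactly what lets the real thing fan out across thousands of nodes, since every shard and every key can be processed in parallel.

```python
# Minimal MapReduce-style word count (single process, for illustration).
from collections import defaultdict

def map_phase(shard):
    # Map: each input shard emits (key, value) pairs independently.
    return [(word, 1) for word in shard.split()]

def shuffle(pairs):
    # Shuffle: group all values by key, so each key can be reduced alone.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values; trivially parallel per key.
    return {key: sum(values) for key, values in groups.items()}

shards = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = [p for s in shards for p in map_phase(s)]
counts = reduce_phase(shuffle(pairs))
print(counts["the"])  # 3
print(counts["fox"])  # 2
```

In a real deployment each shard lives on a different node and the shuffle moves data across the network, but the programming model is the one above.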


However, along with this comes the requirement to add capacity at breathtaking pace whilst simultaneously achieving previously unheard-of levels of density to maximise space usage. Building new datacentres, or ‘pouring concrete’, is not cheap and can adversely affect service economics for a CSP.

Mix-and-Match Cloud Components


So, what’s the ‘The Big Idea’ with server disaggregation and composable resources?


Consider this: What if you could split all the servers in a rack into their component parts, then mix and match them on-demand in whatever configuration you need in order for your application to run at its best?


Let me illustrate this concept with a couple of examples. Firstly, consider a cloud service provider with users uploading in excess of 50 million photographs a day. Can you imagine the scale on which infrastructure has to be provisioned to keep up? In addition, hardly any of these pictures will be accessed after initial viewing! In this instance, the CSP could dynamically aggregate, say, lower power Intel® Atom™ processors with cheap, high capacity hard drives to create economically appropriate cold storage for infrequently accessed media.


Alternatively, a CSP may be offering a cloud-based analytics service. In this case, the workload could require aggregation of high performance CPUs coupled with high bandwidth I/O and solid state storage – all dynamically assembled, from disaggregated components, on-demand.

The Infinite Jigsaw Puzzle


This approach, the dynamic assembly of composable resources, is what Intel terms Rack Scale Architecture (RSA).


RSA defines a set of composable infrastructure resources contained in separate, customisable ‘drawers’. There are separate drawers for different resources – compute, memory, storage – like a giant electronic pick-and-mix counter. A top-of-rack switch then uses silicon photonics to dynamically connect the components together to create a physical server on demand. Groups of racks – known as pods – can be managed and allocated on the fly using our old friend the orchestration layer. When application requirements change, the components can be disbanded and recombined into infrastructure configuration as needed – like having a set of jigsaw puzzle pieces that can be put together in infinite ways to create a different picture each time.
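If it helps to picture the compose step, here is a toy sketch in Python (all names hypothetical, nothing to do with any real RSA API): pooled drawers of components that an orchestrator binds into a logical server on demand, then hands back when the workload ends.

```python
# Toy model of composable resources: take parts from pooled 'drawers',
# bind them into a logical node, and return them to the pool later.

class Drawer:
    def __init__(self, kind, units):
        self.kind = kind
        self.free = list(units)

    def take(self, n):
        if len(self.free) < n:
            raise RuntimeError(f"not enough {self.kind} available")
        taken, self.free = self.free[:n], self.free[n:]
        return taken

    def release(self, units):
        self.free.extend(units)

compute = Drawer("cpu", [f"cpu{i}" for i in range(8)])
storage = Drawer("ssd", [f"ssd{i}" for i in range(4)])

def compose(cpus, ssds):
    # The 'top-of-rack switch' step: wire selected parts into one node.
    return {"cpus": compute.take(cpus), "ssds": storage.take(ssds)}

def disband(node):
    # Recombine later: hand every component back to its drawer.
    compute.release(node["cpus"])
    storage.release(node["ssds"])

analytics_node = compose(cpus=4, ssds=2)
print(len(compute.free), len(storage.free))  # 4 2
disband(analytics_node)
print(len(compute.free), len(storage.free))  # 8 4
```

The cold-storage and analytics examples above are just different `compose` calls against different drawers: many cheap drives with few low-power CPUs in one case, fast CPUs with solid state storage in the other.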


Aside from the fun of all the creative possibilities, there are a lot of benefits to this type of approach:


  • Using silicon photonics, which transmits information by laser rather than by physical cable, means expensive cabling can be reduced by as much as three times1.
  • Server density can be increased by 1.5x and power provisioning reduced by up to six times1.
  • Network uplink can be increased by 2.5x and network downlink by as much as 25 times1.


All this means you can make optimal use of your resources and achieve granular control with high-level management. If you want to have a drawer of Intel Atom processors and another of Intel Xeon processors to give you compute flexibility, you can. Want the option of using disk or SSD storage? No problem. And want to be able to manage it all at the pod level with time left over to focus on the more innovative stuff with your data centre team? You got it.


Some of these disaggregated rack projects are already underway. You may, for instance, have heard of the Project Scorpio initiatives in China, and Facebook’s Open Compute Project.

All this is a great example of how the software-defined infrastructure can help drive time, cost and labour out of the data centre whilst increasing business agility, and will continue to do so as the technology evolves. Next time, we’ll be looking into how the network fits into SDI, but for now do let me know what you think of the composable resource approach. What would it mean for your data centre, and your business?


1 Improvement based on standard rack with 40 DP servers, 48-port ToR switch, 1GE downlink/server and 4 x 10GE uplinks (cables: 40 downlink and 4 uplink) vs. rack with 42 DP servers, SiPh patch panel, 25Gb/s downlink, 100Gb/s uplink (cables: 14 optical downlink and 1 optical uplink). Actual improvement will vary depending on configuration and actual implementation.

Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase.  For more complete information about performance and benchmark results, visit


Multiple Alarms Feature

Hello, I am trying to remotely configure 150-ish PCs with multiple AMT Alarms.  I can see from this webpage that AMT 8.0 and later supports the Multiple Alarm Feature, and all of our machines are 8.1 or newer.  I have successfully created ind…

Exploring Intel’s HPC technology advancements at the ISC High Performance conference in Frankfurt

Next week will kick off the ISC High Performance conference, July 12 – 16 in Frankfurt, Germany. I will be joining many friends and peers across the HPC industry to share, collaborate, and learn about the advancements in High Performance Computing and Big Data.


During this international gathering, Intel’s Raj Hazra, our VP and GM of the Enterprise and HPC Platform Group, will speak about the changing landscape of technical computing and show how recent innovations and Intel’s HPC scalable system framework can help scientists, researchers, and industry maximize the potential of HPC for computation and data intensive workloads. Raj will also share details on upcoming Intel technologies, products, and ecosystem collaborations that are powering research-driven breakthroughs and ensuring that technical computing continues to fulfill its potential as a scientific and industrial tool for discovery and innovation.


In our booth, we are excited to feature an inside look at the research breakthroughs achieved by the COSMOS Supercomputing Team at the University of Cambridge. This team, led by the renowned scientist Stephen Hawking, is driving dramatic advances in cosmology in its studies of cosmic microwave background (CMB) radiation—the relic radiation left over from the Big Bang. Observing the CMB is like looking at a snapshot of the early universe.


The COSMOS team’s demo will showcase some of the new HPC technologies that are part of Intel’s HPC scalable system framework. One of these is Intel® Omni-Path Architecture, a high-performance fabric designed to deliver the performance required for tomorrow’s HPC workloads and the ability to scale to tens of thousands of nodes, and eventually hundreds of thousands. This next-generation fabric builds on the Intel® True Scale Fabric with an end-to-end solution, including PCIe* adapters, silicon, switches, cables, and management software. The demo will also be powered by the second generation of the Intel® Xeon Phi™ product family, code-named Knights Landing. This is the first time these technologies will be demonstrated publicly in Europe.


Conference participants will also have a chance to collaborate and learn about the latest efforts to modernize industry codes to realize the full potential of the latest advancements in hardware. Come by our collaboration hub in booth #930, or check out a wide variety of coding resources for developers online.


Intel has been an active participant and sponsor of ISC for many years, reflecting our commitment to working with the broader ecosystem to advance HPC and supercomputing.


I hope to see you at the conference. If you won’t be able to attend the event, you can get a closer look at the work Intel is doing to help push the boundaries of technical computing at

Read more >

Telehealth Proves It’s Good for What Ails Home Healthcare

Telehealth is often touted as a potential cure for much of what ails healthcare today. At Indiana’s Franciscan Visiting Nurse Service (FVNS), a division of Franciscan Alliance, the technology is proving that it really is all that. Since implementing a telehealth program in 2013, FVNS has seen noteworthy improvements in both readmission rates and efficiency.


I recently sat down with Fred Cantor, Manager of Telehealth and Patient Health Coaching at Franciscan, to talk about challenges and opportunities. A former paramedic, emergency room nurse and nursing supervisor, Fred transitioned to his current role in 2015. His interest in technology made involvement in the telehealth program a natural fit.


At any one time, Fred’s staff of three critical care-trained monitoring nurses, three installation technicians and one scheduler is providing care for approximately 1,000 patients. Many live in rural areas with no cell coverage – often up to 90 minutes away from FVNS headquarters in Indianapolis.


Patients who choose to participate in the telehealth program receive tablet computers that run Honeywell LifeStream Manager* remote patient monitoring software. In 30-40 minute training sessions, FVNS equipment installers teach patients to measure their own blood pressure, oxygen, weight and pulse rate. The data is automatically transmitted to LifeStream and, from there, flows seamlessly into Franciscan’s Allscripts™* electronic health record (EHR). Using individual diagnoses and data trends recorded during the first three days of program participation, staff set specific limits for each patient’s data. If transmitted data exceeds these pre-set limits, a monitoring nurse contacts the patient and performs a thorough assessment by phone. When further assistance is needed, the nurse may request a home visit by a field clinician or further orders from the patient’s doctor. These interventions can reduce the need for in-person visits requiring long-distance travel.
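The monitoring flow described above boils down to a simple per-patient rule check: compare each transmitted vital against limits set for that patient, and escalate anything out of range. The sketch below is purely illustrative; the vital names and threshold values are hypothetical examples, not FVNS's actual configuration:

```python
# Illustrative sketch of per-patient limit checking, as described above.
# Vital names and thresholds are hypothetical, not FVNS's real rules.

def exceeded_limits(reading, limits):
    """Return the vitals in `reading` that fall outside the patient's pre-set limits."""
    flagged = []
    for vital, (low, high) in limits.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            flagged.append(vital)
    return flagged

# Limits are set per patient, based on diagnosis and the first three days of data.
limits = {"systolic_bp": (90, 150), "oxygen_sat": (92, 100), "weight_lb": (150, 160)}
reading = {"systolic_bp": 162, "oxygen_sat": 95, "weight_lb": 158}

if exceeded_limits(reading, limits):
    # A monitoring nurse contacts the patient for a phone assessment.
    print("Follow up:", exceeded_limits(reading, limits))
```

In the real program the readings flow in from LifeStream rather than a dictionary, but the escalation logic follows this shape: only out-of-range transmissions trigger a nurse's call, which is what reduces unnecessary long-distance home visits.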


FVNS’ telehealth program also provides patient education via LifeStream. For example, a chronic heart failure (CHF) patient experiencing swelling in the lower extremities might receive content on diet changes that could be helpful.




Since the program was implemented, overall readmission rates have been well below national averages. In 2014, the CHF readmission rate was 4.4 percent, compared to a national average of 23 percent. The COPD rate was 5.47 percent, compared to a national average of 17.6 percent, and the CAD/CABG/AMI rate was 2.96 percent, compared to a national average of 18.3 percent.


Despite positive feedback, convincing providers and even some FVNS field staff that, with proper training, patients can collect reliable data has taken some time. The telehealth team is making a concerted effort to engage with patients and staff to encourage increased participation.


After evaluating what type of device would best meet the program’s needs, Franciscan decided on powerful, lightweight tablets. The touch screen devices with video capabilities are easily customizable and can facilitate continued program growth and improvement.


In the evolving FVNS telehealth program, Fred Cantor sees a significant growth opportunity. With knowledge gained from providing the service free to their own patients, FVNS could offer a private-pay package version of the program to hospital systems and accountable care organizations (ACOs).


Is telehealth a panacea? No. Should it be a central component of any plan to reduce readmission rates and improve workflow? Just ask the patients and healthcare professionals at Franciscan VNS.

Read more >

Methods for Seamlessly Migrating to a Different Hadoop Version

One of the techniques that Intel IT had to learn in order to gain US$351 million in value from analytics was how to migrate between Hadoop versions. Migrating to a different Hadoop version, whether from an older release or from another distribution, raises a number of questions for any organization considering it. Is the migration worth the effort? Will production software break? How can we make the transition with minimal impact to users? Intel’s IT organization faced all of these questions and challenges when it migrated from Intel’s own custom version of Hadoop, known as IDH, to Cloudera’s distribution (CDH). In this white paper, Intel IT’s Hadoop Engineering team describes its methodology and how it seamlessly migrated a live production cluster through three different version changes.


The team performed a feature-by-feature comparison of Intel’s IDH and Cloudera’s CDH and determined that moving to Cloudera’s Hadoop distribution had significant advantages. Once the decision was made to migrate, the team outlined three major concerns:


  • Coping with Hadoop variations
  • Understanding the scope of the changes
  • Completing migration in a timely manner


The first concern is about the need to understand how to properly configure the new version. The second is about the effects of the changes: making sure that application developers, internal customers, and their code running on the cluster would be minimally affected. The last concern expresses the need to make any migration quick and, ideally, transparent to live users.



Intel developed what it considers six best practices for migration:


  1. Find Differences with a Comparative Evaluation in a Sandbox Environment
  2. Define Our Strategy for the New Implementation
  3. Upgrade the Hadoop Version
  4. Split the Hardware Environment
  5. Create a Preproduction-to-Production Pipeline
  6. Rebalance the Data


The first practice addresses the first and second concerns listed above: the sandbox evaluation identified differences between the IDH and CDH environments without disrupting production. Other practices, like creating a preproduction-to-production pipeline, were designed to address the last concern, migrating quickly and with minimal impact. Intel divided each version’s instances between servers in the same rack; this leveraged the high-speed network within and between racks to move data to the new version with only one transfer.


Using these methods, Intel IT’s Hadoop team completed the full migration from IDH to CDH in five weeks. Only one piece of production code needed to be changed, and that was because it called a library that had been deprecated between Hadoop versions. Since the initial migration, this methodology has also been used for two version upgrades with no customer impact. Some of the team’s initial concerns about security have been mitigated by Cloudera’s implementation of Apache Sentry. Look for another white paper on that subject later this year.

Read more >

Innovation North: Opportunity amidst economic struggles


When I came here about two years ago to lead Intel Canada as its new country manager, I was struck by the fact that Canada appeared largely untouched by the global economic recession that crippled many world economies. Having most recently worked in the UK and Western Europe, I was used to the recession discussion pervading all business conversations; it drove virtually all investment strategies, causing a reset in the way business was executed by government and the private sector.


I saw a business and public sector that was transformed, paving the way for a new vision of how we can interact.  Just look at the UK public sector’s digital first strategy, which delivers new services first to constituents online. This measure not only cut costs but made government much more efficient.  It was truly transformative.


Then I arrived in Canada. It was like entering a different world, one that had, fortunately or unfortunately, bypassed the recession and done a wonderful job of sidestepping that whole environment.


You might be wondering why I say “unfortunately” bypassed the recession (one which irreparably damaged so many).  I think perhaps Canada missed an opportunity because it didn’t face the same kinds of economic hardships experienced by other markets. Canada’s GDP was comparatively healthy so it didn’t need to look outside its borders for expansion, nor were business leaders required to focus on boosting productivity and efficiency to survive.


Fast forward to today: The deflated Canadian dollar and economic challenges presented by the dropping oil price are impacting businesses at a time when other markets are getting back to growth.


What my overseas experience has distilled for me is that Canada is at a point today where other countries were a few years ago. We are at an economic inflection point, a moment of fundamental change. During an inflection point, businesses can either seize significant opportunities for growth and market share gain, or flounder, die and disappear.


Are Canadian companies ready to take advantage? Canadian business culture is typically more conservative when it comes to making decisions, which could be detrimental. But the collaborative business environment I’ve seen coast to coast can help companies make the transition successfully, for those willing to take the chance.


I see tremendous potential for growth in this country in a few fundamental areas:

  • Expand the delivery of online retail solutions.  I am a big believer in bricks and mortar retail but it needs to be complemented by a very comprehensive and rich online experience.  Canada is lagging far behind other markets in the adoption of online retailing but I believe strongly that to succeed in the future retailers will need a multi-channel strategy that includes bricks and mortar, e-commerce, loyalty programs, and new delivery models that integrate in-store pick up with online delivery.  They will also need to leverage big data to provide their customers with more choices and be more responsive to providing the products consumers want to purchase, where and how they want to buy them. I’ve watched with great interest the strides being taken by Canadian Tire which, as a traditional retailer, has embraced the future and is truly transforming their business along these lines.
  • We need a strong commitment at the highest levels to a public sector digital strategy that provides greater online access to services (not unlike the UK’s digital-first strategy) and can dramatically cut the cost per interaction for accessing government services.  Hand in glove with this commitment is the need for a national policy to make sure that all Canadians, regardless of location or economic status, can get online and access those services. No one should be digitally excluded.
  • Embracing the cloud is another area where Canadian companies lag globally (and I’m not talking exclusively about public clouds, but all models from public to private and hybrid). As the UK emerged from recession, cloud computing saw a rapid build-out, and I think Canada is poised for a similar growth spurt if companies are willing and able to break out of their traditional mold.
  • The employees of the future will be looking for new tools to be more effective and expand their ability to be mobile.  Companies need to be quicker to adopt new technologies and solutions to drive the knowledge economy of the future, if we want to retain the best and brightest employees.  Millennials are demanding a workplace that is more progressive and will go to companies that provide the flexible and collaborative environments they want.
  • The incubator community in Canada is very impressive and I have been amazed by the collaborative nature of businesses in this country where leaders are willing to meet, share ideas, and work together.  This bodes well for the future but there is a gap when it comes to funding research initiatives and then moving ideas into marketable commercial products. Leadership and commitment are currently lacking but there is an opportunity here once all the pieces are put in place.


There are positive signs Canada is poised for a significant leap forward in the adoption of new technology. Senior IDC Research Analyst Utsav Arora recently told a Mississauga audience that the conventional view that Canada lags 18 months behind the world is out of date. He feels that innovation and a start-up culture have cut the gap in big data adoption to between 6 and 12 months.


When something changes (like our economic climate), you can positively embrace it and use it to your benefit, or you can stand back and stare at it while everyone else benefits from that change. Businesses can no longer continue to rinse and repeat the way they have for the last 10 years.


The time – and opportunity – for change is here.  In the immortal words of Winston Churchill, “A pessimist sees the difficulty in every opportunity; an optimist sees the opportunity in every difficulty.”


I believe Canadian companies are in a perfect place to seize opportunity from the difficulties we are currently facing, if they are willing.

Read more >

Are You Realizing the Payoff of Parallel Processing?

By Andrey Vladimirov, head of HPC research at Colfax International



When it comes to high-performance computing, consumers can be divided into three basic user groups. Perhaps the most common and obvious group is the performance-hungry users who crave faster time to insight on complex workloads, cutting throughput times of days down to hours or minutes. Another class of users seeks greater scalability, which is often achieved by adding more compute nodes. Yet another type of user looks for more efficient systems that consume less energy to do a comparable amount of processing work.


Coincidentally, all of these situations can benefit greatly from parallel processing, which takes greater advantage of the capabilities of today’s multicore processors to improve performance, scalability, and efficiency. And the first step to realizing these gains is to modernize your code.


I will explore the benefits of code modernization momentarily. But first, let’s take a step back and look at the underlying hardware picture.


In the last three decades of the 20th century, processors evolved by increasing clock frequencies. This approach enabled ongoing gains in application performance until processors hit a ceiling: clock speeds of around 3 GHz and the associated heat dissipation issues.


To gain greater performance, the computing industry moved to parallel processing. Starting in the 1990s, people used distributed frameworks, such as MPI, to spread workloads over multiple compute nodes, which worked on different aspects of a problem in parallel. In the 2000s, multicore processors emerged that allowed parallel processing within a single chip. The degree to which intra-processor parallelism has evolved is very significant, with tens of cores in modern processors. For a case in point, see the Intel® Many Integrated Core Architecture (Intel® MIC Architecture), delivered via Intel® Xeon Phi™ coprocessors.


A simultaneous advance came in the form of vector processing, which adds to each core an arithmetic unit that can apply a single arithmetic operation to a short vector of multiple numbers in parallel. At this point, the math gets pretty interesting. Intel Xeon Phi products are available with up to 61 cores, each of which has 16 vector lanes in single precision. In theory, with 60 of those cores applied to a workload, the processor can accelerate throughput by a factor of 60 x 16, for a 960x gain, in comparison to running the workload on a single core without vectors (in fact, the correct factor is 2 x 960 because of the dual-issue nature of Knights Corner architecture cores, but that is another story).
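The arithmetic behind that theoretical factor is easy to check; the snippet below simply restates the figures above:

```python
# Theoretical peak speedup of a Knights Corner Intel Xeon Phi coprocessor
# versus a single scalar core, restating the arithmetic above.
cores = 60    # cores applied to the workload (of up to 61 on the chip)
lanes = 16    # single-precision vector lanes per core
speedup = cores * lanes
print(speedup)                # 960

dual_issue = 2                # Knights Corner cores can issue two instructions per cycle
print(dual_issue * speedup)   # 1920
```

These are of course peak numbers; real applications approach them only to the degree that their loops can actually be spread across all cores and vector lanes.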


And here’s where application modernization enters the picture. To realize gains like this, applications need to be modified to take advantage of the parallel processing and vectorization capabilities in today’s HPC processors. If the application can’t take advantage of these capabilities, you end up paying for performance that you can’t receive.


That said, as Intel processor architectures evolve, you get performance boosts in some areas without doing anything with your code. For instance, such architectural improvements as bigger caches, instruction pipelining, smarter branch prediction, and prefetching improve performance of some applications without any changes in the code. However, parallelism is different. To realize the full potential of the capabilities of multiple cores and vectors, you have to make your application aware of parallelism. That is what code modernization is about: it is the process of adapting applications to new hardware capabilities, especially parallelism on multiple levels.
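As a toy illustration of what "aware of parallelism" means at the loop level, compare a loop whose iterations are independent (and therefore vectorizable and easy to split across cores) with one that carries a dependency between iterations. Python is used here only for readability; the same distinction applies to the C or Fortran loops a compiler actually vectorizes:

```python
# Iterations are independent: each output element depends only on its own inputs.
# A compiler can vectorize this pattern, and multiple cores can split it up freely.
def elementwise_add(a, b):
    return [x + y for x, y in zip(a, b)]

# Each iteration depends on the previous result (a loop-carried dependency),
# so this pattern cannot be naively vectorized or distributed across cores.
def prefix_sum(a):
    out, total = [], 0
    for x in a:
        total += x
        out.append(total)
    return out

print(elementwise_add([1, 2, 3], [10, 20, 30]))  # [11, 22, 33]
print(prefix_sum([1, 2, 3, 4]))                  # [1, 3, 6, 10]
```

Much of code modernization consists of restructuring algorithms so that more of the work looks like the first pattern and less like the second.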


With some applications, this is a fairly straightforward task. With others it’s a more complex undertaking. The specifics of this part of the discussion are beyond the scope of this post. The big point is that you have to modify your code to get the payoff that comes with a multicore processing platform with built-in vectorization capabilities.


As for that payoff, it can be dramatic. This graphic shows the gains made when the same C++ code was optimized to take advantage of the capabilities of the Intel platforms. These are results from benchmarks conducted on a real-world astrophysical application called HEATCODE. Review the study.




Here’s another example of the payoff that comes with code modernization. This graphic illustrates the importance of parallelism and optimization on a synthetic N-body application designed as an educational “toy model.” Review the example.




As these examples show, when code is modernized to take full advantage of today’s HPC hardware platforms, the payoff can be enormous. That certainly applies to general-purpose multi-core processors, such as Intel Xeon CPUs. However, on top of that, for applications that know how to use multiple cores, vectors, and memory efficiently, specialized parallel processors, such as Intel Xeon Phi coprocessors, can further increase performance and lower power consumption by a factor of up to 3x. For details, see this performance-per-dollar and performance-per-watt study.


Intel Xeon Phi coprocessors build on the capabilities of the Intel Xeon platform, which is used in servers around the world. General-purpose Intel Xeon processors are available with up to 18 cores per processor chip, or 36 cores in a dual-socket configuration. These processors are already highly parallel. Intel Xeon Phi coprocessors take the architecture to a new, massively parallel level with up to 61 cores per chip.


A great thing about the Intel Xeon Phi architecture is that code written for the Intel Xeon platform can run unmodified on the Intel Xeon Phi coprocessor, as well as general-purpose CPU platforms. But there’s a catch: if the code isn’t modernized, it can’t take advantage of all of the capabilities of the Intel MIC Architecture used in the Intel Xeon Phi coprocessor. This makes code modernization essential.


Once you have a robust version of code, you are basically future-ready. You shouldn’t have to make major modifications to take advantage of new generations of the Intel architecture. Just like in the past, when computing applications could “ride the wave” of increasing clock frequencies, your modernized code will be able to automatically take advantage of the ever-increasing parallelism in future x86-based computing platforms.


At Colfax Research, these topics are close to our hearts. We make it our business to teach parallel programming and optimization, including programming for Intel Xeon Phi coprocessors, and we provide consulting services on code modernization.


We keep in close contact with the experts at Intel to stay on top of the current and upcoming technology. For instance, we started working with the Intel Xeon Phi platform early on, well before its public launch. We have since written a book on parallel programming and optimization with Intel Xeon Phi coprocessors, which we use as a basis for training software developers and programmers who want to take full advantage of the capabilities of Intel’s parallel platforms.


For a deeper dive into code modernization opportunities and challenges, explore our Colfax Research site. This site offers a wide range of technical resources, including research publications, tutorials, case studies, and videos of presentations. And should you be ready for a hands-on experience, check out the Colfax training series, which offers software developer trainings in parallel programming using Intel Xeon processors and Intel Xeon Phi coprocessors.



Intel, the Intel logo, Xeon, and Xeon Phi are trademarks of Intel Corporation in the United States and other countries. * Other names and brands may be claimed as the property of others.

©2015 Colfax International, All rights reserved

Read more >

10 Mobile BI Strategy Questions: Security


Do you have all three layers of mobile BI security covered: device, app, and data? All of the convenience and benefits of mobile devices come with particular security risks, complicating matters for technology managers. When we think about the three layers of security in mobile BI, each layer plays an equally important role.


Moreover, each layer represents a specific component of a user’s access profile. Therefore, it’s vital not only to understand how each layer completes the security picture, but also to make sure they work in tandem.


1. Mobile Device Security


This outer layer deals with protecting the mobile device, whether it’s issued by the business or allowed under a bring-your-own-device (BYOD) arrangement. The security objective is to secure corporate data assets with a comprehensive enterprise mobility solution. Such a solution would enable IT departments to have anytime/anywhere control over all of the deployed devices as well as their applications. For example, administrative options would include the ability to remotely lock and wipe devices when an employee reports a lost business phone or tablet. This approach to mobile device security can’t exist separately from, or independent of, the organization’s enterprise mobility strategy, especially for compliance reasons.


2. Mobile BI App Security


The middle layer includes the mobile BI app. The security at this layer can be linked to device security (for example, the employee profile may dictate the availability of a particular app). However, the app security can also be established in addition to and/or independent of the device security. For example, we may need to unlock the tablet first, and then unlock the mobile BI app with a password before anything else can be done. If the mobile BI software is purchased, this functionality will be dictated by the vendor. Mobile BI solutions built in-house will have a greater degree of flexibility to customize this option.


Mobile BI app security plays a critical role because it provides a secondary layer, similar to a PIN on calling cards or remote access devices. If the mobile device is lost or stolen, it helps protect the information downloaded to the app. This becomes especially critical if the mobile BI app has offline functionality, which allows full access to downloaded data without any Wi-Fi connectivity, including during use in airplane mode. As a result, this app-layer security safeguards not only the data in mobile BI reports but also connection profiles, such as server names.



3. Report and Data Security


This third and final layer usually inherits its rules from the underlying BI platform. Generally, two components make up this layer: the mobile BI asset (the report itself) and the data content displayed in the report. The mobile BI asset component determines which specific reports or dashboards a user is allowed to see. For example, this could help to separate forecast dashboards for sales from profit-and-loss reports for finance.


On the other hand, the data content component dictates what a user will see when they access the report. For example, a sales manager who’s responsible for the U.S. operation might see only the U.S. data, whereas a manager from Europe might see only European data. The combination of asset and data security allows for the management of different mobile BI assets, and helps to serve different user groups with different needs for access and security.
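A minimal sketch of these two checks, asset security and data content security, might look like the following. The user profiles, report names, and regions are invented for illustration; a real BI platform would enforce these rules server-side:

```python
# Hypothetical sketch of asset-level and data-level security checks.
# Profiles, report names, and regions below are invented for illustration.

user_profiles = {
    "us_sales_mgr": {"reports": {"sales_forecast"}, "region": "US"},
    "eu_sales_mgr": {"reports": {"sales_forecast"}, "region": "EU"},
    "finance_lead": {"reports": {"profit_and_loss"}, "region": "US"},
}

def can_open(user, report):
    """Asset security: which reports or dashboards the user may see at all."""
    return report in user_profiles[user]["reports"]

def visible_rows(user, rows):
    """Data content security: which rows the user sees inside a report."""
    region = user_profiles[user]["region"]
    return [r for r in rows if r["region"] == region]

rows = [{"region": "US", "revenue": 100}, {"region": "EU", "revenue": 80}]
print(can_open("finance_lead", "sales_forecast"))  # False: wrong asset
print(visible_rows("eu_sales_mgr", rows))          # only the EU row
```

The combination matters: two sales managers may open the same forecast dashboard yet see entirely different rows, which is exactly how one asset can serve multiple user groups.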


Bottom Line: Security is critical.


Enterprise data requires the same degree of protection as other corporate assets, and mobile BI is no different. When we’re developing a mobile BI strategy, we need to consider all three layers of security. But we also must take into account the challenges we may face due to a lack of standards and the integration of these layers at the enterprise level.


Security is critical for mobile BI because mobility offers an unmatched convenience that comes at a risk. We can’t afford to jeopardize one of our most strategic assets – enterprise data – as business transactions become ever more digital and connected. Instead, we want to use mobile BI to drive growth and profitability.


Which of the three layers of security in mobile BI do you find most challenging to manage?


Stay tuned for my next blog in the Mobile BI Strategy series.


Connect with me on Twitter at @KaanTurnali and LinkedIn.


This story originally appeared on the SAP Analytics Blog.

Read more >

With STEM Education, Women Can Create Both Technology and Their Own Futures

The decline of young women’s participation in science, technology, engineering and math (STEM) education in the U.S. is alarming for many reasons, but it’s especially resonant for someone like me, whose cultural heritage is Indian. (Watch this short video, where I discuss these topics in person).


After India gained independence in 1947, it looked to the U.S. for some of its educational models. India built technology institutes based on MIT and CalTech, and engineering and scientific education was strongly emphasized for both girls and boys. At the time, the U.S. represented a high standard in technical and scientific education for both young boys and girls. As a child, I was encouraged to pursue a STEM-based education, which is how I found my way to Intel.


The landscape has changed greatly for the U.S. in STEM education. In 2015, Intel President Renée James addressed a meeting at Portland’s St. Mary’s Academy where she highlighted some sobering statistics. When Renée first started working at Intel in 1987, the percentage of women pursuing computer science degrees in U.S. universities was 37 percent. By 2010, it had dropped by more than half, to just 14 percent. Now, women make up 57 percent of the U.S. undergraduate population, though they represent only 19 percent of U.S. graduates in engineering. By contrast, in China, where STEM is emphasized from an early age, women today make up 40 percent of the engineering workforce.


The decline of young women’s involvement in STEM education in the U.S. is troubling on many levels. According to Wired, by 2018 the U.S. STEM workforce will comprise 8.6 million jobs. It’s the fastest-growing sector in the U.S., and yet there is a shortage of individuals, particularly women, pursuing these fields. Jobs in technology, engineering, science and related fields are typically intellectually satisfying and well paying, providing women with a path to prosperity and security for themselves and their families.


The relative scarcity of women in technical and engineering fields also skews the diversity of workplaces, which in turn affects innovation and business value. According to a study by Forbes, workplace diversity is a key driver of innovation, especially in global workforces and economies. Many organizations recognize that a broad set of experiences, perspectives, and backgrounds is crucial for the development of new ideas. A diverse and inclusive workforce is also important for companies that want to attract and retain top talent. According to a joint study by the University of Maryland and Columbia Business School, gender diversity at the management level leads to a $42 million increase in value for S&P firms.


Empowering Girls and Women

I’m proud to be part of an organization like Intel, which is committed to expanding educational opportunities for girls and women and inspires them to become creators of technology. Intel has many globally expansive programs and initiatives to support educational access for girls and women in STEM fields.


  • Girl Rising, a film and global social action campaign for girls’ education, has cumulatively reached over 200 million people, with more than 10,000 film screenings, five billion social media impressions, and 500 published articles. As part of the campaign, Intel employees have participated in more than 100 volunteer and screening events in over 30 countries.
  • Intel has worked with UNESCO to develop a new gender policy brief and toolkit to guide policymakers around the world toward gender equality in education and technology access.
  • Girls Who Code and the National Center for Women and Information Technology Aspire IT are two examples of programs and organizations Intel supports that help to increase women’s participation in computing and technology. Intel and the Intel Foundation also support other programs designed to inspire and engage girls in technology and engineering fields such as the Intel International Science and Engineering Fair and the Intel Science Talent Search, both of which attract a high level of participation by girls.
  • The Intel® She Will Connect program, benefiting five million women in Sub-Saharan Africa, combines digital literacy training, online peer networks and content to help young women acquire or improve digital literacy and connect to new opportunities for economic prosperity and personal growth.


For Girls and Women, the Time is Now

Despite statistics about declining participation in STEM education, today is a great time for women of all ages, in all countries, to explore opportunities in technology and engineering fields. A great deal of passion and energy is devoted to opening the door to women in technical fields, and if this is where their passions lie, girls and women can seize the opportunity to improve their quality of life and advance their families and communities.


With an abundance of programs and incentives in place, girls and women have the wind at their backs as they strive to become not just creators of technology, but of their own lives and futures.


Please follow @PrabhaGana for ongoing conversations about women in technology and #STEMinism.


Read more >

Change Your Desktops, Change Your Business. Part 4: Leverage the Newest Technology



Do you have a smartphone? These days, chances are pretty good that you do. So, that means using touch has probably become pretty normal for you: It’s natural, easy, and fast. Well, it’s probably no surprise that businesses are increasingly seeing the upside of bringing those same benefits to their business desktop PCs.


It’s really all about being able to work in the way that makes the most sense for you. With touchscreen displays, people can closely interact with web pages, images, videos, PDFs—all kinds of content. But then they can switch to the keyboard and mouse for typing and other tasks best suited to that interface. I think that makes a lot of sense.


The study we’ve been addressing during this series on desktops found that a touch-enabled display added about $186 to the starting price of an All-in-One PC.1 Is it worth it? Here’s a good way to look at it: If that touchscreen can lead to even one minute of additional productivity per day, which doesn’t seem like a stretch, it could pay for itself in under 20 months.2,3 For even more examples of how the power of touch has revolutionized desktop computing, check out the infographic here.
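The arithmetic behind that payback claim can be sketched as a quick back-of-the-envelope calculation. The valuation of one extra minute per day at $350 over a 36-month service life comes from the study’s footnotes; the script itself is purely illustrative:

```python
# Back-of-the-envelope payback estimate for the touch-enabled display.
# Assumption (from the study's footnotes): one extra minute of productivity
# per day is valued at $350 over a 36-month service life, i.e. ~$9.72/month.
touch_premium = 186.00        # added cost of the touch display (USD)
value_per_month = 350 / 36    # ~9.72 USD of productivity value per month

payback_months = touch_premium / value_per_month
print(f"Payback in about {payback_months:.1f} months")  # ~19.1 months
```

At roughly 19.1 months, the premium pays for itself well inside a typical three-year refresh cycle, which is the comparison this series has been making.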


But touch is just one of the many innovations available to businesses today. Many companies, for example, are replacing their work PCs so that they can take advantage of the latest USB technology. The difference comes down to speed. The USB 3.0 ports in the latest All-in-One PCs and Mini Desktops offer transfer rates up to 10X faster than the USB 2.0 ports in your aging legacy desktop towers.
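As a rough illustration of where that “up to 10X” figure comes from, compare the raw signaling rates in the USB specifications. Note these are theoretical maximums, not measured file-transfer throughput:

```python
# Theoretical signaling rates from the USB specs (not real-world throughput):
# USB 2.0 Hi-Speed runs at 480 Mbit/s; USB 3.0 SuperSpeed runs at 5 Gbit/s.
usb2_mbit_s = 480
usb3_mbit_s = 5000

speedup = usb3_mbit_s / usb2_mbit_s
print(f"Theoretical speedup: {speedup:.1f}x")  # ~10.4x
```

Real transfer rates depend on the device and workload, but that order-of-magnitude gap is why moving large files over USB 3.0 feels so much faster.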


Then there’s DisplayPort, which is available on All-in-One PCs and Mini Desktops. Tests confirmed that it can support higher-performance, lower-power monitor displays than legacy systems. Plus, DisplayPort or HDMI (also available on the new desktop PCs) can enable you to add a second display.


New wireless technology also comes with the new All-in-One PCs and Mini Desktops in the form of Dual Band Wireless-AC 7260 cards, which you won’t find in those older desktop towers. That means more flexibility for your employees, because you don’t have to worry about including an Ethernet port and cable for each desktop.


And lastly, all of that technological brilliance now comes in a significantly smaller package. To be more specific, the study found that today’s All-in-One PCs and Mini Desktops save you 59 and 60 percent, respectively, in workspace inches compared to legacy desktops.


The moral of the story, and the point of this desktop blog series, is that moving from your aging desktop fleet to newer All-in-One PCs and Mini Desktops can make a real difference for your business: from improved performance and lower energy costs to greater IT effectiveness and access to the newest technology. So you have to ask: How could changing my desktops change my business? Join the conversation using #IntelDesktop.


This is the fourth and final installment of the “Change Your Desktops, Change Your Business” series in the Desktop World Tech Innovation Series. To view the other posts in the series, click here: Desktop World Series.


1. The starting price ($1,598) plus three-year ProSupport Service brought the price to $1,677.57 for the non-touch-enabled display with Intel Core i5 processor and 8 GB memory for the All-in-One PC. The starting price (including three-year ProSupport Service) for the All-in-One PC with the same processor and memory and a touch-enabled display was $1,863.29 via on 12/19/2014


2. A minute a day, valued at $9.72 per month ($350/36), could provide payback for a $186 cost in 19.1 months.


3. Note: We tested the All-in-One PC installed with Windows 7 so that system configuration matched closely to the legacy desktop tower. The All-in-One PC and Mini Desktop are available with either Windows 7 or Windows 8.1.

Read more >

How Free Wi-Fi Can Transform the Patient Experience in the NHS

I’m often reminded that within the health IT sector we overlook some of the simpler opportunities to provide a better healthcare experience for both clinical staff and patients. A great example of this was the news that the NHS is investigating the feasibility of providing free Wi-Fi across its estate, which it estimates will ‘help reduce the administrative burden currently estimated to take up to 70 percent of a junior doctor’s day‘. I’ll cover the often-talked-about benefits to clinicians in a later blog, but here I want to focus on how access to free Wi-Fi could impact the patient in a myriad of positive ways.


Today many of us see access to the internet via Wi-Fi just like any other utility. It’s not something we think of too deeply, but we expect it to be there, all day, every day. Yet access to Wi-Fi in an NHS hospital either comes at a price or is not available at all. The vision put forward by Tim Kelsey, NHS England’s National Director for Patients and Information, could truly revolutionise the continuum of care experience and fundamentally change the relationship between patient/family and hospital. I’ve highlighted five of the main benefits below:


1. Enhances Education

Clinicians will say that a better-informed patient is more likely to buy in to their treatment plan. Traditionally, an inpatient receives updates on their condition verbally from a doctor ‘doing the rounds’ once or twice a day at the bedside. With the availability of free Wi-Fi in hospitals and the much-anticipated electronic patient access to all NHS-funded services by 2020, I anticipate a patient being able to simply log in to see real-time updates about their condition at any time of day via their electronic health record. And Wi-Fi may offer opportunities to provide access to online educational material approved by the NHS too. I would add a cautionary note here, though, around the differing levels of interpretation of medical data by clinicians and patients.


2. Connecting Families

A prolonged stay in hospital affects not just the patient but the wider family too. Free Wi-Fi changes what can sometimes be a lonely and isolated period for the patient by bringing the family ‘to the bedside’ outside of traditional visiting hours through technologies such as Skype or email. And those conversations may well include patient progress updates thus reducing the strain on nurses who, at times, provide updates over the telephone. Additionally, family will be able to spend more time visiting patients while still being able to work remotely using free Wi-Fi.


3. Future Wearables

As the Internet of Things in healthcare becomes more commonplace, we’re likely to see increasing examples of how wearable technology can be used to monitor patients not only in the home but in a clinical setting too. Tim Kelsey used the example of patients with diabetes, one-fifth of whom will have experienced an avoidable hypoglycaemic episode while in hospital. Sensor technology connected to Wi-Fi will help minimise these incidents and ensure patients do not experience additional (and avoidable) complications during their stay in hospital. Again, the upside to the healthcare provider is a reduction in the cost of providing care.


4. Happier Patients

Talk to patients (young or old) who have spent an extended time in hospital and they will more often than not tell you that at times they felt a drop in morale due to having their regular routine significantly disrupted. By offering free Wi-Fi, patients can use their own mobile devices to continue enjoying some of those everyday activities that go a long way to making all of us happy. That might include watching a favourite TV programme, reading a daily newspaper or simply playing an online game. Being connected brings a sense of normality to what is undoubtedly a period of worry and concern, resulting in happier patients.


5. Reducing Readmissions

When we look at the team of people providing care for patients it’s easy to forget just how important family and friends are, albeit in a less formal way than clinicians. When it comes to reducing readmission my mind is drawn to the patient setting immediately after discharge from hospital where it’s likely that family and close friends will be primary carers when the patient returns home. I’m seeing a scenario whereby the patient and caregiver in a hospital connect to family members, using Skype via Wi-Fi for example, to talk through recovery and medication to help ease and increase the effectiveness of that transition from hospital to home. I believe this could have a significant impact on readmission rates in a very positive way.


Meeting Security Needs

Wi-Fi networks in a hospital setting will, of course, bring concerns around security, especially when we talk of accessing sensitive healthcare data. This should not stop progress, though, as there are innovative security safeguards created by Intel Security Group that can mitigate the risks associated with data transiting across both public and private cloud-based networks. And I envisage healthcare workers and patients will access separate Wi-Fi networks, with enhanced levels of security offered to clinicians.


Vision to Reality

Currently there are more than 100 NHS hospitals providing Wi-Fi to patients, in some cases free and in others on a paid-for basis. What really needs to happen though to turn this vision of free Wi-Fi for all into a reality? There are obvious financial implications but I think there are great arguments for investment too, especially when you look at the clinical benefits and potential cost-savings. A robust and clear strategy for implementation and ongoing support will be vital to delivery and may well form part of the NHS feasibility study. I look forward to seeing the report and, hopefully, roll-out of free Wi-Fi across the NHS to provide an improved patient experience.


If you enjoyed this blog please drop your details here to receive our quarterly newsletter for more insight and comment on the latest Health IT issues.


Chris Gough is a lead solutions architect in the Intel Health & Life Sciences Group and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@CGoughPDX)

Check out his previous posts

Read more >

University Health Looks to Cut TCO for Its Epic and Caché Infrastructure by 40 Percent

By Steve Leibforth, Strategic Relationship Manager, Intel Corporation


How sustainable is your health IT environment? With all the demands you’re putting on your healthcare databases, is your infrastructure as reliable and affordable as it needs to be so you can stay ahead of the rising demand for services?


In Louisiana, IT leaders at one of the health systems we’ve been working with ran the numbers. Then, they migrated their InterSystems Caché database from their previous RISC platforms onto Dell servers based on the Intel® Xeon® processor E7. They tell us they couldn’t be happier—and they’re expecting the move to help them reduce TCO for their Epic EHR and Caché environment by more than 40 percent.


“Using Intel® and Dell hardware with Linux and VMware, you can provide a level of reliability that’s better than or equal to anything out there,” says Gregory Blanchard, executive director of IT at Shreveport-based University Health (UH) System. “You can do it more easily and at much lower cost. It’s going to make your life a lot easier. The benefits are so clear-cut, I would question how you could make the decision any differently.”




We recently completed a case study describing UH’s decision to migrate its Caché infrastructure. We talked with UH’s IT leaders about their previous pain points, the benefits they’re seeing from the move, and any advice they can share with their health IT peers. If your health system is focused on improving services while controlling costs, I think you’ll find it well worth a read. You’ll also learn about the Dell, Red Hat, Intel, and VMware for Epic (DRIVE) Center of Excellence—a great resource for UH and other organizations that want a smooth migration for their Epic and Caché deployments.  




UH is a great reminder that health IT innovation doesn’t just happen at the Cleveland Clinics and Kaiser Permanentes of the world. Louisiana has some of the saddest health statistics in the nation, and the leaders at UH know they need to think big if they’re going to change that picture. As a major medical resource for northwest Louisiana and the teaching hospital for the Louisiana State University Shreveport School of Medicine, UH is on the forefront of the state’s efforts to improve the health of its citizens. Its new infrastructure—with Intel Inside®—gives UH a scalable, affordable, and sustainable foundation. I’ll be excited to watch their progress.


Read the case study and tell me what you think.

Read a whitepaper about scaling Epic workloads with the Intel® Xeon® processor E7 v3.

Join and participate in the Intel Health and Life Sciences Community

Follow us on Twitter: @IntelHealth, @IntelITCenter

Read more >

Amplify Your Value: Draw Your Own Map!


The day was blistering hot! The air did not move. It was stifling hot. The crowd gathering in this Kansas field struggled to find shade, several people stood in the shadows of the tall grasses surrounding the field. Sweat poured off of me, even though I was standing still. I didn’t even want to fan myself because that would be too much exertion.

It was July 4th, 2004. We were standing in this field, nearing heat stroke, to commemorate the 200th anniversary of the Lewis and Clark Expedition passing through this area. (Yes, in addition to loving IT, I love history! I am SUCH a nerd!) Ok, I can hear you, “What does THIS have to do with amplifying your value, much less IT? I am pretty sure they didn’t have computers in 1804!” Bear with me for a few more paragraphs, dear IT explorer…

After standing through several speeches and re-enactments, we piled back into buses for the ride back to Atchison. Hey, at least on the bus, we could put the windows down and get a breeze…but we WERE packed in like sardines.

We poured out of the bus and headed straight to a local bar and grill for lunch, and a COLD ONE…or two…or three. And then, the lesson…there it was…posters everywhere…we had to try one…Boulevard Brewing Co., a sponsor of the Lewis & Clark event…the slogan…“To those who make maps, not follow them”. Think about that…“To Those Who Make Maps, Not Follow Them”. That was the definition of an explorer 200+ years ago: they would literally “step off the map”, going where no white man had ever gone before.

If you’ve been following our journey to Amplify Your Value, we have looked inward to see where we were; we looked forward to envision the future; we studied our business and identified the impacts it had on our organization; and, we decided we wanted to do value-add projects to the best of our abilities. Like Lewis and Clark, we were then stepping into the unknown. There was no map for where we were going. We were blazing trails.

It may seem odd to say in this series on Amplify Your Value, but what I am outlining here is NOT a roadmap for you to follow. What I am outlining here is what worked for us…it may not work for you. You have to follow the steps of introspection that we have discussed over the last several posts, but your roadmap will probably differ greatly from ours…that is OK and to be expected. Businesses are different, cultures are different, environments are different. The point here is, if you have followed the steps, you now know where you are and you now know where you are going.


Like Lewis and Clark 200-some years ago, we had a goal, we had an objective. To draw our map, we identified immediate steps we needed to take. Lewis and Clark needed specific skillsets, they needed discipline, they needed teamwork and collaboration. We needed process, we needed education, we needed different skills, we needed a deeper understanding of our mission. We identified many of the steps we needed to take on our journey. Like good IT professionals, we identified dependencies and precursors to our journey. We laid out a five-year plan.

Five years…that is eons in the IT world. Perhaps it was too much of a chunk to bite off from where we were. The first year or two were very specific. We had processes we wanted to implement, we had technologies we wanted to implement, and we had an ever-evolving business that we needed to support and…lead. Like Lewis and Clark before us, we did not bind ourselves to specifics in a future we could not foresee. Had they not been open and flexible, they never could have travelled to the Pacific Ocean and returned to “civilization”. We laid out specific steps in the near term and specific goals in the long term. Each year we review our plan and adjust our steps; we do not adjust our strategy or our vision.

I have to admit, some of the pieces of our journey fell into place…sometimes it is better to be lucky than good…we were able to invest in new Disaster Recovery technology as a “big bang” because our prior investments all hit their depreciation at the same time. We were able to migrate our production environment as a “big bang” for pretty much the same reason. However, there were many times on our journey that we had to adjust, to ad-lib, to step off the map we had drawn. As you learn more, as you experience more, you need to be flexible and adjust your tactics to meet your objectives.

Your journey will not be the same as ours. Your company is different, your culture is different, technology is different. You have to be willing to step into the unknown, you have to be willing to draw your own map. You have to be willing to keep your focus on the mission and the destination; and adjust your plan to reach that destination. Make Your Own Map!

Next time we will explore our first step into the unknown. For us, that was paying attention to the “Google Whisperer”, for you, it may be paying attention to a different muse.

The series “Amplify Your Value” explores our five-year plan to move from an ad hoc, reactionary IT department to a value-add, revenue-generating partner. #AmplifyYourValue

We could not have made this journey without the support of several partners, including, but not limited to: Bluelock, Level 3 (TWTelecom), Lifeline Data Centers, Netfor, and CDW. (mentions of partner companies should be considered my personal endorsement based on our experience and on our projects and should NOT be considered an endorsement by my company or its affiliates).

Jeffrey Ton is the SVP of Corporate Connectivity and Chief Information Officer for Goodwill Industries of Central Indiana, providing vision and leadership in the continued development and implementation of the enterprise-wide information technology and marketing portfolios, including applications, information & data management, infrastructure, security and telecommunications.

Find him on LinkedIn.

Follow him on Twitter (@jtongici)

Add him to your circles on Google+

Check out more of his posts on Intel’s IT Peer Network

Read more from Jeff on Rivers of Thought

Read more >