Recent Blog Posts

Intel’s Purchasing Power Drives EHR Interoperability

Brian DeVore, Director, Healthcare Strategy and Ecosystem, Intel Corporation; Prashant Shah, Health IT Architect, Intel Corporation; Alice Borrelli, Director, Global Healthcare Policy, Intel Corporation. As Meaningful Use Stage 3 comments are being filed this week, we’d like to highlight another … Read more >

The post Intel’s Purchasing Power Drives EHR Interoperability appeared first on Policy@Intel.

Read more >

Telehealth Proves It’s Good for What Ails Home Healthcare

Telehealth is often touted as a potential cure for much of what ails healthcare today. At Indiana’s Franciscan Visiting Nurse Service (FVNS), a division of Franciscan Alliance, the technology is proving that it really is all that. Since implementing a telehealth program in 2013, FVNS has seen noteworthy improvements in both readmission rates and efficiency.

I recently sat down with Fred Cantor, Manager of Telehealth and Patient Health Coaching at Franciscan, to talk about challenges and opportunities. A former paramedic, emergency room nurse and nursing supervisor, Fred transitioned to his current role in 2015. His interest in technology made involvement in the telehealth program a natural fit.

At any one time, Fred’s staff of three critical care-trained monitoring nurses, three installation technicians and one scheduler is providing care for approximately 1,000 patients. Many live in rural areas with no cell coverage – often up to 90 minutes away from FVNS headquarters in Indianapolis.

Patients who choose to participate in the telehealth program receive tablet computers that run Honeywell LifeStream Manager* remote patient monitoring software. In 30-40 minute training sessions, FVNS equipment installers teach patients to measure their own blood pressure, oxygen, weight and pulse rate. The data is automatically transmitted to LifeStream and, from there, flows seamlessly into Franciscan’s Allscripts™* electronic health record (EHR). Using individual diagnoses and data trends recorded during the first three days of program participation, staff set specific limits for each patient’s data. If transmitted data exceeds these pre-set limits, a monitoring nurse contacts the patient and performs a thorough assessment by phone. When further assistance is needed, the nurse may request a home visit by a field clinician or further orders from the patient’s doctor. These interventions can reduce the need for in-person visits requiring long-distance travel.
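The monitoring loop described above, setting per-patient limits from the first days of readings and flagging anything outside them, can be sketched in a few lines. The limit-setting policy, the 15 percent margin, and all names here are illustrative assumptions, not FVNS’s actual logic:

```python
from dataclasses import dataclass

@dataclass
class Limits:
    lo: float
    hi: float

def set_limits(baseline_readings, margin=0.15):
    """Derive per-patient limits from the first days of readings,
    padded by a margin (hypothetical policy for illustration)."""
    lo, hi = min(baseline_readings), max(baseline_readings)
    span = hi - lo
    return Limits(lo - margin * span, hi + margin * span)

def check_reading(value, limits):
    """Return True when a transmitted value falls outside the
    pre-set limits and a monitoring nurse should follow up."""
    return not (limits.lo <= value <= limits.hi)

# Example: systolic blood pressure baseline from the first three days
limits = set_limits([118, 124, 121])
assert check_reading(152, limits)      # out of range: flag for follow-up
assert not check_reading(122, limits)  # within range: no action
```

In a real deployment the thresholds would of course be set clinically, per diagnosis, rather than by a fixed margin.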

FVNS’ telehealth program also provides patient education via LifeStream. For example, a chronic heart failure (CHF) patient experiencing swelling in the lower extremities might receive content on diet changes that could be helpful.

Since the program was implemented, overall readmission rates have been well below national averages. In 2014, the CHF readmission rate was 4.4%, compared to a national average of 23%. The COPD rate was 5.47%, compared to a national average of 17.6%, and the CAD/CABG/AMI rate was 2.96%, compared to a national average of 18.3%.

Despite positive feedback, medical staff resistance remains the biggest hurdle to telehealth adoption.  Convincing providers and even some field staff that, with proper training, patients can collect reliable data has proven to be a challenge. The telehealth team is making a concerted effort to engage with patients and staff to encourage increased participation.

After evaluating what type of device would best meet the program’s needs, Franciscan decided on powerful, lightweight tablets. The touch screen devices with video capabilities are easily customizable and can facilitate continued program growth and improvement.

In the evolving FVNS telehealth program, Fred Cantor sees a significant growth opportunity. With knowledge gained from providing the service free to their own patients, FVNS could offer a private-pay package version of the program to hospital systems and accountable care organizations (ACOs).

Is telehealth a panacea? No. Should it be a central component of any plan to reduce readmission rates and improve workflow? Just ask the patients and healthcare professionals at Franciscan VNS.


Read more >

CitizenM Moves Towards the Hotel of the Future

I recently had an opportunity to discover how the citizenM* hotel in Amsterdam is pioneering a move towards the new digital hotel: using mobile devices to create better customer experiences, and consolidated analytics to better understand customers.


Its aim is to tackle a significant challenge in the hospitality industry: personalizing the service to the needs of each guest. The answer, it believes, is to replace all the disparate systems it previously had for storing guest data with a single guest profile. As a result, the hotel has the potential, in the future, to remember your preferences for room temperature and TV channel. It will even know how you like your coffee.


Employees will have a tailored dashboard that provides the information they need for their role, such as whether a room is empty (for the cleaners) or what the guest’s favorite tipple is (for the bar staff).

Consumerization in Hotel Technology


The hospitality industry has often used tablets for employees, but a novel twist at citizenM is that each room will have a tablet guests can use to control the TV, windows, radio and lights from a single device.


This is one more example of how hotels worldwide are exploring ways they can streamline the check-in process to improve the guest experience. Starwood is planning to use smartphones or tablets for digital room keys, and Nine Zero is offering iris-scanning for access to the penthouse suite, for example.

Using the Cloud to Synchronize Data


The data will be stored in a Microsoft Azure* cloud using servers based on the Intel® Xeon® processor, with a local server used for backup should the internet connection drop. The idea is to use the cloud to synchronize customer data between hotels, so a hotel in London could remember your preferences from Amsterdam, for example.

Sharing Data on the Service Bus with IreckonU


The solution is called IreckonU* and was developed by the Dutch software company Ireckon. IreckonU is built on a highly scalable base layer, which consists of a service bus and middleware containing business-specific logic. All the systems plug into the service bus, from the website booking system to the minibar, so they can all communicate effectively with each other. This architecture avoids the maintenance and support headaches usually associated with point-to-point integration, because each application only needs to be connected to the bus. At the same time, all the applications can access the full guest profile and update it, ensuring there is a complete picture of the guest. The solution includes several standard building blocks. They enable the hotel to:


  • Create dashboards for employees, from the CEO to the housekeeper.
  • Optimize reservation flows, room availability and status, housekeeping, payment and client interaction.
  • Provide a personalized guest experience in the hotel and with external services such as flight data.
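The service-bus pattern behind these building blocks can be sketched in miniature. The `ServiceBus` class, topic names, and messages below are hypothetical stand-ins for IreckonU’s actual middleware; the point is that each system connects to the bus once instead of integrating with every other system:

```python
from collections import defaultdict

class ServiceBus:
    """Toy publish/subscribe bus: systems publish updates to a topic,
    and every subscriber on that topic receives them."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

bus = ServiceBus()
guest_profile = {}

# The profile store and a staff dashboard both listen on the bus.
bus.subscribe("guest.preference", lambda m: guest_profile.update(m))
bus.subscribe("guest.preference", lambda m: print("dashboard update:", m))

# The in-room tablet publishes a preference; every subscriber sees it.
bus.publish("guest.preference", {"room_temp_c": 21, "drink": "flat white"})
```

Adding a new system, say the minibar, then means one new subscription rather than a web of point-to-point links.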

Future Possibilities


IreckonU also provides hotel and hospitality brands with a whole range of additional features out of the box, so expect to see some of these services starting to appear in hotels in the not-too-distant future:


  • If you’re feeling peckish, you’ll be able to order room service through your tablet.
  • If you prefer, you will be able to use your own tablet instead of the one provided to control the room. To ensure great performance, each room will have its own private WiFi network too.
  • You’ll be able to set your alarm according to your flight time, and the system will be able to let you sleep in later and reschedule your taxi if your flight is delayed. Now that’s what I call service!
  • You’ll even be able to use your phone to check in, and unlock your room door using an app. This will avoid any delays at the reception desk on arrival, and will spare guests the need to carry a separate key with them.
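The flight-delay scenario above boils down to simple timestamp arithmetic once the system knows the new departure time. This sketch uses made-up times, and the function name and one-sided delay rule are assumptions for illustration:

```python
from datetime import datetime, timedelta

def reschedule_alarm(alarm, flight_departure, new_departure):
    """If the flight slips, push the wake-up alarm (and, by the same
    logic, the taxi booking) back by the same amount; if the flight
    is on time or earlier, leave the alarm alone."""
    delay = new_departure - flight_departure
    return alarm + delay if delay > timedelta(0) else alarm

alarm = datetime(2015, 6, 1, 5, 30)
scheduled = datetime(2015, 6, 1, 9, 0)
delayed = datetime(2015, 6, 1, 11, 0)
new_alarm = reschedule_alarm(alarm, scheduled, delayed)
# a two-hour delay moves the 05:30 alarm to 07:30
```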


Watch the video to see the whole thing through the eyes of a guest. It provides real insight into how hotel brands like citizenM and software company Ireckon are approaching the challenges in today’s hospitality industry. I’d love to hear your thoughts on the hotel of the future in the comments below.


*Other names and brands may be claimed as the property of others.


Jane Williams

Find me on LinkedIn
Keep up with me on Twitter

Check out my previous posts

Read more >

INOX Communication Boosts the Shopping Experience With Robust Intel IoT Retail Solutions

The rapid evolution of retail, from digital signage to virtual shopping, wouldn’t be possible without collaboration, optimism, and futuristic IoT envisioning. In this guest blog post, INOX Communication Founder and CEO Lats Kladny reveals how envisioning the future of the … Read more >

The post INOX Communication Boosts the Shopping Experience With Robust Intel IoT Retail Solutions appeared first on IoT@Intel.

Read more >

Top Tweets – 3D Avatars, Fast JavaScript and Twitter for Developers

Follow Gael on Twitter: @GaelHof. Subscribe to the IntelDeveloperZone Subreddit. My Google+ profile. Check out my Slideshare profile. I was just going through my Twitter analytics and thought I would share what topics my followers are the most interested in. While I … Read more >

The post Top Tweets – 3D Avatars, Fast JavaScript and Twitter for Developers appeared first on Intel Software and Services.

Read more >

Where Are You on the Road to Software-Defined Infrastructure?

In enterprise IT and service provider environments these days, you’re likely to hear lots of discussion about software-defined infrastructure. In one way or another, everybody now seems to understand that IT is moving into the era of SDI.


There are good reasons for this transformation, of course. SDI architectures enable new levels of IT agility and efficiency. When everything is managed and orchestrated in software, IT resources, including compute, storage, and networking, can be provisioned on demand and automated to meet service-level agreements and the demands of a dynamic business environment.


For most organizations, the question isn’t, “Should we move to SDI?” It’s, “How do we get there?” In a previous post, I explored this topic in terms of a high road that uses prepackaged SDI solutions, a low road that relies on build-it-yourself strategies, and a middle road that blends the two approaches together.


In this post, I will offer up a maturity-model framework for evaluating where you are in your journey to SDI. This maturity model has five stages in the progression from traditional hard-wired architecture to software-defined infrastructure. Let’s walk through these stages.




Standardized

At this stage of maturity, the IT organization has standardized and consolidated servers, storage systems, and networking devices. Standardization is an essential building block for all that follows. Most organizations are already here.




Virtualized

By now, most organizations have leveraged virtualization in their server environments. While enabling a high level of consolidation and greater utilization of physical resources, server virtualization accelerates service deployment and facilitates workload optimization. The next step is to virtualize storage and networking resources to achieve similar gains.




Automated

At this stage, IT resources are pooled and provisioned in an automated manner. In a step toward a cloud-like model, automation tools enable the creation of self-service provisioning portals—for example, to allow a development and test team to provision its own infrastructure and to move closer to a frictionless IT organization.




Orchestrated

At this higher stage of IT maturity, an orchestration engine optimizes the allocation of data center resources. It collects hardware platform telemetry data and uses that information to place applications on the servers best suited to accelerate their workloads, in approved locations, for optimal performance and the assigned levels of trust. The orchestration engine acts as an IT watchdog that spots performance issues and takes remedial action, then learns from these events to continue to meet or exceed the customer’s needs.
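A telemetry-driven placement decision of this kind might look like the following in miniature. The server fields, trust levels, and least-loaded selection policy are assumptions for illustration, not how any real orchestration engine works:

```python
def place_workload(servers, required_location, required_trust):
    """Pick a server for a workload from platform telemetry:
    filter by approved location and trust level, then prefer
    the least-loaded eligible machine."""
    eligible = [
        s for s in servers
        if s["location"] == required_location and s["trust"] >= required_trust
    ]
    if not eligible:
        raise RuntimeError("no eligible server for this workload")
    return min(eligible, key=lambda s: s["cpu_util"])

servers = [
    {"name": "s1", "location": "eu-west", "trust": 2, "cpu_util": 0.72},
    {"name": "s2", "location": "eu-west", "trust": 3, "cpu_util": 0.35},
    {"name": "s3", "location": "us-east", "trust": 3, "cpu_util": 0.10},
]
best = place_workload(servers, required_location="eu-west", required_trust=3)
# s3 is least loaded but in the wrong region; s1 lacks the trust level,
# so the workload lands on s2.
```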


SLA Managed


At this ultimate stage—the stage of the real-time enterprise—an organization uses IT service management software to maintain targeted service levels for each application in a holistic manner. Resources are automatically assigned to applications to maintain SLA compliance without manual intervention. The SDI environment makes sure the application gets the infrastructure it needs for optimal performance and compliance with the policies that govern it.


In subsequent posts, I will take a closer look at the Automated, Orchestrated, and SLA Managed stages. For now, the key is to understand where your organization falls in the SDI maturity model and what challenges need to be solved in order to take this journey. This understanding lays the groundwork for the development of strategies that move your data center closer to SDI—and the data center of the future.

Read more >

Blurred Boundaries: Hidden Data Center Savings

Every disruptive technology in the data center forces IT teams to rethink the related practices and approaches. Virtualization, for example, led to new resource provisioning practices and service delivery models.


Cloud technologies and services are driving similar change. Data center managers have many choices for service delivery, and workloads can be more easily shifted between the available compute resources distributed across both private and public data centers.


Among the benefits stemming from this agility are new approaches for lowering data center energy costs, which have many organizations considering cloud alternatives.


Shifting Workloads to Lower Energy Costs


Every data center service and resource has an associated power and cooling cost. Energy, therefore, should be a factor in capacity planning and service deployment decisions. But many companies do not leverage all of the energy-related data available to them, and without this knowledge, it’s challenging to make sense of the information being generated by servers, power distribution units, airflow and cooling units, and other smart equipment.


That’s why holistic energy management is essential to optimizing power usage across the data center. IT and facilities teams can rely on user-friendly consoles, such as graphical thermal and power maps of the data center, to gain a complete picture of the patterns that correlate workloads and activity levels to power consumption and dissipated heat. Specific services and workloads can also be profiled, and logged data helps build a historical database for establishing and analyzing temperature patterns. Having one cohesive view of energy consumption also reduces the need to rely on less accurate theoretical models, manufacturer specifications, or manual measurements that are time consuming and quickly out of date.


A Case for Cloud Computing


This makes the case for cloud computing as a means to manage energy costs. Knowing how workload shifting will decrease the energy requirements for one site and increase them for another makes it possible to factor in the different utility rates and implement the most energy-efficient scheduling. Within a private cloud, workloads can be mapped to available resources at the location with the lowest energy rates at the time of the service request. Public cloud services can be considered, with the cost comparison taking into account the change to the in-house energy costs.
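The scheduling idea, mapping a workload to the site with the lowest energy cost at the time of the request, reduces to a simple comparison once per-site rates are known. The site names, rates, and workload figure below are invented for the example:

```python
def cheapest_site(sites, workload_kwh):
    """Choose the data center site with the lowest energy cost for a
    given workload. Rates are in $/kWh; a real scheduler would also
    weigh latency, capacity, and data-transfer costs."""
    return min(sites, key=lambda s: s["rate"] * workload_kwh)

sites = [
    {"name": "amsterdam", "rate": 0.19},
    {"name": "oregon",    "rate": 0.08},
    {"name": "frankfurt", "rate": 0.22},
]
site = cheapest_site(sites, workload_kwh=500)
# the 500 kWh workload is scheduled to the lowest-rate site, oregon
```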


From a technology standpoint, any company can achieve this level of visibility and use it to take advantage of the cheapest energy rates across its various data center sites. Almost every data center is tied to at least one other site for disaster recovery, and distributed data centers are common for a variety of reasons. Add to this scenario all of the domestic and offshore regions where Infrastructure-as-a-Service is booming, and businesses have the opportunity to tap into global compute resources that leverage lower-cost power, and to operate in areas where infrastructure providers can pass through cost savings from government subsidies.


Other Benefits of Fine-Grained Visibility


For the workloads that remain in the company’s data centers, increased visibility also arms data center managers with knowledge that can drive down the associated energy costs. Energy management solutions, especially those that include at-a-glance dashboards, make it easy to identify idle servers. Since these servers still draw approximately 60 percent of their maximum power requirements, identifying them can help adjust server provisioning and workload balancing to drive up utilization.
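Identifying idle servers from a dashboard’s data amounts to a simple filter. The roughly 60 percent idle-draw figure comes from the text above, while the utilization threshold, field names, and fleet are assumptions for the sketch:

```python
def idle_candidates(servers, util_threshold=0.05):
    """Flag servers whose CPU utilization is near zero: even idle,
    each still draws roughly 60 percent of its maximum power, so
    these are candidates for consolidation or power-down."""
    flagged = []
    for s in servers:
        if s["cpu_util"] < util_threshold:
            wasted_watts = 0.6 * s["max_power_w"]
            flagged.append((s["name"], wasted_watts))
    return flagged

fleet = [
    {"name": "db-01",    "cpu_util": 0.62, "max_power_w": 450},
    {"name": "web-07",   "cpu_util": 0.01, "max_power_w": 400},
    {"name": "batch-03", "cpu_util": 0.02, "max_power_w": 500},
]
candidates = idle_candidates(fleet)
# web-07 and batch-03 are flagged, wasting roughly 240 W and 300 W
```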


Hot spots can also be identified. Knowing which servers or racks are consistently running hot can allow adjustments to the airflow handlers, cooling systems, or workloads to bring the temperature down before any equipment is damaged or services disrupted.


Visibility of the thermal patterns can be put to use for adjusting the ambient temperature in a data center. Every degree that temperature is raised equates to a significant reduction in cooling costs. Therefore, many data centers operate at higher ambient temperatures today, especially since modern data center equipment providers warrant equipment for operation at the higher temperatures.


Some of the same energy management solutions that boost visibility also provide a range of control features. Thresholds can be set to trigger notification and corrective actions in the event of power spikes, and can even help identify the systems that will be at greatest risk in the event of a spike. Those servers operating near their power and temperature limits can be proactively adjusted, and configured with built-in protection such as power capping.


Power capping can also provide a foundation for priority-based energy allocations. The capability protects mission-critical services, and can also extend battery life during outages. Based on knowledge extracted from historical power data, capping can be implemented in tandem with dynamic adjustments to server performance. Lowering clock speeds can be an effective way to lower energy consumption, and can yield measurable energy savings while minimizing or eliminating any discernable degradation of service levels.
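One step of such a power-capping control loop, combining a cap threshold with the clock-speed adjustment described above, might look like this. The field names, P-state convention (higher means slower), and 10 percent hysteresis band are illustrative assumptions:

```python
def enforce_power_cap(server, cap_w):
    """Illustrative capping step: if measured draw exceeds the cap,
    step the clock down one level; if draw is comfortably under the
    cap, step performance back up. Returns the new P-state."""
    if server["power_w"] > cap_w and server["p_state"] < server["max_p_state"]:
        server["p_state"] += 1          # lower clock speed
    elif server["power_w"] < 0.9 * cap_w and server["p_state"] > 0:
        server["p_state"] -= 1          # restore performance
    return server["p_state"]

srv = {"power_w": 430, "p_state": 0, "max_p_state": 3}
enforce_power_cap(srv, cap_w=400)
# 430 W exceeds the 400 W cap, so the server drops to P-state 1
```

The hysteresis band keeps the loop from oscillating when the draw hovers near the cap.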


Documented use cases for real-time feedback and control features such as thresholds and power capping prove that fine-grained energy management can yield significant cost reductions. Typical savings of 15 to 20 percent of the utility budget have been measured in numerous data centers that have introduced energy and temperature monitoring and control.


Understand and Utilize Energy Profiles


As the next step in the journey that began with virtualization, cloud computing is delivering on its promises: more data center agility, centralized management that lowers operating expenses, and cost-effective support for fast-changing businesses.


With an intelligent energy management platform, the cloud also positions data center managers to more cost-effectively assign workloads to leverage lower utility rates in various locations. As energy prices remain at historically high levels, with no relief in sight, this provides a very compelling incentive for building out internal clouds or starting to move some services out to public clouds.


Every increase in data center agility, whether from earlier advances such as virtualization or the latest cloud innovations, emphasizes the need to understand and utilize energy profiles within the data center. Ignoring the energy component of the overall cost can hide a significant operating expense from the decision-making process.

Read more >

Increasing Scalability and Cost-Effectiveness for InterSystems Caché with the Intel® Xeon® Processor E7 v3 Family

Healthcare systems are coping with an unprecedented level of change. They’re managing a new regulatory environment, a more complex healthcare ecosystem, and an ever-increasing demand for services—all while facing intense cost pressures.


These trends are having a dramatic impact on EMR systems and healthcare databases, which have to maintain responsiveness even as they handle more concurrent users, more data, more diverse workflows, and a wider range of application functionality.


As Intel prepared to introduce the Intel® Xeon® processor E7 v3 family, we worked with engineers from Epic and InterSystems to ensure system configurations that would provide robust, reliable performance. InterSystems and VMware were also launching their next-generation solutions, so the test team ran a series of performance tests pairing the Intel Xeon processor E7-8890 v3 with InterSystems Caché 2015.1 and a beta version of VMware vSphere ESXi 6.0.


The results were impressive. “We saw the scalability of a single operational database server increase by 60 percent,” said Epic senior performance engineer Seth Hain. “With these gains, we expect our customers to scale further with a smaller data center footprint and lower total cost of ownership.” Those results were also more than triple the end-user database accesses per second (global references, or GREFs) achieved using the Intel® Xeon® processor E7-4860 with Caché® 2011.1.




These results show that your healthcare organization can use the Intel Xeon processor E7 v3 family to implement larger-scale deployments with confidence on a single, scale-up platform.


In addition, if you exceed the vertical scalability of a single server, you can use InterSystems Caché’s Enterprise Cache Protocol (ECP) to scale horizontally. Here again, recent benchmarks show great scalability. A paper published earlier this year reported more than a threefold increase in GREFs for horizontal scalability compared to previous-generation technologies.


This combination of outstanding horizontal and vertical scalability—in the cost-effective environment of the Intel® platform—is exactly what’s needed to meet rising demands and create a more agile, adaptable, and affordable healthcare enterprise.


What will these scalability advances mean for your healthcare IT decision makers and data center planners? How will they empower your organization to deliver outstanding patient care and enhance efficiency? I hope you’ll read the whitepapers and share your thoughts. And please keep in mind: Epic uses many factors, along with benchmarking results, to provide practical sizing guidelines, so talk to your Epic system representative as you develop your scalability roadmap.


Read the whitepaper about vertical scalability with the Intel Xeon processor E7 v3.


Read the whitepaper about horizontal scalability with Intel Xeon processors.


Join and participate in the Intel Health and Life Sciences Community


Follow us on Twitter: @IntelHealth, @IntelITCenter, @InterSystems, @vmwareHIT


Steve Leibforth is a Strategic Relationship Manager at Intel Corporation

Read more >

Network World Article Highlights Advances in iWARP Specification

The industry continues to advance the iWARP specification for RDMA over Ethernet, first ratified by the Internet Engineering Task Force (IETF) in 2007.


This article in Network World, “iWARP Update Advances RDMA over Ethernet for Data Center and Cloud Networks,” which I co-authored with Wael Noureddine of Chelsio Communications, describes two new extensions added to help developers of RDMA software by aligning iWARP more closely with the RDMA technologies based on the InfiniBand network and transport, i.e., InfiniBand itself and RoCE. By bringing these technologies into alignment, we move closer to the Open Fabrics Alliance’s goal that application developers need not concern themselves with the underlying network technology: RDMA will “just work” on all of them.


Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.


*Other names and brands may be claimed as the property of others.

Read more >