Ever feel more like a construction worker hauling bricks in your data centre, when you’d like to be a visionary architect? You’re probably not the only one. Let’s consider a typical data centre of today. It’s likely we’ll find compute, storage and networking resources, each sitting in its own silo, merrily doing its own thing. This is a hardware-defined model, where many appliances have a fixed function that can’t be changed. Despite virtualisation helping to make managing compute servers more efficient and improving flexibility, management of other resources, and of the data centre overall, is generally manual and slow. Like building a house, it requires time, cost and heavy lifting, and results in a fairly static edifice. If you want to add an extension at a later date, you’ll need to haul more bricks.
At Intel, we envision an architectural transformation for the data centre that will change all this. This is the move to software-defined infrastructure, where the private cloud is as elastic, scalable, automated and therefore efficient as your public cloud experience, which I described in my last blog post.
I’d say today’s data centre is at an inflection point, where compute servers are already making inroads into SDI (a virtual machine is essentially a software-defined server, after all). Now we need to apply the same principle to storage and networking as well. When all of these resource pools are virtualised, we can manage them in the same automated and dynamic way that we do servers, creating resources that fit our business needs rather than letting the infrastructure define how we work. It’s as if you could move the walls around within your house, add new windows or remove a bathroom, whenever you liked, with great agility and without any additional costs, time or labour.
A Data Center for the Application Economy
So how does SDI work in practice? Let’s look at it from the top down, starting with the applications. Whether they’re your email program, customer-facing sales website, CRM or ERP system, applications are what drive your business. Indeed, the application economy is apparently now ‘bigger than Hollywood’, with the iOS App Store alone billing $10 billion on apps in 2014. They’re your most important point of contact with your customers, and possibly employees and partners too, and they have strict SLAs. These may include response times, security levels, availability, the location in which their data is held, elasticity to meet peaks and troughs in demand, or even the amount of power they use. Meeting these SLAs means allocating the right resources in the data centre, across compute, storage and networking.
This allocation is handled by the next layer down – the Orchestration layer. It’s here that you can automate management of your data centre resources and allocate them dynamically depending on application needs. These resource pools, which are the foundation layer of the data centre, can be used for compute, networking or storage as required, allocated on-demand in an automated manner. Fixed-function appliances are now implemented as virtual resource pools, meaning you can make them into whatever architectural feature you like.
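To make the orchestration idea concrete, here is a minimal sketch of how an orchestration layer might match an application’s SLA requirements against shared resource pools. All names, pool sizes and SLA figures are invented for illustration; this is not any particular orchestrator’s API.

```python
# Illustrative sketch: an orchestrator matching application SLAs to
# virtualised resource pools. All names and numbers are hypothetical.

POOLS = {
    "compute": 64,   # vCPUs available
    "storage": 500,  # GB available
    "network": 10,   # Gbps available
}

def allocate(app_name, sla):
    """Reserve resources from the shared pools if the SLA can be met."""
    if all(POOLS[k] >= v for k, v in sla.items()):
        for k, v in sla.items():
            POOLS[k] -= v
        return f"{app_name}: allocated {sla}"
    return f"{app_name}: insufficient capacity, request queued"

print(allocate("crm", {"compute": 8, "storage": 100, "network": 1}))
print(allocate("sales-web", {"compute": 60, "storage": 50, "network": 2}))
```

The point of the sketch is the shape of the decision, not the numbers: because the pools are software-defined, the same pool can back compute, storage or networking requests on demand rather than being locked to a fixed-function appliance.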
Oh, the possibilities! While this big change in data centre operations may be daunting, the benefits that SDI can bring in terms of driving time, cost and labour out of your business make it worth the effort. Orchestration optimises infrastructure, reducing IT admin costs and enabling your valuable team to focus more on strategic projects, while software-defined storage and networking cut your infrastructure hardware costs. Intel estimates that this could result in a relative cost saving of up to 66 percent per virtual machine instance for a data centre running full orchestration with SDI, versus one just starting virtualisation. With IDC predicting data centre operational costs to more than double every eight years, procrastination will only result in more cost in the long run.
As with any ambitious building project though, it’s important to plan carefully. I’ll be continuing this blog series by examining the key architectural aspects of the software-defined data centre, and explaining how Intel is addressing each of them to equip clients with the best tools. These areas are:
- Transforming the network
- Determining and building infrastructure attributes and composable architectures
- Unleashing the potential of your SDI data centre
Check back soon for the next instalment and do let me know your thoughts in the meantime. What sort of data centre renovations would you make given the freedom of time, cost and grunt work?
My first blog on data centers can be found here: Is Your Data Center Ready for the IoT Age?
1 Source: Intel Finance, 2014. Individual IT mileage may vary depending on workload, IT maturity and other factors. SDI assumes future benefits. Projections are extrapolations from Intel IT data. Private cloud model based on actual datacenter operations. IT DC based on Intel finance estimation for typical enterprise costs. Hybrid cloud model based on forward looking future benefits and market cost trends. Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance or cost.
Telehealth is often touted as a potential cure for much of what ails healthcare today. At Indiana’s Franciscan Visiting Nurse Service (FVNS), a division of Franciscan Alliance, the technology is proving that it really is all that. Since implementing a telehealth program in 2013, FVNS has seen noteworthy improvements in both readmission rates and efficiency.
I recently sat down with Fred Cantor, Manager of Telehealth and Patient Health Coaching at Franciscan, to talk about challenges and opportunities. A former paramedic, emergency room nurse and nursing supervisor, Fred transitioned to his current role in 2015. His interest in technology made involvement in the telehealth program a natural fit.
At any one time, Fred’s staff of three critical care-trained monitoring nurses, three installation technicians and one scheduler is providing care for approximately 1,000 patients. Many live in rural areas with no cell coverage – often up to 90 minutes away from FVNS headquarters in Indianapolis.
Patients who choose to participate in the telehealth program receive tablet computers that run Honeywell LifeStream Manager* remote patient monitoring software. In 30-40 minute training sessions, FVNS equipment installers teach patients to measure their own blood pressure, oxygen, weight and pulse rate. The data is automatically transmitted to LifeStream and, from there, flows seamlessly into Franciscan’s Allscripts™* electronic health record (EHR). Using individual diagnoses and data trends recorded during the first three days of program participation, staff set specific limits for each patient’s data. If transmitted data exceeds these pre-set limits, a monitoring nurse contacts the patient and performs a thorough assessment by phone. When further assistance is needed, the nurse may request a home visit by a field clinician or further orders from the patient’s doctor. These interventions can reduce the need for in-person visits requiring long-distance travel.
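The limit-checking workflow described above can be sketched as a simple per-patient rule check. The field names, limit values and patient identifier below are invented for illustration; they are not LifeStream’s actual data model.

```python
# Hypothetical sketch of the per-patient limit check described above:
# readings outside pre-set limits trigger a nurse follow-up.

LIMITS = {
    "patient_042": {
        "systolic_bp": (90, 150),   # mmHg
        "spo2": (92, 100),          # percent oxygen saturation
        "weight": (150, 160),       # lbs; sudden gain can signal CHF trouble
    },
}

def check_transmission(patient_id, readings):
    """Return the readings that fall outside the patient's pre-set limits."""
    alerts = []
    for metric, value in readings.items():
        low, high = LIMITS[patient_id][metric]
        if not (low <= value <= high):
            alerts.append((metric, value))
    return alerts

alerts = check_transmission("patient_042",
                            {"systolic_bp": 162, "spo2": 95, "weight": 154})
if alerts:
    print(f"Notify monitoring nurse: {alerts}")
```

In the real program the limits are set per patient from diagnoses and the first three days of data, which is what makes the alerting clinically meaningful rather than one-size-fits-all.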
FVNS’ telehealth program also provides patient education via LifeStream. For example, a chronic heart failure (CHF) patient experiencing swelling in the lower extremities might receive content on diet changes that could be helpful.
Since the program was implemented, overall readmission rates have been well below national averages. In 2014, the CHF readmission rate was 4.4%, compared to a national average of 23%. The COPD rate was 5.47%, compared to a national average of 17.6%, and the CAD/CABG/AMI rate was 2.96%, compared to a national average of 18.3%.
Despite positive feedback, medical staff resistance remains the biggest hurdle to telehealth adoption. Convincing providers and even some field staff that, with proper training, patients can collect reliable data has proven to be a challenge. The telehealth team is making a concerted effort to engage with patients and staff to encourage increased participation.
After evaluating what type of device would best meet the program’s needs, Franciscan decided on powerful, lightweight tablets. The touch screen devices with video capabilities are easily customizable and can facilitate continued program growth and improvement.
In the evolving FVNS telehealth program, Fred Cantor sees a significant growth opportunity. With knowledge gained from providing the service free to their own patients, FVNS could offer a private-pay package version of the program to hospital systems and accountable care organizations (ACOs).
Is telehealth a panacea? No. Should it be a central component of any plan to reduce readmission rates and improve workflow? Just ask the patients and healthcare professionals at Franciscan VNS.
- Join the debate: Intel Health and Life Sciences Community
- Telehealth: Set to Increase Tenfold and Help Nurses Provide Even Better Care
- Intel & Nursing: Read more from Joan Hankin RN, NP
I recently had an opportunity to discover how the citizenM* hotel in Amsterdam is pioneering a move towards the new digital hotel, using mobile devices to create better customer experiences and consolidated analytics to better understand customers.
Its aim is to tackle a significant challenge in the hospitality industry: personalizing the service to the needs of each guest. The answer, it believes, is to replace the disparate systems it previously used for storing guest data with a single guest profile. As a result, the hotel has the potential, in the future, to remember your preferences for room temperature and TV channel. They’ll even know how you like your coffee.
Employees will have a tailored dashboard that provides the information they need for their role, such as whether a room is empty (for the cleaners) or what the guest’s favorite tipple is (for the bar staff).
Consumerization in Hotel Technology
The hospitality industry has often used tablets for employees, but a novel twist at citizenM is that each room will have a tablet guests can use to control the TV, windows, radio and lights from a single device.
This is one more example of how hotels worldwide are exploring ways they can streamline the check-in process to improve the guest experience. Starwood is planning to use smartphones or tablets for digital room keys, and Nine Zero is offering iris-scanning for access to the penthouse suite, for example.
Using the Cloud to Synchronize Data
The data will be stored in a Microsoft Azure* cloud using servers based on the Intel® Xeon® processor, with a local server used for backup should the internet connection drop. The idea is to use the cloud to synchronize customer data between hotels, so a hotel in London could remember your preferences from Amsterdam, for example.
Sharing Data on the Service Bus with IreckonU
The solution is called IreckonU* and was developed by Dutch software company Ireckon. IreckonU* is built on a highly scalable base layer, which consists of a service bus and middleware containing business-specific logic. All the systems plug into the service bus, from the website booking systems to the minibar, so they can all communicate effectively with each other. Using this architecture, there are none of the maintenance and support headaches usually associated with point-to-point integration because each application just needs to be connected to the bus. At the same time, all the applications will be able to access the full guest profile, and update it to ensure there is a complete picture of the guest. The solution includes several standard building blocks. They enable the hotel to:
- Create dashboards for employees, from the CEO to the housekeeper.
- Optimize reservation flows, room availability and status, housekeeping, payment and client interaction.
- Provide a personalized guest experience in the hotel and with external services such as flight data.
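The hub-and-spoke integration described above, where every system registers once with the bus instead of integrating point-to-point with every other system, can be sketched as a small publish/subscribe bus. All class and topic names here are illustrative, not IreckonU’s actual API.

```python
# Minimal publish/subscribe service bus, illustrating why bus-based
# integration avoids point-to-point wiring: each system subscribes to
# the topics it cares about, and publishers need not know who listens.

from collections import defaultdict

class ServiceBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)

bus = ServiceBus()
guest_profile = {}

# Each hotel system plugs into the bus independently.
bus.subscribe("guest.update", lambda m: guest_profile.update(m))
bus.subscribe("guest.update", lambda m: print(f"dashboard refresh: {m}"))

bus.publish("guest.update", {"guest": "J. Doe", "coffee": "flat white"})
```

Adding an eleventh system to a ten-system estate means one new bus connection rather than ten new integrations, which is the maintenance saving the article describes.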
IreckonU also provides hotel and hospitality brands with a range of additional features out of the box, so expect to see some of these services starting to appear in hotels in the not-too-distant future:
- If you’re feeling peckish, you’ll be able to order room service through your tablet.
- If you prefer, you will be able to use your own tablet instead of the one provided to control the room. To ensure great performance, each room will have its own private WiFi network too.
- You’ll be able to set your alarm according to your flight time, and the system will be able to let you sleep in later and reschedule your taxi if your flight is delayed. Now that’s what I call service!
- You’ll even be able to use your phone to check in, and unlock your room door using an app. This will avoid any delays at the reception desk on arrival, and will spare guests the need to carry a separate key with them.
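The flight-aware wake-up rule in the list above amounts to shifting the alarm (and the taxi booking) by the flight’s delay. The lead time and the delay figure below are invented; a real system would query an airline status API.

```python
# Sketch of the flight-aware wake-up rule: if the flight slips, the
# alarm and taxi slip with it. Times are hypothetical.

from datetime import datetime, timedelta

def adjusted_wakeup(flight_departure, delay_minutes,
                    lead_time=timedelta(hours=3)):
    """Shift the wake-up alarm to track the flight's actual departure."""
    actual_departure = flight_departure + timedelta(minutes=delay_minutes)
    return actual_departure - lead_time

scheduled = datetime(2025, 6, 1, 9, 0)               # 09:00 departure
print(adjusted_wakeup(scheduled, delay_minutes=90))  # sleep in 90 minutes longer
```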
Watch the video to see the whole thing through the eyes of a guest. It provides a real insight into how hotel brands like citizenM and software company ireckon are approaching the challenges in today’s hospitality industry. I’d love to hear your thoughts on the hotel of the future in the comments below.
*Other names and brands may be claimed as the property of others.
Check out my previous posts
In enterprise IT and service provider environments these days, you’re likely to hear lots of discussion about software-defined infrastructure. In one way or another, everybody now seems to understand that IT is moving into the era of SDI.
There are good reasons for this transformation, of course. SDI architectures enable new levels of IT agility and efficiency. When everything is managed and orchestrated in software, IT resources – including compute, storage, and networking – can be provisioned on demand and automated to meet service-level agreements and the demands of a dynamic business.
For most organizations, the question isn’t, “Should we move to SDI?” It’s, “How do we get there?” In a previous post, I explored this topic in terms of a high road that uses prepackaged SDI solutions, a low road that relies on build-it-yourself strategies, and a middle road that blends the two approaches together.
In this post, I will offer up a maturity-model framework for evaluating where you are in your journey to SDI. This maturity model has five stages in the progression from traditional hard-wired architecture to software-defined infrastructure. Let’s walk through these stages.
Stage 1: Standardized

At this stage of maturity, the IT organization has standardized and consolidated servers, storage systems, and networking devices. Standardization is an essential building block for all that follows. Most organizations are already here.
Stage 2: Virtualized

By now, most organizations have leveraged virtualization in their server environments. While enabling a high level of consolidation and greater utilization of physical resources, server virtualization accelerates service deployment and facilitates workload optimization. The next step is to virtualize storage and networking resources to achieve similar gains.
Stage 3: Automated

At this stage, IT resources are pooled and provisioned in an automated manner. In a step toward a cloud-like model, automation tools enable the creation of self-service provisioning portals—for example, to allow a development and test team to provision its own infrastructure and to move closer to a frictionless IT organization.
Stage 4: Orchestrated

At this higher stage of IT maturity, an orchestration engine optimizes the allocation of data center resources. It collects hardware platform telemetry and uses that information to place applications on the best servers: those with features that accelerate the workload, located in approved locations, offering optimal performance and the assigned levels of trust. The orchestration engine acts as an IT watchdog that spots performance issues and takes remedial action, then learns from those events to continue to meet or exceed the customer’s needs.
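The placement decision at the orchestrated stage can be sketched as constraint filtering plus telemetry-based scoring. The server list, metrics and weights below are invented for illustration; a real engine would consume live platform telemetry.

```python
# Illustrative placement scoring: filter servers by hard constraints
# (location, trust), then rank the survivors by telemetry.

servers = [
    {"name": "s1", "cpu_free": 0.70, "temp_c": 55, "location": "eu-west", "trusted": True},
    {"name": "s2", "cpu_free": 0.90, "temp_c": 85, "location": "eu-west", "trusted": True},
    {"name": "s3", "cpu_free": 0.95, "temp_c": 50, "location": "us-east", "trusted": False},
]

def place(workload, servers):
    """Pick the best server that satisfies location and trust constraints."""
    candidates = [s for s in servers
                  if s["location"] == workload["location"] and s["trusted"]]
    # Favour free CPU, penalise servers running hot (simple weighted score).
    return max(candidates,
               key=lambda s: s["cpu_free"] - 0.01 * max(0, s["temp_c"] - 60))

best = place({"location": "eu-west"}, servers)
print(best["name"])
```

Note how the hottest eligible server loses out despite having the most free CPU: that trade-off between raw capacity and thermal headroom is exactly what the watchdog behaviour described above is tuning over time.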
Stage 5: SLA Managed

At this ultimate stage—the stage of the real-time enterprise—an organization uses IT service management software to maintain targeted service levels for each application in a holistic manner. Resources are automatically assigned to applications to maintain SLA compliance without manual intervention. The SDI environment makes sure the application gets the infrastructure it needs for optimal performance and compliance with the policies that govern it.
In subsequent posts, I will take a closer look at the Automated, Orchestrated, and SLA Managed stages. For now, the key is to understand where your organization falls in the SDI maturity model and what challenges need to be solved in order to take this journey. This understanding lays the groundwork for the development of strategies that move your data center closer to SDI—and the data center of the future.
Every disruptive technology in the data center forces IT teams to rethink the related practices and approaches. Virtualization, for example, led to new resource provisioning practices and service delivery models.
Cloud technologies and services are driving similar change. Data center managers have many choices for service delivery, and workloads can be more easily shifted between the available compute resources distributed across both private and public data centers.
Among the benefits stemming from this agility, new approaches for lowering data center energy costs have many organizations considering cloud alternatives.
Shifting Workloads to Lower Energy Costs
Every data center service and resource has an associated power and cooling cost. Energy, therefore, should be a factor in capacity planning and service deployment decisions. But many companies do not leverage all of the energy-related data available to them – and without a way to consolidate it, it’s challenging to make sense of the information generated by servers, power distribution units, airflow and cooling equipment, and other smart devices.
That’s why holistic energy management is essential to optimizing power usage across the data center. IT and facilities can rely on user-friendly consoles, such as graphical thermal and power maps of the data center, to gain a complete picture of the patterns that correlate workloads and activity levels to power consumption and dissipated heat. Specific services and workloads can also be profiled, and logged data helps build a historical database to establish and analyze temperature patterns. Having one cohesive view of energy consumption also reduces the need to rely on less accurate theoretical models, manufacturer specifications or manual measurements that are time consuming and quickly out of date.
A Case for Cloud Computing
This makes the case for cloud computing as a means to manage energy costs. Knowing how workload shifting will decrease the energy requirements for one site and increase them for another makes it possible to factor in the different utility rates and implement the most energy-efficient scheduling. Within a private cloud, workloads can be mapped to available resources at the location with the lowest energy rates at the time of the service request. Public cloud services can be considered, with the cost comparison taking into account the change to the in-house energy costs.
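The energy-aware scheduling described above reduces, at its simplest, to choosing the site with the lowest rate that still has spare capacity. The site names, rates and capacities below are hypothetical.

```python
# Sketch of energy-aware workload placement across distributed sites:
# pick the cheapest feasible site at the time of the service request.

sites = [
    {"name": "amsterdam", "rate_per_kwh": 0.18, "spare_kw": 40},
    {"name": "oregon",    "rate_per_kwh": 0.09, "spare_kw": 5},
    {"name": "singapore", "rate_per_kwh": 0.22, "spare_kw": 60},
]

def cheapest_site(power_needed_kw, sites):
    """Cheapest site that can absorb the workload's power draw."""
    feasible = [s for s in sites if s["spare_kw"] >= power_needed_kw]
    return min(feasible, key=lambda s: s["rate_per_kwh"])

# Oregon has the lowest rate but too little headroom for a 10 kW workload.
print(cheapest_site(10, sites)["name"])
```

In practice the comparison would also fold in public cloud pricing and the change to in-house energy costs, as the article notes, but the decision structure is the same.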
From a technology standpoint, any company can achieve this level of visibility and use it to take advantage of the cheapest energy rates for the various data center sites. Almost every data center is tied to at least one other site for disaster recovery, and distributed data centers are common for a variety of reasons. Add to this scenario all of the domestic and offshore regions where Infrastructure-as-a-Service is booming, and businesses have the opportunity to tap into global compute resources that leverage lower-cost power and in areas where infrastructure providers can pass through cost savings from government subsidies.
Other Benefits of Fine-Grained Visibility
For the workloads that remain in the company’s data centers, increased visibility also arms data center managers with knowledge that can drive down the associated energy costs. Energy management solutions, especially those that include at-a-glance dashboards, make it easy to identify idle servers. Since these servers still draw approximately 60 percent of their maximum power requirements, identifying them can help adjust server provisioning and workload balancing to drive up utilization.
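Flagging idle servers from utilization telemetry is straightforward once the data is collected; the sketch below uses an invented fleet and an invented 5 percent threshold to show the idea.

```python
# Sketch: flag idle servers from CPU-utilisation samples. This matters
# because, as noted above, an idle server can still draw roughly 60% of
# its peak power. Fleet data and threshold are illustrative.

IDLE_CPU_THRESHOLD = 0.05   # below 5% average CPU counts as idle

fleet = {
    "web-01":   [0.40, 0.55, 0.62],
    "batch-07": [0.01, 0.02, 0.01],
    "db-03":    [0.30, 0.28, 0.35],
}

idle = [name for name, samples in fleet.items()
        if sum(samples) / len(samples) < IDLE_CPU_THRESHOLD]
print(idle)
```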
Hot spots can also be identified. Knowing which servers or racks are consistently running hot can allow adjustments to the airflow handlers, cooling systems, or workloads to bring the temperature down before any equipment is damaged or services disrupted.
Visibility of the thermal patterns can be put to use for adjusting the ambient temperature in a data center. Every degree that temperature is raised equates to a significant reduction in cooling costs. Therefore, many data centers operate at higher ambient temperatures today, especially since modern data center equipment providers warrant equipment for operation at the higher temperatures.
Some of the same energy management solutions that boost visibility also provide a range of control features. Thresholds can be set to trigger notification and corrective actions in the event of power spikes, and can even help identify the systems that will be at greatest risk in the event of a spike. Those servers operating near their power and temperature limits can be proactively adjusted, and configured with built-in protection such as power capping.
Power capping can also provide a foundation for priority-based energy allocations. The capability protects mission-critical services, and can also extend battery life during outages. Based on knowledge extracted from historical power data, capping can be implemented in tandem with dynamic adjustments to server performance. Lowering clock speeds can be an effective way to lower energy consumption, and can yield measurable energy savings while minimizing or eliminating any discernable degradation of service levels.
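The threshold-and-cap behaviour described in the last two paragraphs can be sketched as a simple policy function. The wattage figures and the 90 percent warning band are invented for the example; real platforms expose capping through their management controllers.

```python
# Illustrative threshold-and-cap rule: warn as draw approaches the limit,
# cap (e.g. by lowering clock speed) once it is exceeded.

def apply_policy(server, draw_watts, limit_watts, warn_ratio=0.9):
    """Return the action an energy-management policy might take."""
    if draw_watts >= limit_watts:
        return f"{server}: CAP enforced at {limit_watts} W (throttle clocks)"
    if draw_watts >= warn_ratio * limit_watts:
        return f"{server}: WARN, {draw_watts} W near {limit_watts} W limit"
    return f"{server}: OK"

print(apply_policy("rack4-s2", 460, 500))   # inside the 90% warning band
print(apply_policy("rack4-s2", 505, 500))   # over the limit: cap and throttle
```

Layering priorities on top of this, so that mission-critical services keep their full allocation while lower-priority workloads absorb the caps, gives the priority-based allocation the article describes.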
Documented use cases for real-time feedback and control features such as thresholds and power capping prove that fine-grained energy management can yield significant cost reductions. Typical savings of 15 to 20 percent of the utility budget have been measured in numerous data centers that have introduced energy and temperature monitoring and control.
Understand and Utilize Energy Profiles
As the next step in the journey that began with virtualization, cloud computing is delivering on its promises: more data center agility, centralized management that lowers operating expenses, and cost-effective responses to the needs of fast-changing businesses.
With an intelligent energy management platform, the cloud also positions data center managers to more cost-effectively assign workloads to leverage lower utility rates in various locations. As energy prices remain at historically high levels, with no relief in sight, this provides a very compelling incentive for building out internal clouds or starting to move some services out to public clouds.
Every increase in data center agility, whether from earlier advances such as virtualization or the latest cloud innovations, emphasizes the need to understand and utilize energy profiles within the data center. Ignoring the energy component of the overall cost can hide a significant operating expense from the decision-making process.
Increasing Scalability and Cost-Effectiveness for InterSystems Caché with the Intel® Xeon® Processor E7 v3 Family
Healthcare systems are coping with an unprecedented level of change. They’re managing a new regulatory environment, a more complex healthcare ecosystem, and an ever-increasing demand for services—all while facing intense cost pressures.
These trends are having a dramatic impact on EMR systems and healthcare databases, which have to maintain responsiveness even as they handle more concurrent users, more data, more diverse workflows, and a wider range of application functionality.
As Intel prepared to introduce the Intel® Xeon® processor E7 v3 family, we worked with engineers from Epic and InterSystems to ensure system configurations that would provide robust, reliable performance. InterSystems and VMware were also launching their next-generation solutions, so the test team ran a series of performance tests pairing the Intel Xeon processor E7-8890 v3 with InterSystems Caché 2015.1 and a beta version of VMware vSphere ESXi 6.0.
The results were impressive. “We saw the scalability of a single operational database server increase by 60 percent,” said Epic senior performance engineer Seth Hain. “With these gains, we expect our customers to scale further with a smaller data center footprint and lower total cost of ownership.” Those results were also more than triple the end-user database accesses per second (global references or GREFs) achieved using the Intel® Xeon® processor E7-4860 with Caché® 2011.1.
These results show that your healthcare organization can use the Intel Xeon processor E7 v3 family to implement larger-scale deployments with confidence on a single, scale-up platform.
In addition, if you exceed the vertical scalability of a single server, you can use InterSystems Caché’s Enterprise Cache Protocol (ECP) to scale horizontally. Here again, recent benchmarks show great scalability. A paper published earlier this year reported more than a threefold increase in GREFs for horizontal scalability compared to previous-generation technologies.
This combination of outstanding horizontal and vertical scalability—in the cost-effective environment of the Intel® platform—is exactly what’s needed to meet rising demands and create a more agile, adaptable, and affordable healthcare enterprise.
What will these scalability advances mean for your healthcare IT decision makers and data center planners? How will they empower your organization to deliver outstanding patient care and enhance efficiency? I hope you’ll read the whitepapers and share your thoughts. And please keep in mind: Epic uses many factors, along with benchmarking results, to provide practical sizing guidelines, so talk to your Epic system representative as you develop your scalability roadmap.
Read the whitepaper about vertical scalability with the Intel Xeon processor E7 v3.
Read the whitepaper about horizontal scalability with Intel Xeon processors.
Join and participate in the Intel Health and Life Sciences Community
Follow us on Twitter: @IntelHealth, @IntelITCenter, @InterSystems, @vmwareHIT
Steve Leibforth is a Strategic Relationship Manager at Intel Corporation
The industry continues to advance the iWARP specification for RDMA over Ethernet, first ratified by the Internet Engineering Task Force (IETF) in 2007.
This article in Network World, “iWARP Update Advances RDMA over Ethernet for Data Center and Cloud Networks,” which I co-authored with Wael Noureddine of Chelsio Communications, describes two new extensions that help software developers of RDMA code by aligning iWARP more tightly with the RDMA technologies based on the InfiniBand network and transport, i.e., InfiniBand itself and RoCE. By bringing these technologies into alignment, we move closer to the goal of the OpenFabrics Alliance: that application developers need not concern themselves with which of these is the underlying network technology — RDMA will “just work” on all of them.
Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.