Recent Blog Posts

Transform Data Centre Networking, Without the Small Talk

[Image: Intel's history of networking in the data centre]

If I asked you to play a round of word associations starting with ‘Intel’, I doubt many of you would come back with ‘networking’. Intel is known for a lot of other things, but would it surprise you to know that we’ve been in the networking space for more than 30 years, collaborating with key leaders in the industry? I’m talking computer networking here of course, not the sort that involves small talk in a conference centre bar over wine and blinis. We’ve been part of the network journey from the early Ethernet days, through wireless connectivity, datacentre fabric and on to silicon photonics. And during this time we’ve shipped over 1 billion Ethernet ports.

 

As with many aspects of the move to the software-defined infrastructure, networking is changing – or if it’s not already, it needs to. We’ve spoken in this blog series about the datacentre being traditionally hardware-defined, and this is especially the case with networking. Today, most networks consist of a suite of fixed-function devices – routers, switches, firewalls and the like. This means that the control plane and the data plane are combined with the physical device, making network (re)configuration and management time-consuming, inflexible and complex. As a result, a datacentre that’s otherwise fully equipped with the latest software-defined goodies could still be costly and lumbering. Did you know, for example, that even in today’s leading technology companies, networking managers have weekly meetings to discuss what changes need to be made to the network (due to the global impact even small changes can have), which can then take further weeks to implement? Ideally, these changes should be made within hours or even minutes.

 

So we at Intel (and many of our peers and customers) are looking at how we can take the software-defined approach we’ve used with compute and apply it to the network as well. How, essentially, do we create a virtualised pool of network resources that runs on industry-standard hardware and that we can manage using our friend, the orchestration layer? We need to separate the control plane from the data plane.

 

[Image: Intel's workload consolidation strategy]

Building virtual foundations

 

The first step in this journey of network liberation is making sure the infrastructure is in place to support it. Historically, traditional industry-standard hardware wasn’t designed to deal with networking workloads, so Intel adopted a 4:1 workload consolidation strategy that uses best practices from the telco industry to optimise a system’s processing cores, memory, I/O scalability and performance to meet network requirements. In practice, this means combining general-purpose hardware with specially designed software to effectively and reliably manage network workloads for application, control, packet and signal processing.

 

With this uber-foundation in place, we’re ready to implement our network resource pools, where you can run a previously fixed network function (like a firewall, router or load balancer) on a virtual machine (VM) – just the same as running a database engine on a VM. This is network function virtualisation, or NFV, and it enables you to rapidly stand up a new network function VM, so you can meet those hours-and-minutes timescales rather than days-and-weeks. It also effectively and reliably addresses the OpEx and manual provisioning challenges associated with a fixed-function network environment, in the same way that compute virtualisation did for your server farm. And the stronger your fabric, the faster it’ll work – this is what’s driving many data centre managers to consider upgrading from 10Gb Ethernet, through 40Gb Ethernet and on to 100Gb Ethernet.
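To make the idea concrete, here is a minimal sketch of standing up a firewall network function using the OpenStack SDK for Python. This is an illustration rather than a prescribed toolchain; the cloud name, image, flavor and network names are placeholders for your own environment.

```python
# A minimal sketch of standing up a virtual firewall as an NFV workload,
# using the OpenStack SDK for Python. The cloud name, image, flavor and
# network names below are placeholders for your own environment.
import openstack

conn = openstack.connect(cloud="my-cloud")  # credentials come from clouds.yaml

image = conn.compute.find_image("vyos-firewall")      # hypothetical VNF image
flavor = conn.compute.find_flavor("m1.medium")        # sizing for the VNF
network = conn.network.find_network("provider-net")   # data-plane network

server = conn.compute.create_server(
    name="edge-firewall-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(f"Firewall VNF {server.name} is {server.status}")
```

In a real environment this call would be issued by the orchestration layer rather than by hand, which is exactly what gets you from days-and-weeks to hours-and-minutes.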

 

Managing what you’ve built

 

So, hooray! We now have a path to virtualising our network functions, so we can take the rest of the week off, right? Well, not quite. The next area I want to address is software-defined networking (SDN), which is about how you orchestrate and manage your shiny new virtual network resources at a data centre level. It’s often confused with NFV but they’re actually separate and complementary approaches.

 

Again, SDN is nothing new as a concept. Take storage for example – you used to buy a fixed storage appliance, which came with management tools built-in. However, now it’s common to break the management out of the fixed appliance and manage all the resources centrally and from one location. It’s the same with SDN, and you can think of it as “Network Orchestration” in the context of SDI.

 

With SDN, administrators get a number of benefits:

 

  • Agility. They can dynamically adjust network-wide traffic flows to meet changing needs in near real-time.
  • Central management. They can maintain a global view of the network, which appears to applications and policy engines as a single, logical switch.
  • Programmatic configuration. They can configure, manage, secure and optimise network resources quickly, via dynamic, automated SDN programs that they write themselves and can therefore tailor to the business (see the sketch after this list).
  • Open standards and vendor neutrality. They get simplified network design and operation because instructions are provided by SDN controllers instead of multiple vendor-specific devices and protocols. This open standards point is key from an end-user perspective, as it enables centralised management.
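To illustrate the programmatic configuration point, here is a rough sketch that pushes a traffic-steering policy to an SDN controller's northbound REST API. The endpoint, credentials and payload schema are hypothetical placeholders rather than any particular controller's API; OpenDaylight and other controllers each define their own.

```python
# Illustrative only: push a traffic-steering rule to an SDN controller's
# northbound REST API. The URL, credentials and payload schema are
# hypothetical placeholders, not a specific controller's API.
import requests

CONTROLLER = "https://sdn-controller.example.com:8443"
AUTH = ("admin", "admin")  # use real credentials or SSO in practice

rule = {
    "name": "prioritise-voip",
    "match": {"protocol": "udp", "dst_port_range": "10000-20000"},
    "action": {"set_queue": "realtime"},
}

resp = requests.post(
    f"{CONTROLLER}/api/v1/flow-policies",  # hypothetical endpoint
    json=rule,
    auth=AUTH,
    timeout=10,
)
resp.raise_for_status()
print("Policy accepted:", resp.json())
```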

 

Opening up

 

There’s still a way to go with NFV and SDN, but Intel is working across the networking industry to enable the transformation. We’re doing a lot of joint work on open source solutions and standards, such as OpenStack.org (unified compute management, including networking), OpenDaylight.org (a platform for network programmability) and the Cisco* OpFlex protocol (an extensible policy protocol). We’re also looking at how we proceed from here, and what needs to be done to build an open, programmable ecosystem.

 

Today I’ll leave you with this short interview with one of our cloud architects, talking about how Intel’s IT team has implemented software-defined, self-service networking. My next blog will be the last in this current series, and we’ll be looking at that other hot topic for all data centre managers – analytics. In the meantime, I’d love to hear your thoughts on how your business could use SDN to drive time, cost and labour out of the data centre.

 

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.


*Other names and brands may be claimed as the property of others.

Read more >

SDI: The Foundation for Cloud

When it comes to the cloud, there is no single answer to the question of how to ensure the optimal performance, scalability, and portability of workloads. There are, in fact, many answers, and they are all tied to the interrelated layers of the software-defined infrastructure (SDI) stack. The recently announced Intel Cloud for All Initiative is focused squarely on working with cloud software vendors and the community to deliver fully optimized SDI stacks that can serve a wide array of apps and data. To better understand the underlying strategy driving the Cloud for All Initiative, it’s important to see the relationships between each layer of the SDI stack.

 

In this post, we will walk through the layers of the SDI stack, as shown here.

 

[Image: the SDI stack]

 

The foundation

 

The foundation of software-defined infrastructure is the creation of infrastructure resource pools that establish compute, storage and network services. These resource pools draw on the performance and platform capabilities of Intel architecture to enable applications to understand, and then control, what they utilize. Our work with the infrastructure ecosystem is focused on ensuring that the infrastructure powering the resource pools is always optimized for a wide array of SDI stacks.

The OS layer

 

At the operating system level, the stack includes commonly used operating systems and software libraries that allow applications to achieve optimum performance while enabling portability from one environment to another. Intel has a long history of engineering with both OS vendors and the community, and has extended this work to lightweight OSes that provide greater efficiency for cloud-native workloads.

 

The Virtualization layer

 

Moving up the stack, we have the virtualization layer, which is essential to software-defined infrastructure. Without virtualization, SDI would not be possible. But in this context, virtualization can include more than just typical hypervisors. In order to establish resource pools, the infrastructure components of compute, storage, and network are virtualized through various means. The best resource pools are those that can continue to scale out to meet the growing needs of their consumers. Last but not least, the performance isolation provided by containers can be considered OS virtualization, which has enabled a whole new set of design patterns for developers to use. For both containers and hypervisors, Intel is working with software providers to fully utilize the capabilities of Intel® Virtualization Technology (Intel® VT) to drastically reduce performance overhead and increase security isolation. For both storage and network, we have additional libraries and instruction sets that help deliver the best possible performance for this wide array of infrastructure services.
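As a concrete aside, hardware-assisted virtualization is something the platform advertises directly: on Linux, the vmx flag in /proc/cpuinfo indicates Intel VT-x. The check below is a minimal, Linux-only sketch of that test.

```python
# A minimal, Linux-only sketch: hardware-assisted virtualization is advertised
# by the CPU itself, and the 'vmx' flag in /proc/cpuinfo indicates Intel VT-x.
def has_intel_vtx(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if any CPU in /proc/cpuinfo reports the vmx flag."""
    try:
        with open(cpuinfo_path) as f:
            return any(line.startswith("flags") and "vmx" in line.split()
                       for line in f)
    except OSError:
        return False  # non-Linux host, or /proc not available

if __name__ == "__main__":
    print("Intel VT-x available:", has_intel_vtx())
```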

 

The Orchestration layer

 

There are numerous orchestration layers and schedulers available, but for this discussion we will focus on those being built in the open: OpenStack, Apache Mesos, and Kubernetes. This layer provides central oversight of the status of the infrastructure, what is allocated and what is consumed, how applications or tenants are deployed, and how best to meet the goal of most data center infrastructure teams: increase utilization while maintaining performance. Intel’s engagement within the orchestration layer focuses on working with the industry to harden this layer as well as bring in advanced algorithms that can help all data centers become more efficient. Some examples are our work in the OpenStack community to improve the availability of the cloud services themselves and to provide rolling upgrades so that the cloud and its tenants are always on. In Mesos, we are working to help users of this technology use all available computing slack so they can improve their TCO.
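As a rough illustration of that utilization goal, the sketch below uses the Kubernetes Python client to compare the CPU requested by all pods against the cluster's allocatable capacity. It assumes a reachable cluster and a local kubeconfig, and it is illustrative only rather than part of any scheduler.

```python
# Sketch: compare CPU requests against allocatable capacity with the Kubernetes
# Python client -- the raw signal behind "increase utilization while
# maintaining performance". Assumes a kubeconfig for a reachable cluster.
from kubernetes import client, config

def parse_cpu(value: str) -> float:
    """Convert Kubernetes CPU quantities ('500m', '2') to cores."""
    return float(value[:-1]) / 1000 if value.endswith("m") else float(value)

config.load_kube_config()
v1 = client.CoreV1Api()

capacity = sum(parse_cpu(n.status.allocatable["cpu"]) for n in v1.list_node().items)

requested = 0.0
for pod in v1.list_pod_for_all_namespaces().items:
    for c in pod.spec.containers:
        if c.resources and c.resources.requests and "cpu" in c.resources.requests:
            requested += parse_cpu(c.resources.requests["cpu"])

print(f"CPU requested: {requested:.1f} of {capacity:.1f} cores "
      f"({100 * requested / capacity:.0f}%)")
```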

 

The Developer environment

 

The entire SDI infrastructure is really built to power the developers’ code and data that all of us, as consumers, use every day of our lives. Intel has a long history of helping improve debugging tools, making it easier for developers to move to new design patterns like multi-threading and, now, distributed systems, and helping developers get the most performance out of their code. We will continue to increase our focus here to make sure that developers can concentrate on building the best software, and let the tools help them build always-on, highly performant apps and services.

 

For a close-up look at Intel’s focus on standards-based innovation for the SDI stack, check out the related sessions at the Intel Developer Forum, which takes place August 18–20 in San Francisco. These sessions will include a class that dives into the Intel vision for the open, standards-based SDI stacks that are the key to mainstream cloud adoption.

Read more >

Taking the Training Wheels off Cloud with Cloud for All Initiative

Cloud computing has been a tremendous driver of business growth over the past five years. Digital services such as Uber, AirBnB, Coursera, and Netflix have defined the consumer zeitgeist while redefining entire industries in the process. This first wave of cloud-fueled business growth has largely been created by businesses leveraging cloud-native applications aimed at consumer services. Traditional enterprises that seek the same agility and efficiency the cloud provides have viewed migration of traditional enterprise applications to the cloud as a slow and complex challenge. At the same time, new cloud service providers are seeking to compete at cost parity with large providers, and the industry-standard solutions that could help have been slow to arrive. The industry simply isn’t moving fast enough to address these very real customer challenges, and our customers are asking for help.

 

To help solve these real issues, Intel is announcing the Cloud for All Initiative with the goal of accelerating the deployment of tens of thousands of clouds over the next five years. This initiative is focused solely on cloud adoption, to deliver the benefits of cloud to all of our customers. It represents an enormous efficiency gain and strategic transition for enterprise IT and cloud service providers. The key to delivering the efficiency of the cloud to the enterprise is rooted in software-defined infrastructure. This push for more intelligent and programmable infrastructure is something we’ve been working on at Intel for several years. The ultimate goal of software-defined infrastructure is one where compute, storage and network resource pools are dynamically provisioned based on application requirements.

 

Cloud for All has three key objectives:

 

  1. Invest in broad industry collaborations to create enterprise-ready, easy-to-deploy SDI solutions
  2. Optimize SDI stacks for high efficiency across workloads
  3. Align the industry on standards and a development focus to accelerate cloud deployment

 

Through investment, Intel will utilize our broad ecosystem relationships to ensure that a choice of SDI solutions supporting both traditional enterprise and cloud-native applications is available in easy-to-consume options. This work will include scores of industry collaborations that ensure SDI stacks have frictionless integration into data center infrastructure.

 

Through optimization, Intel will work with cloud software providers to ensure that SDI stacks are delivered with rich enterprise feature sets, highly available and secure, and scalable to thousands of nodes.  This work will include the full optimization of software to take advantage of Intel architecture features and technologies like Intel virtualization technology, cloud integrity technology, and platform telemetry, all to deliver optimal enterprise capabilities.

 

Through industry alignment, Intel will use our leadership role in industry organizations, as well as our work with the broad developer community, to make sure the right standards are in place so that workloads have true portability across clouds. This standardization will help enterprises have the confidence to deploy a mix of traditional and cloud-native applications.

 

This work has already started. We have been engaged in the OpenStack community for a number of years as a consumer, and more recently through joining the Foundation board last year. We have used that user and leadership position to push for features needed in the enterprise. Our work does not stop there, however: over the past few months we’ve announced collaborations with cloud software leaders including CoreOS, Docker and Red Hat, highlighting enterprise readiness for OpenStack and container solutions. We’ve joined with other industry leaders to form the Open Container Initiative and Cloud Native Computing Foundation to drive the industry standards and frameworks for cloud-native applications.

 

Today, we’ve announced our next step in Cloud for All with a strategic collaboration with Rackspace, the co-founder of OpenStack and a company with a deep history of collaboration with Intel. We’ve come together to deliver a stable, predictable, and easy-to-operate enterprise-ready OpenStack, scalable to thousands of nodes. This will be accomplished through the creation of the OpenStack Innovation Center, where we will assemble large developer teams across Intel and Rackspace to work together on the key challenges facing the OpenStack platform. Our upstream contributions will align with the priorities of the OpenStack Foundation’s Enterprise Workgroup. To facilitate this effort we will create the Hybrid Cloud Testing Cluster, a large-scale environment open to all developers in the community who wish to test their code at scale, with the objective of improving the OpenStack platform. In total, we expect this collaboration to engage hundreds of new developers, internally and through community engagement, to address critical requirements for the OpenStack community.

 

Of course, we’ve only just begun.  You can expect to hear dozens of announcements from us in the coming year including additional investments and collaborations, as well as the results of our optimization and delivery.  I’m delighted to be able to share this journey with you as Cloud for All gains momentum. We welcome discussion on how Intel can best work with industry leaders and customers to deliver the goals of Cloud for All to the enterprise.

Read more >

Intel® Cloud for All initiative to Invest in OpenStack Community

Intel is a strong believer in the value cloud technology offers businesses around the world. It has enabled new usage models like Uber*, Waze*, Netflix* and even AirBnB*. Unfortunately, access to this technology has been limited because industry solutions are … Read more >

The post Intel® Cloud for All initiative to Invest in OpenStack Community appeared first on Intel Software and Services.

Read more >

Intel IoT and QNX Software Systems Collaborate on ADAS Solutions

Much like two perfectly synced road trip buddies who have been traveling together for decades, Intel and QNX Software Systems, a subsidiary of BlackBerry, are embarking on a new adventure: advancing in-vehicle technologies that deliver new driving experiences. The collaboration … Read more >

The post Intel IoT and QNX Software Systems Collaborate on ADAS Solutions appeared first on IoT@Intel.

Read more >

The Impact of ITA: the Progress of Today & the Promise of Tomorrow

This past Saturday, the World Trade Organization (WTO) made significant progress toward expanding the Information Technology Agreement (ITA), an accord that has advanced innovation, trade and economic growth around the world for the past 18 years. In today’s rapidly evolving … Read more >

The post The Impact of ITA: the Progress of Today & the Promise of Tomorrow appeared first on Policy@Intel.

Read more >

10 Mobile BI Strategy Questions: Technology Infrastructure

When an organization is considering implementing a mobile BI strategy, it needs to ask whether its current information technology (IT) and business intelligence (BI) infrastructure can support mobile BI. It must determine if there are any gaps that need to be addressed prior to going live.

 

When we think of an end-to-end mobile BI solution, there are several areas that can impact the user experience. I refer to them as choke points. Some of the risks associated with these choke points can be eliminated; others will have to be mitigated. Depending on the business model and how the IT organization is set up, these choke points may be dependent on the configuration of technology or they may hinge on processes that are embedded into business or IT operations. Evaluating both infrastructures for mobile BI readiness is the first step.

 

IT Infrastructure’s Mobile BI Readiness

 

The IT infrastructure typically includes the mobile devices, wireless networks, and any other services or operations that will enable these devices to operate smoothly within a set of connected networks, which span those owned by the business or external networks managed by third party vendors. As mobile BI users move from one point of access to another, they consume data and assets on these connected networks and the mobile BI experience should be predictable within each network’s constraints of flexibility and bandwidth.

 

Mobile device management (MDM) systems also play a crucial role in the IT infrastructure. Before mobile users have a chance to access any dashboards or look at data on any reports, their mobile devices need to be set up first. Depending on the configuration, enablement may include device and user enrollment, single sign-on (SSO), remote access, and more.

 

Additionally, failing to properly enroll either the device or the user may result in compliance issues or other risks. It’s critical to know how much of this comes preconfigured with the device and how the user will manage these tasks. When you add bring-your-own-device (BYOD) arrangements to the mix, the equation gets more complex.

 

BI Infrastructure’s Mobile BI Readiness

 

Once the user is enabled on the mobile device and business network, the BI infrastructure will be employed. The BI infrastructure typically includes the BI software, hardware, user profiles, and any other services or operations that will enable consumption of BI assets on mobile devices. The mobile BI software, whether it is an app or web-based solution, will need to be properly managed.

 

The first area of concern for an app-based solution is the installation of the app from an app store. For example, does the user download the app from iTunes (in the case of an iPad or iPhone) or from an IT-managed corporate app store or gallery? Is it a custom-built app developed in-house or is it part of the current BI software? Does the app come preconfigured with the company-supplied mobile device (similar to how email is set up on a PC) or is the user left alone to complete the installation?

 

When the app is installed, are we done? No. In many instances, the app will need to be configured to connect to the mobile BI servers. Moreover, this configuration step has to come after obtaining proper authorizations, which involves entering the user’s access credentials (at minimum a user ID and password, unless SSO can be leveraged).
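To illustrate the kind of configuration step described here, the sketch below points a hypothetical mobile BI client at its server and verifies either an SSO token or a user ID and password before the first session. The server URL and endpoint are invented for illustration and do not correspond to any specific BI product.

```python
# Illustrative sketch of the post-install configuration step described above:
# point the mobile BI app at its server and verify the user's credentials.
# The server URL and endpoint are hypothetical, not any specific BI product's API.
from typing import Optional
import requests

MOBILE_BI_SERVER = "https://bi.example.com"  # placeholder server address

def verify_mobile_bi_access(user_id: str,
                            password: Optional[str] = None,
                            sso_token: Optional[str] = None) -> bool:
    """Return True if the user can open a session on the mobile BI server."""
    if sso_token:  # preferred path when single sign-on is available
        resp = requests.get(f"{MOBILE_BI_SERVER}/api/session",
                            headers={"Authorization": f"Bearer {sso_token}"},
                            timeout=10)
    else:          # fall back to user ID and password
        resp = requests.post(f"{MOBILE_BI_SERVER}/api/session",
                             json={"user": user_id, "password": password},
                             timeout=10)
    return resp.status_code == 200
```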

 

If the required authorizations are not obtained in time, regardless of existing BI user profiles, the user configuration can only be partially completed. More often than not, mobile BI users will need assistance with technical and process-related topics. Hence, streamlining both the installation and configuration steps will further improve the onboarding process.

 

Bottom Line

 

Infrastructure is the backbone of any technology operation, and it’s equally important for mobile BI. Close alignment with enterprise mobility, as I wrote in “10 Mobile BI Strategy Questions: Enterprise Mobility,” will help to close the gaps in many of these areas. When we’re developing a mobile BI strategy, we can’t take the existing IT or BI infrastructure for granted.

 

Where do you see the biggest gap when it comes to technology infrastructure in mobile BI planning?

 

Stay tuned for my next blog in the Mobile BI Strategy series.

 

Connect with me on Twitter at @KaanTurnali and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.

Read more >

Cloud Native Computing Foundation Charters Standards for Cloud Native App Portability


With the cloud software industry advancing on a selection of software-defined infrastructure ‘stacks’ to support enterprise data centers, the question of application portability comes squarely into focus. A new ‘style’ of application development has started to gather momentum in both the public cloud and the private cloud. Cloud-native applications, as this new style has been named, are applications that are container-packaged, dynamically scheduled, and microservices-oriented. They are rapidly gaining favor for their improved efficiency and agility compared to more traditional monolithic data center applications.
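For a concrete sense of the 'microservices-oriented' part, here is a deliberately tiny, stateless service written against the Python standard library; the sort of component that would be container-packaged and handed to a scheduler. The port and health-check path are illustrative choices, not a standard.

```python
# A deliberately tiny, stateless microservice using only the standard library --
# the kind of component that gets container-packaged and dynamically scheduled.
# The port and the /healthz path are illustrative choices, not a standard.
import json
import os
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":   # liveness probe for the orchestrator
            body = json.dumps({"status": "ok"}).encode()
        else:
            body = json.dumps({"service": "hello",
                               "host": socket.gethostname()}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))  # configuration via environment
    HTTPServer(("", port), Handler).serve_forever()
```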

 

However, creating a cloud-native application does not eliminate the dependencies on traditional data center services. Foundational services such as networking, storage, automation, and of course compute are all still very much required. In fact, since the concept of a full virtual machine may not be present in a cloud-native application, these applications rely significantly on their infrastructure software to provide the right components. When done well, a cloud-native SDI stack can provide efficiency and agility previously seen only in a few hyperscale environments.

 

Another key aspect of the cloud-native application is that it should be highly portable. This portability between environments is a massive productivity gain for both developers and operators. An application developer wants the ability to package an application component once and have it be reusable across all clouds, both public and private. A cloud operator wants the freedom to position portions of their application where it makes the most sense, whether that is on their private cloud or with their public cloud partner. Cloud-native applications are the next step in true hybrid cloud usage.

 

So, with this promise of efficiency, operational agility, and portability, where do data center managers look for the definitions of how the industry will address movement of apps between stacks? How can one deploy a cloud-native app and ensure it can be moved across clouds and SDI stacks without issue? Without a firm answer, can one really develop cloud-native apps with the confidence that portability will not be limited to environments running identical SDI stacks? These are the types of questions that often stall organizational innovation, and they are the reason Intel has joined with other cloud leaders in the formation of the Cloud Native Computing Foundation (CNCF).

 

Announced this week at the first-ever KuberCon event, the CNCF has been chartered to provide guidance, operational patterns, standards and, over time, APIs to ensure container-based SDI stacks are both interoperable and optimized for a seamless, performant developer experience. The CNCF will work with the recently formed Open Container Initiative (OCI) towards the synergistic goal of addressing the full scope of container standards and the supporting services needed for success.

 

Why announce this at KuberCon? The goal of the CNCF is to foster innovation in the community around these application models, and the best way to speed innovation is to start with some seed technologies. Much the same way it is easier to start writing (a blog, perhaps?) when you have a few sentences on screen rather than staring at a blank page, the CNCF is starting with a seed rather than a blank slate. Kubernetes, having just passed its 1.0 release, will be one of the first technologies used to kick-start this effort. Many more technologies, and even full stacks, will follow, with a goal of several ‘reference’ SDI platforms that support the portability required.
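To show what 'dynamically scheduled' means in practice, here is a sketch that declares a three-replica Deployment through the Kubernetes Python client and leaves node placement entirely to the scheduler. The image name, labels and namespace are placeholders, and the client API shown is the current one rather than anything specific to the 1.0 release.

```python
# Sketch: declare a three-replica Deployment and let Kubernetes schedule it.
# The image, names and labels are placeholders; assumes a reachable cluster.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

container = client.V1Container(
    name="hello",
    image="registry.example.com/hello:1.0",   # hypothetical container image
    ports=[client.V1ContainerPort(container_port=8080)],
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
print("Deployment submitted; the scheduler decides node placement.")
```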

 

What is Intel’s role here? Based on our decades of experience helping lead industry innovation and standardization across computing hardware and open source software, we are firmly committed to the CNCF goals and plan to actively participate in the leadership body and Technical Oversight Committee of the Foundation. This effort reflects our broader commitment to working with the industry to accelerate the broad use of cloud computing through the delivery of optimized, feature-complete SDI stacks that are easy to consume and operate. This engagement complements our existing leadership roles in the OCI, the OpenStack Foundation and the Cloud Foundry Foundation, as well as our existing work driving solutions with the SDI platform ecosystem.

 

With the cloud software industry accelerating its pace of innovation, please stay tuned for more details on Intel’s broad engagement in this space. To deepen your engagement with Intel, I invite you to join us at the upcoming Intel Developer Forum in San Francisco to gain a broader perspective on Intel’s strategy for acceleration of cloud.

Read more >

Intel, Industry Highlights Latest Collaboration at KuberCon


 

 

My hometown of Portland, Oregon is home this week to the first-ever KuberCon Launch event, bringing together the Kubernetes ecosystem at OSCON. While the industry celebrates the delivery of Kubernetes 1.0 and the formation of the Cloud Native Computing Foundation, this week is also an opportunity to gauge the state of development around open source container solutions.

 

Why so much attention on containers? Basically, it is because containers help software developers and infrastructure operators at the same time. This technology will help put mainstream data centers and developers on the road to the advanced, easy-to-consume, easy-to-ship-and-run, hyperscale technologies that are a hallmark of the world’s largest and most sophisticated cloud data centers. The container approach packages up applications and software libraries to create units of computing that are both scalable and portable – two keys to the agile data center. With the addition of Kubernetes and other key technologies like Mesos, the orchestration and scheduling of containers is making what was once impossible simple.

 

This is a topic close to the hearts of many people at Intel. We are an active participant in the ecosystem that is working to bring the container model to a wide range of users and data centers as part of our broader strategy for standards based stack delivery for software defined infrastructure.  This involvement was evidenced earlier this year through our collaborations with both CoreOS and Docker, two leading software players in this space, as well as our leadership engagement in the new Open Container Project.

 

As part of the effort to advance the container cause, Intel is highlighting the latest advancements in our CoreOS collaboration to advance and optimize the Tectonic stack, a commercial distribution of Kubernetes plus CoreOS software. At KuberCon, Intel, Redapt, Supermicro and CoreOS are showing a Tectonic rack running on bare metal, highlighting the orchestration and portability that Tectonic provides to data center workloads. Local rock-star company Jive has been very successful running its workloads on this platform, showing that its app can move between public cloud and on-premises bare-metal cloud. We’re also announcing extensions of our collaboration with CoreOS to drive broad developer training for Tectonic, and title sponsorship of CoreOS’s Tectonic Summit event planned for December 2nd and 3rd in New York. For details, check out the CoreOS news release.

 

We’re also featuring an integration of an OpenStack environment running Kubernetes-based containers within an enterprise-ready appliance. This collaboration with Mirantis, Redapt and Dell highlights the industry’s work to drive open source SDI stacks into solutions that address enterprise customers’ needs for simpler-to-deploy solutions, and demonstrates the progress the industry has made in integrating Kubernetes with OpenStack as it reaches 1.0.

 

Our final demonstration features a new software and hardware collaboration with Mesosphere, the company behind much of the engineering for Mesos, which provides container scheduling for Twitter, Apple Siri and AirBnB, among other digital giants. Here, we’ve worked to integrate Mesosphere’s DCOS platform with Kubernetes on a curated and optimized hardware stack supplied by Quanta. This highlights yet another example of an open source SDI stack integrating efficient container-based virtualization to drive the portability and orchestration of hyperscale.

 

For a closer look at Intel’s focus on standards-based innovation for the software-defined infrastructure stack, check out my upcoming presentation at the Intel Developer Forum (IDF). I’ll be detailing further advancements in our industry collaborations to deliver SDI to the masses, as well as going deeper into the technologies Intel is integrating into data center infrastructure to optimize SDI stacks for global workload requirements.

Read more >

5 IoT Questions with Smart Home Innovator Alexandra Deschamps-Sonsino

We sat down with smart home appliance designer Alexandra Deschamps-Sonsino recently to chat about emerging Internet of Things (IoT) trends for smart homes. Alexandra is the founder of Design Swarm, where she is an interaction designer, product designer, entrepreneur, and … Read more >

The post 5 IoT Questions with Smart Home Innovator Alexandra Deschamps-Sonsino appeared first on IoT@Intel.

Read more >

5 Questions for Tracey Moorhead, President, VNAA

 

As populations age around the world, home healthcare will become a more vital part of caring for senior patients. To learn more about this growing trend, and how technology can play a role, we sat down with Tracey Moorhead, president and CEO of the Visiting Nurse Associations of America (VNAA), which represents non-profit providers of home health, hospice, and palliative care services and has more than 150 agency members in communities across the country.

 

Intel: How has technology impacted the visiting nurse profession?

 

Moorhead: Technology has impacted the profession of home care providers, particularly, by expanding the reach of our various agencies. It allows our agencies to cover greater territories. I have a member in Iowa who covers 24,000 square miles and they utilize a variety of technologies to provide services to patients in communities that are located quite distantly from the agencies themselves. It has also impacted the individual providers by helping them communicate more quickly back to the home office and to the nurses making decisions about the course of care for the individual patients.

 

The devices that our members and their nurses are utilizing are increasingly tablet-based. We do have some agencies who are utilizing smartphones, but for the most part the applications, the forms and checklists that our nurses utilize in home based care are better suited for a tablet-based app.

 

Intel: What is the biggest challenge your members face?

 

Moorhead: One of the biggest challenges that we have in terms of better utilizing technology in the home based care industry is interoperability; not only of devices but also of platforms on the devices. An example is interoperability of electronic health records. Our individual agencies may be collaborating with two or more hospital systems, who may have two or more electronic health records in utilization. Combine that with different physician groups or practice models with different applications within each of those groups and you have a recipe for chaos in terms of interoperability and the rapid sharing and care coordination for these various patients out in the field. The challenges of interoperability are quite significant: they prevent effective handoffs, they cause great challenges in effective and rapid care coordination among providers, and they really continue to maintain this fragmentation of healthcare that we’ve seen.

 

Intel: What value are patients seeing with the integration of technology in care?

 

Moorhead: Patients and family caregivers have responded so positively to the integration of these new technologies and apps. Not only does technology allow for our nurses to communicate with family members and caregivers to help them understand how to best care for and support their loved ones, but it also allows the patients to have regular communication with their nurse care providers when they’re not in the home. Our patients are able to contact the home health agency or their nurse on days when there may not be a scheduled visit.

 

I visited a family in New Jersey with one of our agencies and they were so excited that it was visit day. When the nurse arrived not only was the wife there, but the two daughters, the daughter-in-law and also the son were there to greet the nurse and to talk with the nurse at length about the progress of the father and the challenges that they were having caring for him. That experience for me really brought home the person-centered, patient-centered, family-centered care that our patients provide and the technologies that were being utilized in that home not only when the nurse was there but the technologies that the nurse had provided with the family, including a tablet with an app to allow them to contact the home health agency, really made the family feel like they had the support that they needed to best care for their father and husband.

 

Intel: How are the next generation of home care providers adapting to technology?

 

Moorhead: The next generation of nurses, the younger nurses who are just entering the field and deciding to devote themselves to the home based care delivery system, are very accustomed to utilizing technologies, whether on their tablets or their mobile phones, and have integrated this quite rapidly into their care delivery models and processes. Many of them report to us that they feel it provides them a significant degree of freedom and support for the care delivery to their patients in the home.

 

Intel: Where will the home care profession be in five years from now?

 

Moorhead: I see significant change coming in our industry in the next five years. We are, right now, in the midst of a cataclysm of evolution for the home based care provider industry and I see only significant opportunities going forward. It’s certainly true that we have significant challenges, particularly on the regulatory and administrative burden side, but the opportunities in new care delivery models are particularly exciting for us. We see the quality improvement goals, the patient-centered goals and the cost reduction goals of care delivery models such as accountable care organizations and patient-centered medical homes as requiring the integration of home based care providers. Those organizations simply will not be able to achieve the outcomes or the quality improvement goals without moving care into the community and into the home. And so, I see a rapid expansion and increased valuation of home based care providers.

 

The technologies that we see implemented today will only continue to enhance the ability to care for these patients, to coordinate care and to communicate back to those nascent health delivery models, such as ACOs and PCMHs.

Read more >

IT Refresh is Over-Rated


 

I have to admit my headline is a little tongue in cheek but please hear me out. 

 

With the recent end of support for Windows Server 2003, the noise around refresh can be deafening.  I’d bet many people have already tuned out or have grown weary of hearing about it. They’ve listened to the incessant arguments about increased risks of security breaches, issues around compatibility, and estimates of costly repair bills should something go wrong.

 

While it’s true all of these things will leave a business vulnerable and less productive, it appears that’s just not enough for many companies to make a shift. 

 

Microsoft Canada estimates that 40% of its install base is running Windows Server 2003, illustrating Canadian companies’ conservatism when it comes to major changes to their infrastructure.  I’d suggest that in this current economic environment, being conservative in the adoption of new technology is leaving us vulnerable to an attack that could have a far reaching impact.

 

But let’s set that aside for the time being. You’ll be pleased to know I’m not going to talk about all the common reasons for refreshing your hardware and software, including security, productivity, downtime, and support costs. All these issues are important and valid; however, I’ve no doubt we will hear a great deal about all of them in the weeks leading up to July 14th.

 

Instead I offer you a slightly different perspective.

 

In a previous post I wrote about a global marketplace that is getting more competitive.  Canadian companies are, and will be, facing off against larger enterprises located around the world. Competition is no longer from across the street or in the neighboring town. Canadian trade agreements have been signed or updated with numerous countries or economic regions globally including the European Union, China, Korea, and Chile. While these agreements signal opportunities for businesses to gain access to new markets, they also herald the risk of increased domestic competition.

 

To continue to succeed, businesses will have to find more efficient ways of doing whatever they need to get done.  This means pushing beyond their traditional comfort zone towards greater innovation.  This push will undoubtedly be enabled by advances in technology to support productivity gains.

 

As companies consider what it will take to succeed into the future, I believe you need to look at the people you have working for your company.  Are these the employees who can drive your company forward? Are they future leaders or innovators who can help you compete against global powerhouses?

 

Here’s where an important impetus to refresh your technology begins to take shape.

 

The employees of Generation Y and Generation C, also known as the connected generation, want to work for progressive, leading-edge companies and are shying away from large, stodgy traditional businesses or governments. Being perceived as dated will limit your recruitment options, as the top candidates choose the firms that are progressive in all areas of their business.

 

I’ve seen statistics indicating that 75% of the workforce will be made up of Generation Y workers by 2025. They are already sending ripples of change throughout corporate cultures and have started to cause a shift in employment expectations. These employees aren’t attracted to big blue chip firms, and in fact only 7% of millennials currently work for Fortune 500 companies.  Instead, they are attracted to the fun, dynamic, and flexible environments touted by start-ups.

 

The time has never been better to decide if you are going to continue to rinse and repeat, content to stick with the status quo, or if you are ready to embrace a shift that could take your business to the next level and at the same time position yourself to become more attractive to the next generation of employees.

 

So let’s talk a little about the opportunity here: In my experience from the UK, and I would argue it is similar in any market, new opportunities are realized first by small- and medium-sized businesses. They are more nimble and typically they are in a better position to make a significant change more quickly.  Since they are also closest to their customers and their local community, they can shift gears more rapidly to respond to changes they are seeing in their local market and benefit from offering solutions first that meet an emerging need.

 

SMBs are also in a strong position to navigate and overcome barriers to adoption of new technology since they don’t have that massive install base that requires a huge investment to change. In other words, they don’t have a mountain of technology to climb in order to deliver that completely new environment, but it takes vision and leadership willing to make a fundamental shift that will yield future dividends.

 

The stark truth is that millennials are attracted to and have already adopted the latest technology. They don’t want to take a step backward when they head into the workplace.  The technologies they will use and the environment they will be working in are already being factored into their decision about whether or not to accept a new position.

 

How do you think your workplace would be viewed by these future employees?  It goes without saying that we need to equip people to get their jobs done without worrying about the speed of the technology they’re using. No one wants delays caused by technology that is aging, slowing them down, and preventing them from doing what they need to get done.  Today’s employees are looking for more: more freedom, more flexibility, and more opportunities. But the drive to provide more is accelerating a parallel requirement for increased security to keep sensitive data safe.

 

I’d offer these final thoughts:  In addition to the security and productivity reasons, companies challenged to find talent should consider a PC refresh strategy as a tool to attract the best and brightest of the next generation.

 

Technology can be an enabler to fundamentally transform your workplace but you need a solid foundation on which to build.  A side benefit is that it will also help deliver the top talent to your door.

Read more >