Recent Blog Posts
Frustration with electronic health record (EHR) systems notwithstanding, the data aggregation processes that have grown out of healthcare’s adoption of the EHR are now spawning analytical capabilities that were unthinkable just 15 years ago. By leveraging big data to track everything from patient recovery rates to hospital finances, healthcare organizations are capturing and storing data sets that are changing the way doctors, caregivers and payers tackle larger-scale health issues.
It’s not just happening on the clinical side, either, where EHRs are extending real-time patient information to doctors and predictive analytics are helping physicians to better track and understand their patients’ medical conditions.
In Kentucky, for example, tech investments by the state’s largest provider systems are estimated at over $600 million, a number that doesn’t even reflect investments from two of the biggest local organizations, Baptist Health and University of Kentucky HealthCare. The data collected by these hospitals includes—and far exceeds—the EMR basics mandated under ARRA, according to an article in The Lane Report.
While the goal of improving quality of care is, of course, a key driver of such investments, so is the government mandate tying Medicare and Medicaid reimbursement to outcomes. According to a recent report from McKinsey & Company, more than 50 percent of doctors’ offices and almost 75 percent of hospitals nationwide are managing patient information electronically. So, it’s not surprising that big data is catching the attention of healthcare’s management teams.
By quantifying and analyzing an endless variety of metrics—including things like R&D, claims, costs, and insights gleaned from patients—the industry is refining its approach to both preventative care and treatment, and saving money in the process. A good example can be found in the analysis of data surrounding readmission rates, which some hospitals are now using to stave off premature discharges and, by extension, exorbitant penalties.
Others, such as Brigham and Women’s Hospital, are already applying algorithms to generate savings beyond readmissions, in areas that include high-cost patients, triage, decompensation, adverse events, and treatment optimization.
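To make that concrete, here is a minimal sketch of the kind of readmission analysis described above, written in Python with pandas. The records, column names, and 30-day window are illustrative assumptions, not data or logic from any particular hospital system.

```python
import pandas as pd

# Illustrative discharge records; in practice these would come from the EHR.
df = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3],
    "admit_date": pd.to_datetime(
        ["2014-01-02", "2014-01-20", "2014-02-01", "2014-03-05", "2014-05-01"]),
    "discharge_date": pd.to_datetime(
        ["2014-01-05", "2014-01-25", "2014-02-04", "2014-03-08", "2014-05-04"]),
})

df = df.sort_values(["patient_id", "admit_date"])

# A readmission here means the same patient is admitted again
# within 30 days of a discharge.
df["next_admit"] = df.groupby("patient_id")["admit_date"].shift(-1)
df["readmitted_30d"] = (df["next_admit"] - df["discharge_date"]).dt.days <= 30

rate = df["readmitted_30d"].mean()
print(f"30-day readmission rate: {rate:.1%}")
```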
While there’s room to debate the extent to which big data is improving patient outcomes—or the scope of savings attributable to big data initiatives given the associated system costs—the trend toward leveraging data for better outcomes and savings will only continue to grow as CIOs advance meaningful implementations of solutions, and major technology companies continue to expand the industry’s basket of options.
How is your healthcare organization applying big data to overcome challenges? Have the results proven worthwhile?
As a B2B journalist, John Farrell has covered healthcare IT since 1997 and is a sponsored correspondent for Intel Health & Life Sciences.
Read John’s other blog posts
Fundamental reforms are needed to our Nation’s immigration laws for Intel to be able to hire enough talented people to support our advanced manufacturing and R&D operations in the United States. Ultimately, this will require a legislative solution and we are … Read more >
The post Intel Statement on President’s Executive Order on Immigration appeared first on Policy@Intel.
While the holidays are filled with fun, excitement and favorite traditions, they are also a hectic and sometimes stressful time of the year. Amidst shopping, cooking and entertaining, we often feel as though we might benefit from an extra helper … Read more >
The post Tech on Turkey Day: Simplifying and Adding Flair to Your Holiday Celebration appeared first on Technology@Intel.
This week, we have many updates and bug fixes to Meshcentral. Under the covers we made significant fixes, including several for high-priority bugs. Thank you to everyone that keeps… Read more
SAP TechEd 2014 at Las Vegas was an exciting and enjoyable show, brimming with opportunities to learn about the latest innovations and advances in the SAP ecosystem. Intel had its own highlights, as I explain in this video overview of Intel’s key activities. These included the walk-on appearance of Shannon Poulin, vice president of Intel’s Data Center Group, during SAP President Steve Lucas’s executive keynote. Shannon did his best to upstage the shiny blue Ford Mustang that Steve gave away during the keynote, but that was a hard act to top. Curt Aubley, Intel Data Center Group’s vice president and CTO, took part in an executive summit with Nico Groh, SAP’s data center intelligence project owner, that addressed ongoing Intel and SAP engineering efforts to optimize SAP HANA* power and performance management on Intel® architecture.
I was at the conference filming man-on-the-street interviews with some of Intel’s visiting executives. I had a great conversation with Pauline Nist, general manager of Intel’s Enterprise Software Strategy, on the subject of Cloud: Public, Private, and Hybrid for the Enterprise, and the future of the in-memory data center. I also spoke to Curt Aubley about How Intel is Influencing the Ecosystem Data Center and how sensors and telemetry can provide real-time diagnostics on the health of your data center.
In the Intel booth, we also had the fun of launching our latest animation, Intel and SAP: The Perfect Team for Your Real-Time Business, a light-hearted look at the rich, long-standing alliance between SAP and Intel. In the video, the joint SAP HANA and Intel® Xeon® processor platform has the power of a space rocket—a bit of an exaggeration, perhaps. But SAP HANA is a mighty powerful in-memory database, designed from the ground up for Intel Xeon processors. Dozens of Intel engineers were involved in the development of SAP HANA, working directly with SAP to optimize SAP HANA for Intel architectures.
It’s not too late to catch some of the action from our booth! We filmed a number of our Intel Tech Talks, so click on these links to watch industry experts discussing the latest news and advances in the overlapping orbits of SAP and Intel.
- Learn about sensor data, the Internet of Things and SAP HANA in the cloud with Prakash Darji, SAP’s senior vice president and general manager of Platform-as-a-Service.
- Watch Chris Hallenbeck, global vice president of Database and Technology at SAP, as he discusses reinventing business through innovation and the breakthroughs made together by Intel and SAP that helped create SAP HANA.
- Learn about the big data opportunity and how SAP HANA running on Intel Xeon processors makes real-time analytics a reality with Jim Fister, lead strategist and director of Business Development for Intel’s Data Center Group.
- Watch Frank Ober, data center solution architect at Intel’s NV Memory Group, as he discusses the SAP HANA platform for real-time business and data center advances in parallelism, SSDs, and silicon photonics.
- Watch as Pete Nicoletti, Virtustream’s chief information security officer, and Dr. Peter Jaeger, senior vice president of Value Engineering at Virtustream, describe how running SAP HANA on Virtustream’s cloud platform is more secure with Intel Xeon processor-based technologies.
Follow me at @TimIntel and search #TechTim to get the latest on analytics and data center news and trends.
Let’s talk about Fellow Travelers at SC14 – companies that Intel is committed to collaborating with in the HPC community. In addition to the end-user demos in the corporate booth, Intel took the opportunity to highlight a few more companies in the channel booth and on the Fellow Traveler tour.
Intel is hosting three different Fellow Traveler tours on Discovery, Innovation, and Vision. A tour guide leads a small group of SC14 attendees through the show floor to visit eight company booths (with a few call-outs to additional Fellow Travelers along the way). Yes, you wear an audio headset to hear your tour guide. And yes, you follow a flag around the show floor. On our 30-minute journey around the floor, my Discovery tour visited (official stops are bolded):
- Supermicro: Green/power efficient supercomputer installation at the San Diego Supercomputer Center
- Cycle Computing: Simple and secure cloud HPC solutions
- ACE Computers: ACE builds customized HPC solutions, and customers include scientific research/national labs/large enterprises. The company’s systems handle everything from chemistry to auto racing and are powered by the Intel Xeon processor E5 v3. Fun fact: the company’s CEO is working on the next EPEAT standard for servers.
- Kitware: ParaView (co-developed by Los Alamos National Laboratory) is an open-source, multi-platform, extensible application designed for visualizing large data sets.
- NAG: A non-profit working on numerical analysis theory, NAG also takes on private customers and has worked with Intel for decades on tuning algorithms for modern architectures. NAG’s code library is an industry standard.
- Colfax: Offering training for parallel programming (over 1,000 trained so far).
- Iceotope: Liquid cooling experts whose solutions offer better performance per watt than hybrid liquid/air cooling.
- Huawei: Offering servers, clusters (they’re Intel Cluster Ready certified) and Xeon Phi coprocessor solutions.
- Obsidian Strategics: Showcasing a high-density Lustre installation.
- AEON: Offering fast and tailored Lustre storage solutions in a variety of industries including research, scientific computing and entertainment; they are currently architecting a Lustre storage system for the San Diego Supercomputer Center.
- NetApp: Their booth highlighted NetApp’s storage and data management solutions. A current real-world deployment includes 55PB of NetApp E-Series storage that provides over 1TB/sec to a Lustre file system.
- Rave Computer: The company showcased the RT1251 flagship workstation, featuring dual Intel Xeon processor E5-2600 series with up to 36 cores and up to 90MB of combined cache. It can also make use of the Intel Xeon Phi co-processor for 3D modeling, visualization, simulation, CAD, CFD, numerical analytics, computational chemistry, computational finance, and digital content creation.
- RAID Inc: Demo included a SAN for use in big data, running the Intel Enterprise Edition of Lustre with OpenZFS support. RAID’s systems accelerate time to results while lowering costs.
- SGI: Showcased the SGI ICE X supercomputer, the sixth generation in the product line and the most powerful distributed memory system on the market today. It is powered by the Intel Xeon processor E5 v3 and includes warm water cooling technology.
- NCAR: Answering the question: how do you refactor an entire climate code? NCAR, in collaboration with the University of Colorado at Boulder, is an Intel Parallel Computing Center aiming to develop the tools and knowledge needed to improve the performance of CESM, WRF, and MPAS on Intel Xeon processors and Intel Xeon Phi coprocessors.
Intel Booth – Fellow Traveler Tours depart from the front right counter
After turning in my headset, I decided to check out the Intel Channel Pavilion next to Intel’s corporate booth. The Channel Pavilion has multiple kiosks (so many that they switched halfway through the show), each showcasing a demo with Intel Xeon and/or Xeon Phi processors, and highlighting a number of products and technologies. Here’s a quick rundown:
- Aberdeen: Custom servers and storage featuring Intel Xeon processors
- Acme Micro: Solutions utilizing the Intel Xeon processor and Intel SSD PCIe cards
- Advanced Clustering Technologies: Clustered solutions in 2U of space
- AIC: Alternative storage hierarchy to achieve high bandwidth and low latency via Intel Xeon processors
- AMAX: Many core HPC solutions featuring Intel Xeon processor E5-2600 v3 and Intel Xeon Phi coprocessors
- ASA Computers: Einstein@Home uses an Intel Xeon processor based server to search for weak astrophysical signals from spinning neutron stars
- Atipa Technologies: Featuring servers, clustering solutions, workstations and parallel storage
- Ciara: The Orion HF 620-G3 featuring the Intel Xeon processor E5-2600 v3
- Colfax: Colfax Developer Training on efficient parallel programming for Xeon Phi coprocessors
- Exxact Corporation: Accelerating simulation code up to 3X with custom Intel Xeon Phi coprocessor solutions
- Koi Computers: Ultra Enterprise Class servers with the Intel Xeon processor E5-2600 v3 and a wide range of networking options
- Nor-Tech: Featuring a range of HPC clusters/configurations and integrated with Intel, ANSYS, Dassault, Simula, NICE and Altair
- One Stop Systems: The OSS 3U high density compute accelerator can utilize up to 16 Intel Xeon Phi coprocessors and connect to 1-4 servers
The Intel Channel Pavilion
Once completing the booth tours, I decided to head back to the Intel Parallel Computing Theater to listen to a few more presentations on how companies and organizations are putting these systems into action.
Joseph Lombardo, from the National Supercomputing Center for Energy and the Environment (NSCEE), stopped by the theater to talk about the new data center they’ve recently put into action, as well as their use of a data center from Switch Communications. The NSCEE faces several challenges: massive computing needs (storage and compute power), time-sensitive projects (those with governmental and environmental significance), and numerous, complex workloads. In its Alzheimer’s research, the NSCEE compares the genomes of Alzheimer’s patients with normal genomes. They worked with Altair and Intel on a system that reduced their runtime from 8 hours to 3 hours, while improving system manageability and extensibility.
Joseph Lombardo from the NSCEE
Then I listened in to Michael Klemm from Intel talking about offloading Python to the Intel Xeon Phi coprocessor. Python is a quick, high-productivity language (packages include IPython, NumPy/SciPy, and pandas) that can help compose scientific applications. Michael talked through the design principles for the pyMIC offload infrastructure: simple usage, a slim API, fast code, and keeping control in the programmer’s hands.
Michael Klemm from Intel
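As a rough illustration of what that offload model looks like from the Python side, here is a minimal sketch loosely based on pyMIC’s published examples. The kernel library name (libdgemm.so), the kernel symbol, and the matrix sizes are assumptions for illustration, and exact call names may vary between pyMIC releases; treat this as a sketch of the pattern rather than a verified recipe.

```python
import numpy as np
import pymic  # Intel's Python offload module for the Xeon Phi coprocessor

# Pick the first coprocessor and load a natively compiled kernel library
# (hypothetical name; it would contain the dgemm_kernel symbol used below).
device = pymic.devices[0]
library = device.load_library("libdgemm.so")
stream = device.get_default_stream()

m, n, k = 1024, 1024, 1024
a = np.random.random((m, k))
b = np.random.random((k, n))
c = np.zeros((m, n))

# Bind the host arrays to the device (copies the data across PCIe).
offl_a = stream.bind(a)
offl_b = stream.bind(b)
offl_c = stream.bind(c)

# Invoke the native kernel on the coprocessor, then copy the result back.
stream.invoke(library.dgemm_kernel, offl_a, offl_b, offl_c, m, n, k)
offl_c.update_host()
stream.sync()

print(c[0, 0])
```

The appeal of this approach is that the heavy lifting stays in a natively compiled kernel while the orchestration stays in plain Python, which fits the design goals Michael described.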
Wolfgang Gentzsch from UberCloud covered HPC for the masses via cloud computing. Currently, more than 90% of engineers’ and scientists’ in-house HPC is done on workstations and about 5% on servers. Less than 1% is done using HPC clouds, which represents a ripe opportunity if challenges like security/privacy/trust, control of data (where and how your data is running), software licensing, and the transfer of heavy data can be resolved. There are some hefty benefits – pay per use, easily scaling resources up or down, low risk with a specific cloud provider – that may start to entice more users shortly. UberCloud has 19 providers and 50 products currently in its marketplace.
Wolfgang Gentzsch from UberCloud
The Large Hadron Collider is probably tops on my list of places to see before I die, so I was excited to see Niko Neufeld from LHCb CERN talk about their data acquisition/storage challenge. I know, yet another big data problem. But the LHC generates one petabyte of data EVERY DAY. Niko talked through how they’re able to use some sophisticated filtering (via ASICs and FPGAs) to get that down to storing 30PB a year, but that’s still an enormous challenge. The team at CERN is interested in looking at the Intel Omni-Path Architecture to help them move data faster, and then at integrating Intel Xeon + FPGA alongside Intel Xeon and Intel Xeon Phi processors to help them shave down the amount of data stored even further.
Niko Neufeld from LHCb CERN
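To put those figures in perspective, here is a quick back-of-the-envelope calculation. This is my own arithmetic from the numbers quoted above, not figures from the talk; it shows the sustained ingest rate and the overall reduction factor the filtering pipeline has to deliver.

```python
# Back-of-the-envelope LHCb-style data rates (illustrative arithmetic only).
PB = 1e15  # bytes

raw_per_day = 1 * PB                      # ~1 PB of detector data every day
raw_per_year = raw_per_day * 365          # ~365 PB/year before filtering
stored_per_year = 30 * PB                 # ~30 PB/year actually kept

reduction_factor = raw_per_year / stored_per_year
sustained_ingest = raw_per_day / 86400    # bytes/second the front end must absorb

print(f"Reduction factor: ~{reduction_factor:.0f}x")
print(f"Sustained ingest: ~{sustained_ingest / 1e9:.1f} GB/s")
```

Roughly a 12x reduction, with the front end absorbing on the order of 11–12 GB/s around the clock.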
And finally, the PUCC held matches 4 and 5 today, the last of the initial matches and the first of the playoffs. In the last regular match, Taiji took on the EXAMEN and, in a stunning last-second “make” run, the EXAMEN took it by a score of 4763 to 2900. In the afternoon match, the Brilliant Dummies took on the Gaussian Elimination Squad (the defending champs). It was a hard-fought battle – on many of the questions, both teams had answered before the multiple-choice options were even shown to the audience. In the end, the Brilliant Dummies were able to eliminate the defending champions by a score of 5082 to 2082. Congratulations to the Brilliant Dummies; we’ll see you in the final on Thursday.
We’ll see the Brilliant Dummies in the PUCC finals on Thursday
Thursday, November 20, 2014
Dateline: New Orleans, LA, USA
This morning at 11:00AM (Central time, New Orleans, LA), the second semi-final match of the 2014 Parallel Universe Computing Challenge will take place at the Intel Parallel Theater (Booth 1315) as the Coding Illini team from NCSA and UIUC faces off against the EXAMEN from Europe. Coding Illini earned its spot in this semi-final match by beating the team from Latin America (SC3), and the EXAMEN earned their semi-final slot by beating team Taiji from China.
The winner of this morning’s semi-final match will go on to play the Brilliant Dummies from Korea in the final competition match this afternoon at 1:30PM, live on stage from Intel’s Parallel Universe Theater.
The teams are playing for the grand prize of $26,000 to be donated to a charitable organization of their choice.
Don’t miss the excitement:
- Match #5 is scheduled at 11:00AM
- The Final Match is scheduled at 1:30PM
Packed crowd watching the PUCC
5 Questions for Dr. Sandhya Pruthi, Medical Director for Patient Experience, Breast Diagnostic Clinic, Mayo Clinic Rochester
Clinicians are on the front lines when it comes to using healthcare technology. To get a doctor’s perspective on health IT, we caught up with Dr. Sandhya Pruthi, medical director for patient experience, breast diagnostic clinic, at Mayo Clinic Rochester, for her thoughts on telemedicine and the work she has been undertaking with remote patients in Alaska.
Intel: How are you involved in virtual care?
Pruthi: I have a very personal interest in virtual care. I have been providing telemedicine care to women in Anchorage, Alaska, right here from my telemedicine clinic in Rochester, Minnesota. I have referrals from providers in Anchorage who ask me to meet their patients using virtual telemedicine. We call it our virtual breast clinic, and we’ve been offering the service twice a month for the past three years.
Intel: What services do you provide through telemedicine?
Pruthi: We know that in some remote parts of the country, it’s hard to get access to experts. What I’ve been able to provide remotely is medical counseling for women who are considered high risk for breast cancer. I remotely counsel them on breast cancer prevention and answer questions about genetic testing for breast cancer when there is a very strong family history. The beauty is that I get to see them and they get to see me, rather than just writing out a note to their provider and saying, “Here’s what I would recommend that the patient do.”
Intel: How have patients and providers in Alaska responded to telemedicine?
Pruthi: We did a survey and asked patients about their experience and whether they felt that they received the care they were expecting when they came to a virtual clinic. The result was 100 percent satisfaction by the patients. We also surveyed the providers and asked if their needs were met through the referral process. The results were that providers said they were very pleased and would recommend the service again to their patients.
Intel: Where would you like to see telemedicine go next?
Pruthi: The next level that I would love to see is the ability to go to the remote villages in the state of Alaska, where people have an even harder time coming to a medical center. I’d also like to be able to have a pre-visit with patients who may need to come in for treatment so we can better coordinate their care before they arrive.
Intel: When it comes to telemedicine, what keeps you up at night?
Pruthi: Thinking about how we can improve the patient experience. I really feel that for a patient who is dealing with an illness, the medical experience should wow them. It should be worthwhile to the patient and it should follow them on their entire journey—when they make their appointment, when they meet with their physician, when they have tests done in the lab, when they undergo procedures. Every step plays a role in how they feel when they go home. That’s what we call patient-centered care.
Intel® RealSense™ Blog Series: Gesture Control, Drones, and More with Developers Martin Fortsch and Thomas Endres
Intel® RealSense™ technology makes it possible for our digital worlds to interact with our physical, organic worlds in meaningful ways. Many of the projects that developers are creating step across… Read more
The first phase of the Make it Wearable Finalist presentations are about to begin! The stage is being set, lighting is adjusted, the DJ is getting ready to spin tunes, and we have a luminary panel of judges ready to … Read more >
This post was written by Aysegul Ildeniz, Vice President of the New Devices Group and general manager of Strategy and Business Development at Intel Corporation. I have worked in the technology sector for more than a decade in senior leadership … Read more >
The post MakeHers: Engaging Girls and Women in Technology through Making, Creating and Inventing appeared first on CSR@Intel.
SC14: Intel Previews Differentiated Storage Services in the Enterprise Edition for Lustre* Software.
Michael Mesnier, our guest blogger, is a Principal Engineer in Intel Labs With the explosion of big data and cloud computing, the ability to store large amounts of data efficiently has become more important than ever. This is especially challenging in the … Read more >
The post SC14: Intel Previews Differentiated Storage Services in the Enterprise Edition for Lustre* Software. appeared first on Intel Labs.
Apparently there’s a whole world that exists beyond the SC14 showcase floor…the technical sessions. Intel staffers have been presenting papers (on Lattice Quantum Chromodynamics and Recycled Error Bits), participating in panels (HPC Productivity or Performance) and delivering workshops (covering OpenMP and OpenCL) over the past few days, with a plethora still to come.
To get a flavor for the sessions, I sat in on the ACM Gordon Bell finalist presentation: Petascale High Order Dynamic Rupture Earthquake Simulations on Heterogeneous Supercomputers. It’s one of five papers in the running for the Gordon Bell award and was presented at the conference by Michael Bader from TUM. The team included scientists from TUM, LMU Munich, Leibniz Supercomputing Center, TACC, National University of Defense Technology, and Intel. Their paper details the optimization of the seismic software SeisSol for Intel Xeon Phi coprocessor platforms, achieving impressive model complexity in simulating the propagation of seismic waves. The hope is that optimized software and supercomputing can be used to understand the wave movement of earthquakes, eventually anticipating real-world consequences to help adequately prepare for and minimize aftereffects. The Gordon Bell prize will be announced on Thursday, so good luck to the team!
Michael Bader from TUM
From there I headed back to the Intel booth to see how the demos are helping to solve additional real-world problems. First up was the GEOS-5/University of Tennessee team, which deployed a workstation with two Intel Xeon processors E5 v3 and two Intel Xeon Phi coprocessors to run the VisIt app for visual compute analysis and rendering. GEOS-5 simulates climate variability on a wide range of time scales, from near-term to multi-century, helping scientists comprehend atmospheric transport patterns that affect climate change. It’s a real climate model (on a workstation!) that could be used to predict something like the spread and concentration of radiation around the world.
Predicting Climate Change with GEOS-5
Next up was the Ayasdi demo on precision medicine – a data analytics platform running on the Intel Xeon processor E5 v3 and a cluster with Intel True Scale Fabric that looks for similarities in data rather than relying on specific queries as searches. The demo shows how the shape of data can be employed to find unknown insights in large and complex data sets, something like “usually three hours after this type of surgery there is a fluctuation in vitals across patients.” The goal is to combine new mathematical approaches (topological data analysis, or TDA) with big data to identify biomarkers, drug targets, and potential adverse effects to support more successful patient treatment.
Ayasdi Precision Medicine Demo
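As a very rough proxy for that “similarities instead of queries” idea, and emphatically not Ayasdi’s TDA pipeline, here is a small sketch that groups synthetic patient records purely by similarity, with no query specified up front. The feature values, the DBSCAN parameters, and the use of scikit-learn are all my own illustrative assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

# Synthetic patient feature vectors (e.g., post-surgery vitals summaries).
rng = np.random.default_rng(42)
patients = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(50, 4)),   # one recovery pattern
    rng.normal(loc=3.0, scale=1.0, size=(20, 4)),   # a distinct, unexpected group
])

# No query is specified up front; we simply look for groups of similar patients.
X = StandardScaler().fit_transform(patients)
labels = DBSCAN(eps=0.9, min_samples=5).fit_predict(X)

for label in sorted(set(labels)):
    name = "noise" if label == -1 else f"group {label}"
    print(name, (labels == label).sum(), "patients")
```

Any group that emerges without being asked for is a candidate insight to investigate, which is the spirit of the “shape of data” demo described above.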
Since I’m usually on a plane every couple of weeks, I was excited to talk to the Onera team about how they’re using the elsA simulation software to streamline aerospace engineering. The simulation capabilities of elsA enable reductions in ground-based and in-flight testing requirements. The Onera team optimized elsA to run in the highly scalable environment of an Intel Xeon and Xeon Phi processor-based cluster with Intel True Scale Fabric and SSDs, allowing for large-scale modeling with elsA.
Aerospace Design Demo from Onera
Up last, I headed over to the team at the Texas Advanced Computing Center to talk about their demo combining ray tracing (OSPRay) and computing power (Intel Xeon processor E5 v3) to run computational fluid dynamics simulations and assemble flow data from every pore in the rock in Florida’s Biscayne Bay. Understanding how the aquifer transports water and contaminants is critical to providing safe water resources, but eventually the researchers hope to apply the flow simulation to the human brain.
TACC Demo in Action
One of the areas in the Intel booth I’d yet to visit was the Community Hub, an area to socialize and collaborate on ideas that can help drive discoveries faster. Inside the Hub, Intel and various third parties are on hand to collaborate and discuss technology directions, best-known methods, future use cases, and more across a wide variety of technologies and topics. The hope is that attendees will create, improve, or expand their social networks among peers engaged in similar optimization and algorithm development.
One of the community discussions with the highest interest on Tuesday was led by Debra Goldfarb, Senior Director of Strategy and Pathfinding, Technical Computing at Intel. The Hub was packed for a session on encouraging Women in Science and Technology – the stats are pretty dismal, and Intel is committed to changing that. The group brainstormed reasons for the gap and how we can begin to address it. A couple of resources for those interested in the topic: www.intel.com/girlsintech and www.womeninhpc.org.uk. Intel also attended the “Women in HPC: Mentorship and Leadership” BOF and will participate in the “Women in HPC” panel on Friday.
Above and below: Women in Science and Technology Community Hub discussion led by Debra Goldfarb
Women in HPC BOF
Community Hub discussions coming up on Wednesday include Fortran & Vectorization, OpenMP, MKL, Data Intensive HPC, Life Sciences and HPC, and HPC and the Oil and Gas industry.
At the other end of the booth, the Intel Parallel Universe Theater was hopping all day. I checked out a presentation from Eldon Walker of the Lerner Research Institute at the Cleveland Clinic, who discussed their 1.2 petabyte mirrored storage system (data center and server room) and their 270 terabytes of Lustre storage, which enable DNA sequence analysis, finite element analysis, natural language processing, image processing and computational fluid dynamics. Dr. Eng Lim Goh from SGI presented the company’s energy-efficient supercomputers, innovative cooling systems, and SGI MineSet for machine learning. And Tim Cutts from the Wellcome Trust Sanger Institute made it through some audio and visual issues to present his topic on working with genomics and the Lustre file system, and how they solved a couple of tricky issues (a denial-of-service issue via samtools and performance issues with concurrent file access).
Eldon Walker, Lerner Research Institute
Dr. Eng Lim Goh, SGI
Tim Cutts, Wellcome Trust Sanger
And lastly, for those following along with the Intel Parallel Universe Computing Challenge – in match two, The Brilliant Dummies from Korea defeated the Linear Scalers from Argonne by a score of 5790 to 3588. And in match three, SC3 (Latin America) fell to the Coding Illini (NCSA and UIUC) with a score of 2359 to 5359, which means both the Brilliant Dummies and Coding Illini move on in the Challenge. Match 4 and 5 will be up on Wednesday. See you in booth 1315!
Intel® IoT Community Manager Stewart Christie (https://twitter.com/intel_stewart) recently winged his way to Nigeria for a series of hands-on workshops and trainings with a number of very promising… Read more
Liquid cooling for the Intel® Solid-State Drive Data Center P3700 Series for PCIe: why might we need it?
Usually when I read the words liquid cooling in the press, I start thinking of enthusiast systems with blue LEDs around them, made for gamers, for fun and for the way they look. That’s how it goes for me, until HPC is mentioned. That is then a co… Read more
Go to previous blog: Android Apps for the Intel Platform Learning Series: Introduction to Embedded Systems
In my earlier blog, I noted that the new book “Android* Application Development for the… Read more
If you look down at your workspace right now and analyze the way it has changed in the past few decades, you’ll likely be amazed by the contrast. Technology has given us the capacity to eliminate waste and optimize our workplaces for productivity, but it has also fundamentally changed the way we work. Fewer ties to a physical desk in a physical workspace have led to an upswing in the mobile workforce. According to “The State of Telework in the U.S.” — which is based on U.S. Census Bureau statistics — businesses saw a 61% increase in telecommuters between 2005 and 2009.
IT decision makers have witnessed this growth from the trenches, where they enable the business to grow through technological advancements. But there are several key questions IT leaders will face in the coming waves of virtualization…
- What type of work model should be used to manage knowledge workers?
- When workers are increasingly distributed globally at multiple physical locations, how do effective interpersonal relationships form and grow?
- How will technology and people considerations impact the locations where people come together?
- How can the office environment be configured to invoke optimum worker productivity?
- How will organizations source the best workers and cope with differing attitudes across a five-generation workforce?
Though there are a significant number of mobile workers today, the number is still small in comparison to what it will one day be. According to “The State of Telework in the U.S.,” 50 million U.S. employees hold jobs that are telework compatible, but only 2.9 million consider home their primary place of work, or about 2.3 percent of the workforce. In other words, the full impact of virtualization has yet to be realized.
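As a quick sanity check on those figures, here is my own arithmetic using only the numbers quoted above; it shows what they imply about the total workforce and how little of the telework-compatible job pool is actually worked from home.

```python
# Figures quoted from "The State of Telework in the U.S." (see above).
telework_compatible_jobs = 50_000_000
home_primary_workers = 2_900_000
share_of_workforce = 0.023  # 2.3 percent

implied_workforce = home_primary_workers / share_of_workforce
share_of_compatible = home_primary_workers / telework_compatible_jobs

print(f"Implied total workforce: ~{implied_workforce / 1e6:.0f} million")
print(f"Telework-compatible jobs worked primarily from home: {share_of_compatible:.1%}")
```

Roughly 6 percent of the jobs that could be done from home actually are, which is what leaves so much of virtualization’s impact still ahead.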
Some are dubious as to whether the workplace will continue to move in a virtualized direction. Rawn Shah, director and social business architect at Rising Edge, recently wrote on Forbes, “We are only starting to understand what the future of work looks like. In my view, the imagined idea of entirely virtual organizations is similar to how we used to think of the future as full of flying cars and colonies in space. Reality is much more invested in hybrid in-office plus remote scenarios. Physical space is still a strong element of work that we need to keep track of, and understand better to learn how we truly collaborate.”
According to Tim Hansen in his white paper “The Future of Knowledge Work,” there are already several trends influencing the current workplace that will directly impact virtualization of the enterprise in the future:
- Defining employees on the cusp of transformation
- Dynamic, agile team structures will become the norm
- The location of work will vary widely
- Smart systems will emerge and collaborate with humans
- A second wave of consumerization is coming via services
The questions IT leaders are asking now can be answered by isolating these already-present factors driving virtualization.
Our offices are changing rapidly — don’t let your employees suffer through legacy work models. Recognizing the change swirling around you will help you strategize for what’s coming on the horizon.
I’m starting a compilation of known Edison and Galileo issues with possible workarounds. Thought I’d share. Feel free to add yours to the comments.
FOR GALILEO: start here… Read more
Up until now, Intel has been releasing its Manycore Platform Software Stack (Intel® MPSS) on a quarterly cadence, with each release being supported for 1 year from the date it was issued.
Beginning… Read more