
RECENT BLOG POSTS

Intel Manageability Commander with Microsoft SCCM integration

I am excited to announce the release of Intel® Manageability Commander. Built from the widely used MeshCommander application, Intel® Manageability Commander makes it significantly easier to take advantage of the Intel® AMT out-of-band hardware management features provided on Intel® vPro™ platforms.

 

Intel® Manageability Commander can be installed both as a stand-alone application and on your Microsoft System Center Configuration Manager* (SCCM) server, where it integrates out-of-band management features into SCCM current branch 1511 and later.

 


 

Top features of Intel® Manageability Commander:

  • Discover, diagnose, and manage configured Intel® AMT PCs remotely
  • View and solve user PC and operating system issues via integrated KVM (Keyboard, Video, Mouse) remote control
  • Integrate with Microsoft SCCM current branch 1511 and later
    • When SCCM deployments are configured to wake systems, the Intel® Manageability Commander SCCM wake service performs a secure Intel® AMT power-on
    • Collections in SCCM can be manually powered on using the Intel® Manageability Commander SCCM console extension
    • Intel® Manageability Commander can be launched per system from the SCCM right-click context menu, giving access to all supported Intel® AMT features directly from SCCM
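Under the hood, Intel AMT power actions are WS-Management (WS-Man) calls. As a rough sketch of what a wake service's request might look like, assuming the standard DMTF CIM_PowerManagementService class (the helper name and target below are illustrative assumptions, not Intel Manageability Commander's actual code):

```python
# Hedged sketch only: builds the WS-Man SOAP body a wake service might POST
# to an AMT endpoint to request a power-on. In the DMTF CIM power-state
# enumeration, PowerState 2 means "Power On".
CIM_PMS = ("http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/"
           "CIM_PowerManagementService")

def build_power_on_body(managed_element_ref: str) -> str:
    """Return the SOAP body for RequestPowerStateChange(PowerState=2)."""
    return (
        f'<p:RequestPowerStateChange_INPUT xmlns:p="{CIM_PMS}">'
        "<p:PowerState>2</p:PowerState>"
        f"<p:ManagedElement>{managed_element_ref}</p:ManagedElement>"
        "</p:RequestPowerStateChange_INPUT>"
    )
```

A real client would wrap this body in a WS-Man envelope and send it to the AMT port with digest or Kerberos authentication.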

 


 


 

Additional Intel® Manageability Commander features include:

  • View and modify Intel® AMT network settings. If the PC has a wireless interface, users can add multiple wireless profiles so Intel® AMT can be reached over wireless
  • Configure Intel® AMT security features such as System Defense, the Audit Log, and Access Control Lists
  • Display Intel® AMT events and filter them by keyword
  • Enable or disable Intel® AMT features on a configured system directly from Intel® Manageability Commander's user interface. Any text shown in blue can be clicked to change that setting

     

    To add Intel® AMT discovery and configuration task sequences, you can also install the Intel® SCS Add-on for Microsoft System Center Configuration Manager.

     

    For more information on Intel® Manageability Commander, go to http://www.intel.com/content/www/us/en/support/software/manageability-products/intel-manageability-commander.html

     

    Legal Notices

© 2016 Intel Corporation. Intel, Intel® vPro™ and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer.

    Read more >

    Delivering Full Stack Video Analytics with Viscovery and Quanta


Online video is a huge part of our connected world today. It's a medium that we use daily to share, communicate, learn, and of course, be entertained, and there seems to be no limit to its growth. Facebook is a great example: they are now getting an amazing 8 billion video views a day, more than double what they saw 6 months earlier. According to a recent Cisco report, video traffic will be 80% of all consumer Internet traffic in 2019, up from 64% in 2014, and mobile video will increase 11X in the next 5 years. In China alone, the online video market is expected to reach more than $17B by 2018, according to iResearch.

     

As a video and tech enthusiast, I find these developments hugely exciting, but this relentless deluge of video does present some very real challenges (and opportunities). Infrastructure challenges are obvious, given the need for increased storage and compute to process, transcode, and manipulate videos for end-user consumption. However, there is another, less straightforward, problem to overcome. How can viewers best navigate the flood of online video content? And how can content providers and advertisers efficiently and intelligently provide video content that is relevant (and useful) to consumers?

     

This is certainly a daunting task, and one that we as humans are ill-equipped to handle. Frankly, it is no wonder that many companies are investigating the possibility of developing intelligent systems that leverage machine learning and deep neural networks (DNNs) to help automate these tasks.

     

With this in mind, Intel, Quanta, and Viscovery came together to build a full-stack solution to this problem that leverages a deep learning-based application from Viscovery, the power and scalability of Intel® Xeon® processors, and Quanta's efficient platform designs. We created a turnkey solution specifically designed to solve the video content recognition problem. At Intel, we recognize that it is critical to take a holistic view when tackling these types of challenges and to enable solutions that include everything from the silicon and server hardware to the libraries and open source components, all the way to the end application. And of course, all of these ingredients must be optimized for cloud-scale deployments. Below is a high-level view of the solution stack:

     

In order to tackle these problems at scale, libraries like the Intel® Math Kernel Library and optimized open source components like Caffe* are tightly integrated into Viscovery's deep learning-based video content recognition engine to take full advantage of the performance of Intel® processors. The result is a solution that runs seamlessly across Intel® Xeon® and Intel® Xeon Phi™ processor-based platforms, providing the capability to train DNNs quickly and deploy at scale at an efficient total cost of ownership. Below is an example of the types of content that the Viscovery application uses to train its DNNs. As you can see, they've moved significantly beyond simple image and object classification:

     

     

                                                                                                                                    

Modality | Target
-------- | -----------------
Facial   | Human/Animal
Image    | Brand/Logo
Text     | OCR in the wild
Audio    | Speech/Music
Motion   | Action/Video2Text
Object   | Brand/Model
Scene    | Location/Event
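Behind recognition targets like these, the hot path is dense linear algebra. The naive pure-Python multiply below only illustrates the kernel that, in the stack described above, would be dispatched to an optimized GEMM routine in Intel MKL through Caffe rather than run as interpreted loops:

```python
# The O(n^3) dense multiply that dominates DNN layers. This naive version is
# for illustration only; MKL replaces it with blocked, vectorized, threaded
# GEMM code tuned for Intel processors.
def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]
```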

     

Of course, the real proof of success is in the usage of this platform by end customers. Leaders in video content delivery such as LeEco, Youku, 8sian, Alimama (part of Alibaba), and many others have already deployed solutions based on this stack.

     

If you're at Computex this month, you can check out this video discovery service in action, running on our Intel Xeon and Intel Xeon Phi processors, at Quanta's booth and during Intel's keynote speech by Diane Bryant. And with any luck, as video content recognition capabilities continue to advance, you'll never find yourself watching irrelevant or unwanted video content again.

    Read more >

    Visual Cloud & Remote Workstations in the Enterprise

    From video streaming to remote workstations, more content is being delivered via cloud computing every day. Data centers everywhere are dealing with a flood of video traffic, and many enterprises are also dealing with the computing demands of complex design applications and massive data sets that are used by employees and contractors scattered around the world.

     


    For design and content creation companies to remain competitive in today’s global business climate, they need to employ technologies that help technical employees and contractors collaborate to solve complex and interconnected design problems. Their designers, sales people, customers, contractors and others involved in the design process need access to design information and tools – anywhere and anytime, and the enterprise needs to safeguard its valuable intellectual property. The enterprise is therefore faced with finding ways to securely share data models and content over a widely distributed workforce without breaking the bank.

     

    Enabling the Global Design Workforce

     

IT organizations at design firms face three collaboration challenges: securing access to complex and sensitive design models and content, quickly and easily extending access to a highly distributed workforce and ecosystem, and providing an excellent user experience to that workforce.

     

    However, there’s a simple solution to these challenges. Cloud-hosted remote workstations allow engineers to use demanding 3D graphics applications from virtually anywhere in the world, with all data and applications hosted on servers based on Intel Xeon processors in a secure data center. Employees can safely collaborate with external contractors while avoiding sending designs from computer-to-computer and protecting enterprise intellectual property.

    Remote users can also work from the same data set, with no need for high-volume data transfers. This allows the enterprise to deliver fast and clear graphics running on a dense, cost-effective infrastructure.

     

    New Architectures and Ecosystems

     

    To support the demands of remote workstations, new solutions and partnerships are absolutely necessary.

     

    The Intel Xeon processor E3-1500 v5 product family offers hardware-enhanced integrated graphics capabilities that are optimized for remote application delivery workloads. These integrated graphics solutions cost-effectively accelerate video and enable secure, remote delivery of applications by combining the performance of Intel Xeon processors with integrated Iris Pro graphics.

     

    Intel-powered remote workstation solutions allow technical professionals and content creators to have greater access to key applications on their computing device(s) while securely collaborating with colleagues. For IT, these solutions provide centralized management, more provisioning control, and easier patching and updating of applications.

     

    The newly announced Intel Xeon processor E3-1500 v5 includes Intel Graphics Virtualization Technology (Intel GVT) to address multiple customer use cases. These include direct assignment of a given GPU’s capabilities to a single user; the ability to allow multiple local or remote virtual machines to share access to a GPU; and the ability to share a GPU’s resources through multiple concurrent users in a time-slice scenario.

     

    Productivity and Progress: Central to the Enterprise

     

    Organizations can increase the security of enterprise information by centrally hosting critical applications and data and avoiding delivering valuable visual content to contractors. The enterprise can also avoid provisioning powerful workstations to users who need infrequent access to graphic-intensive applications, such as salespeople who only occasionally need to provide design input.

     

    Intel works with a partner ecosystem to enhance the delivery and minimize the complexity of high-performance remote workstations within the enterprise. For example, the enterprise can turn to VMware Horizon 7 to deliver virtual or hosted desktops or Citrix XenApp and XenDesktop to deliver secure virtual apps and desktops.

     

    Adopting secure remote workstations allows the enterprise to deliver once out-of-reach workstation performance and visual content to designers, engineers, media creators, and other professionals. This enables major leaps in collaboration and productivity, further empowering each employee to drive progress for the enterprise.

    Read more >

    Rich Graphics for Virtualized Remote Applications — Powered by Citrix and Intel

    By James Hsu, Director of Technical Marketing at Citrix

     

One of the great experiences in our industry is to see products from different vendors, hardware and software, come together to solve real customer problems. That's what's been happening with Citrix and Intel for the last two years as we worked together to apply Intel Graphics Virtualization Technology (Intel GVT) to the Citrix XenServer virtualization platform. The result of that effort is Citrix XenServer 7.0, which we are announcing at Citrix Synergy 2016 in Las Vegas. It's the first commercial hypervisor product to leverage Intel GVT-g, Intel's virtual graphics processing unit technology that can power multiple VMs with one physical GPU. As well as announcing XenServer 7.0, Citrix is also announcing XenDesktop 7.9, offering industry-leading remote graphics delivery supported by Intel. Let me tell you what that does for users running graphics-intensive virtualized desktop applications, and then I'll tell you how we used Intel GVT-g to do it.

     


     

Citrix XenApp and XenDesktop let you deliver virtualized desktops and applications hosted on a server to remote workstations. Many desktop applications, like computer-aided design and manufacturing apps and even accelerated Microsoft Office, require the high-performance graphics capabilities of a graphics processing unit (GPU). In XenDesktop 7.9, Citrix also added support for Intel Iris Pro graphics in the HDX 3D Pro remote display protocol.

     

Earlier versions of XenServer enabled Intel GPU capabilities on virtualized desktops in a pass-through mode that allocated the GPU to a single workstation. Now, XenServer 7.0 expands our customers' options by using Intel GVT-g to virtualize access to the Intel Iris Pro graphics GPU integrated onto select Intel Xeon processor E3 family products, allowing it to be shared by as many as seven virtual workstations.

     

With Intel GVT-g, each virtual desktop machine has its own copy of Intel's native graphics driver, and the hypervisor directly assigns the full GPU resource to each virtual machine on a time-sliced basis. During its time slice, each virtual machine gets a dedicated GPU, but the overall effect is that a number of virtual machines share a single GPU. It's an ideal solution in applications where high-end graphics are required but shared access is sufficient to meet needs. Using the Intel Xeon processor E3 family, small single-socket servers can pack a big graphics punch. It's an efficient, compact design that enables a new scale-out approach to virtual application delivery. And it's a cost-effective alternative to high-end workstations and servers with add-on GPU cards.
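The time-slicing idea can be pictured with a toy round-robin scheduler. This is an invented simplification, not the actual GVT-g implementation: the point is only that each VM owns the whole GPU during its slice.

```python
# Toy model of GVT-g-style GPU sharing: the scheduler hands the entire GPU
# to one virtual machine per time slice, round-robin, so each of up to seven
# VMs sees a dedicated GPU during its turn.
def gpu_schedule(vms, num_slices):
    """Return the VM that owns the GPU in each consecutive time slice."""
    return [vms[i % len(vms)] for i in range(num_slices)]
```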

     

The advantages go beyond just cost efficiency. Providing shared access by remote users to server-based data and applications enhances worker productivity and improves collaboration. It also tightens security and enables compliance, because critical intellectual property, financial data, and customer information stays in the data center rather than drifting out to individual workstations and mobile devices. And security is further enhanced, because Intel Xeon processors contain Intel Trusted Execution Technology (Intel TXT) to let you create trusted computing pools. Intel TXT attests to the integrity and trust of the platform, assures nothing has been tampered with, and verifies that the platform is running the authorized versions of firmware and software when booting up.

     

At Citrix, our goal is to provide our customers with the computing experience they need to innovate and be productive, on a range of platforms and usage models, and in a way that enhances the security of their business. And we want to give them the flexibility to access the computing resources they need anywhere, any time, and from any device. Our collaboration with Intel has let us deliver on that promise, and it lets us provide even more options for platform choice and deployment configurations. It's been a great experience for us, and now it will enable a great experience for our mutual customers.

    Read more >

    Advantages of Telehealth: Better Patient Care

     

    The shift from fee-for-service to fee-for-performance is changing the conversation around patient care. Reducing readmissions is one benchmark for analyzing the quality of care, and more discussion is happening around bringing telehealth into the mix to improve this metric.

     

Traditionally, when patients leave the clinical setting, interaction between the care team and the patient decreases. With telehealth and remote patient monitoring, technology allows the provider team to remain in contact with the patient to follow up on regimens and make sure instructions are followed. The result can be a shift in outcomes for the better.

     

    To learn more about telehealth, we sat down with Fadesola Adetosoye from Dell Healthcare Services, who says telehealth allows patients to overcome challenges, like transportation issues, to obtain better primary care and stay in touch with clinicians following discharge.

     

Watch the video above and let us know: What questions do you have about telehealth? Is your organization using a telehealth strategy?

    Read more >

    Unleash the Power: Knights Landing Developer Platforms are here!

Developers, your HPC Ninja Platform is here! HPC developers worldwide have begun to participate in the Developer Access Program (DAP), a bootstrap effort for early access to code development and optimization on the next-generation Intel Xeon Phi processor. A key part of the program is the Ninja Developer Platform.


     

Several supercomputing-class systems are currently powered by the Intel Xeon Phi processor (code name Knights Landing, or KNL), a powerful many-core, highly parallel processor. KNL delivers massive thread parallelism, data parallelism, and memory bandwidth with improved single-thread performance and Intel Xeon processor binary compatibility in a standard CPU form factor.
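The programming pattern KNL rewards is splitting data-parallel work across many cores. Real KNL codes would express this with OpenMP or Intel TBB in C/C++/Fortran; this stdlib Python sketch only shows the shape of the decomposition:

```python
# Illustrative sketch of a data-parallel reduction: split the input into
# chunks, reduce each chunk on its own worker, then combine the partial
# results. On KNL the workers would be hardware threads across many cores.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    chunk = max(1, len(data) // workers)
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, parts))
```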

     

In anticipation of KNL's general availability, we, along with our partners, are bringing to market a developer access program, which provides an ideal platform for code developers. Colfax, a valued Intel partner, is handling the program, which is already underway.

     

    The Ninja Platform

     

Think of the Ninja Developer Platform as a stand-alone box that has a single bootable next-generation Intel Xeon Phi processor. Developers can start kicking the tires and getting a feel for the processor's capabilities. They can begin developing the highly parallel codes needed to optimize existing and new applications.

     

As part of Intel's Developer Access Program, the Ninja platform has everything you need in the way of hardware, software, tools, education, and support. It comes fully configured with memory, local storage, and CentOS 7.2, and also includes a one-year license for Intel Parallel Studio XE tools and libraries. You can get to work immediately, whether you're a developer experienced with previous generations of Intel Xeon Phi coprocessors or new to the Intel Xeon Phi processor family.

     

Colfax has pulled out all the stops in designing the education and support resources, including white papers, webinars, and how-to and optimization guides. Currently underway are a series of KNL webinars and hands-on workshops; see details at http://dap.xeonphi.com/#trg

     

Here is a quick look at the two platform options being offered by the Developer Access Program. Both are customizable to meet your application needs.

     

                 


Pedestal Platform
  • Developer Edition of Intel Xeon Phi processor: 16GB MCDRAM, 6 channels of DDR4, AVX-512
  • MEMORY: 6x DIMM slots
  • EXPANSION: 2x PCIe 3.0 x16 (unavailable with KNL-F), 1x PCIe 3.0 x4 (in a x8 mechanical slot)
  • LAN: 2x Intel i350 Gigabit Ethernet
  • STORAGE: 8x SATA ports, 2x SATADOM support
  • POWER SUPPLY: 1x 750W 80 Plus Gold
  • CentOS 7.2
  • Intel Parallel Studio XE Professional Edition Named User 1-year license

Rack Platform
  • 2U 4x Hot-Swap Nodes
  • Developer Edition of Intel Xeon Phi processor: 16GB MCDRAM, 6 channels of DDR4, AVX-512
  • MEMORY: 6x DIMM slots / Node
  • EXPANSION: Riser 1: 1x PCIe 3.0 x16, Riser 2: 1x PCIe 3.0 x20 (x16 or x4) / Node
  • LAN: 2x Intel i210 Gigabit Ethernet / Node
  • STORAGE: 12x 3.5″ Hot-Swap Drives
  • POWER SUPPLY: 2x 2130W Common Redundant 80 Plus Platinum
  • CentOS 7.2
  • Intel Parallel Studio XE Cluster Edition Named User 1-year license

     

Given the richness of the technology and the tools being offered, along with the training and support resources, developers should find the process of transitioning to the latest Intel Xeon Phi processor greatly accelerated.

     

The Ninja Development Platform is particularly well suited to meet the needs of code developers in such disciplines as academia, engineering, physics, big data analytics, modeling and simulation, visualization, and a wide variety of scientific applications.

     

The platform will cost ~$5,000 USD for the single-node pedestal server, with additional costs for customization. On the horizon is our effort to take this program global with Colfax and partners. Stay tuned for details in my next blog.

     

You can pre-order the Ninja Developer Platform now at http://www.xeonphideveloper.com.

    Read more >

    Enabling Anywhere, Anytime Design Collaboration with Intel Graphics Virtualization Technology

Graphics virtualization and design collaboration took a step forward this week with the announcement of support for Intel Graphics Virtualization Technology-g (Intel® GVT-g) on the Citrix XenServer* platform.

     

Intel GVT-g, running on the current generation of graphics-enabled Intel Xeon processor E3 family products and future generations of Intel Xeon® processors with integrated graphics capabilities, will enable up to seven Citrix users to share a single GPU without significant performance penalties. This new support for Intel GVT-g in the Citrix virtualization environment was unveiled this week at the Citrix Synergy conference in Las Vegas.

     

A little bit of background on the technology: with Intel GVT-g, a virtual GPU instance is maintained for each virtual machine, with a share of performance-critical resources directly assigned to each VM. Running a native graphics driver inside a VM, without hypervisor intervention in performance-critical paths, optimizes the end-user experience in terms of features, performance, and sharing capabilities.

     

All of this means that multiple users who need to work with and share design files can now collaborate more easily on the XenServer integrated virtualization platform, while gaining the economies that come with sharing a single system and benefiting from the security of working from a trusted compute pool enabled by Intel Trusted Execution Technology (Intel® TXT).

     

Intel GVT-g is an ideal solution for users who need access to GPU resources to work with graphically oriented applications but don't require a dedicated GPU system. These users might be anyone from sales reps and product managers to engineers and component designers. With Intel GVT-g on the Citrix virtualization platform, each user has access to separate OSs and apps while sharing a single processor, a cost-effective solution that increases platform flexibility.

     

Behind this story is close collaboration among Intel, Citrix, and the Xen open source community to develop and refine a software-based approach to virtualization in an Intel GPU and XenServer environment. It took a lot of people working together to get us to this point.

     

    And now we’ve arrived at our destination. With the  combination of Intel GVT-g, Intel  Xeon processor-based servers with Intel Iris Pro Graphics, and Citrix  XenServer, anywhere, anytime design collaboration just a got a lot easier.

For a closer look at Intel GVT-g, including a technical demo, visit our Intel Graphics Virtualization Technology site or visit our booth #870 at Citrix Synergy 2016.

    Read more >

    Making New Server-Virtualization Capabilities a Reality

    One of the most rewarding aspects of my work at Intel is seeing the new capabilities built in to Intel silicon that are then brought to life on an ISV partner’s product. It is this synergy between Intel and partner technologies where I see the industry and customers really benefit.

     

Two of the newer examples of this kind of synergy are made possible with Citrix XenServer 7.0: Supervisor Mode Access Prevention (SMAP) and Page Modification Logging (PML). Both capabilities are built in to the Intel Xeon processor E5 v4 family, but they can only benefit customers when a server-virtualization platform is engineered to use them. Citrix XenServer 7.0 is one of the first server-virtualization platforms to do that with SMAP and PML.

     

    Enhancing Security with Supervisor Mode Access Prevention (SMAP)

     

SMAP is not new in and of itself; Intel introduced SMAP for Linux on 3rd generation Xeon processors. SMAP is new to virtualization, though. Intel added SMAP code to the Xen hypervisor in the Xen Project, Citrix then worked with that code in Xen, and XenServer 7.0 makes SMAP a reality for server virtualization.

     


Figure 1: SMAP prevents the hypervisor from accessing the guests' memory space other than when needed for a specific function

     

    SMAP helps prevent malware from diverting operating-system access to malware-controlled user data, which helps enhance security in virtualized server environments. SMAP aligns with the Intel and Citrix partnership where Intel and Citrix regularly collaborate to help make a seamless, secure mobile-workspace experience a reality.
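The rule SMAP enforces can be pictured with a toy access check. This is a simplified model for illustration only: the real CPU gates this with the SMAP bit in CR4 and the EFLAGS.AC flag toggled by the stac/clac instructions.

```python
# Toy model of the SMAP rule: with SMAP on, supervisor-mode code touching a
# user-accessible page faults unless it has explicitly opted in (modeled
# here by ac_flag, standing in for EFLAGS.AC set via stac).
def access_allowed(mode, page_is_user, smap_on, ac_flag=False):
    if mode == "supervisor" and page_is_user and smap_on and not ac_flag:
        return False  # the real CPU would raise a page fault here
    return True
```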

     

    Improving Performance with Page Modification Logging (PML)

     

    PML improves performance during live migrations between virtual server hosts. As with SMAP, PML capabilities are built in to the Intel Xeon processor E5 v4 family, and XenServer 7.0 is one of the first server-virtualization platforms to actually enable PML in a virtualized server environment.

     


Figure 2: With PML, CPU cycles previously used to track guest memory-page writes during live migration are available for guest use instead
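The win can be pictured with a toy model of one pre-copy round of live migration: the hypervisor needs the set of pages the guest dirtied. Without PML that means scanning every page's dirty bit; with PML the CPU has already logged the writes. This is a simplified sketch, not the actual Xen code.

```python
# Two ways to find the pages to resend in a pre-copy round.
def dirty_pages_scan(dirty_bits):
    """Without PML: full scan, cost grows with the total number of guest pages."""
    return [page for page, dirty in enumerate(dirty_bits) if dirty]

def dirty_pages_pml(pml_log):
    """With PML: read the hardware log, cost grows only with guest writes."""
    return sorted(set(pml_log))
```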

     

    Read More

     

    I haven’t gone into detail on SMAP or PML or how they work. Instead, I invite you to read about them and how they add to the already strong XenServer virtualization platform and Intel Xeon processor E5 family in the Intel and Citrix solution brief, “New Capabilities with Citrix XenServer and the Intel Xeon Processor E5 v4 Family.” I also invite you to follow me and my growing #TechTim community on Twitter: @TimIntel.

    Read more >

    Intel Inside and Everywhere at Synergy16

By Steve Sieron, Senior Alliance Marketing Manager at Citrix

     

     

Intel will be highly visible next week at Synergy as a Platinum Sponsor. They'll be featuring a number of new solutions that showcase the broad technical, product, and marketing partnership with Citrix across networking, cloud, security, and graphics virtualization. There'll also be an array of innovative Intel-based endpoint devices running XenApp and XenDesktop across Windows 10, Linux, and Chrome OS.

     

You won't want to miss SYN121 on Wednesday, May 25 from 4:30-5:15pm PDT in Murano 3204 for "Mobilize your Design Workforce: Delivering Graphical Applications on Both Private and Public Clouds." This informative panel, hosted by Jim Blakley, Intel GM of Visual Cloud Computing, will feature graphics industry experts, including Thomas Poppelgaard, Jason Dacanay from Gensler, Adam Jull from IMSCAD, and Citrix's own "Mr. HDX," Derek Thorslund.

     

    Be sure to take advantage of Intel’s Ask the Experts Bar and daily tech talks, where you can network with a variety of industry experts. The tech talks will feature customers and industry experts along with Intel and Citrix product owners. Intel health care implementations will also be featured in customer presentations at the Citrix Booth Theatre from both LifeSpan and Allegro Pediatrics.

     

    Visit these Interactive Demos and More in Intel Booth #870

     

Enhancing NetScaler Security and Performance with Intel Inside. Showcasing performance scaling and new security enhancements in the Intel® Xeon® processor-based NetScaler MPX and SDX product families.

     

    Intel® Solid State Drives (SSD) Enable a Secure Client. New endpoint security, storage technologies and capabilities with Citrix core product solutions.

     

Scaling XenDesktop with Atlantis USX and Intel SSD. Featuring Atlantis USX as a storage layer with Intel SSDs for XenDesktop, offering a robust performance architecture and high density with lower implementation costs and ongoing maintenance OPEX compared to traditional VDI solutions.

     

    Intel® Graphics Virtualization on Citrix (Intel® GVT). Learn about the new Intel Xeon Processor E3 family with Intel® Iris™ Pro Graphics in the cloud and new graphics virtualization technologies and solutions powered by Citrix from leading OEM partners. Interact with ISV-certified rich and brilliant 3D apps on the Intel remote cloud and learn how integrated graphics offer a compelling alternative to add-in graphics cards. The technologies highlighted will include Intel GVT-d – direct deployment of Intel processor graphics running 3D apps and media as well as Intel GVT-g – shared deployment in a cloud-based environment, hosted remotely in a data center running Citrix on latest-gen Intel Xeon processor servers.

     

    Intel Ecosystem Enables Citrix Across Synergy16

     

Of course, the broader Intel ecosystem will be on full display at Synergy, including the latest HP Moonshot m710 Series and Cisco M-Series offerings. These tools bring unmatched levels of price, performance and density in delivering graphics and rich apps to a wide range of professional users requiring access to apps with ever-increasing graphics capabilities. There will also be a broad array of Intel Xeon-based NetScalers running in the IBM SoftLayer cloud and across booths and learning labs throughout the event. Explore exciting Intel-based storage solutions on Citrix with new offerings from partners such as Nutanix, Pure Storage and Atlantis. As always, Intel end points will be ubiquitous throughout Synergy and featured in many sponsor pavilions, including HPE, Google, Dell and Samsung.

     

Beyond being a technology leader and strategic partner, Intel will be supplying Intel Arduino boards for the Simply Serve program at Synergy, promoting STEM programs for Title 1 middle school students. A big thanks to Intel on behalf of both Citrix and the Southern Nevada United Way!

     

    Citrix is pleased to welcome Intel to Synergy 2016. We encourage all attendees to stop by Booth #870 to meet the Intel team, watch customer presentations at the Intel Theatre and interact with innovative technology demos. Don’t forget to pull up your Synergy Mobile App to mark your calendar for SYN121, the Industry Expert Graphics Panel on Wed May 25 at 4:30pm in Murano 3204.

    Read more >

    5 Questions for Mark Caulfield, Chief Scientist, Genomics England

Mark Caulfield, FMedSci, is chief scientist and a board member at Genomics England, an organization that provides investment and leadership to increase genomic testing research and awareness. Caulfield is also the director of the William Harvey Research Institute and was elected to the Academy of Medical Sciences in 2008. His particular areas of research are cardiovascular genomics and translational cardiovascular research and pharmacology. We recently sat down with him to discuss genomic sequencing as well as insight into a current research project.

     

    Intel: What is the most exciting project you’re working on right now?

     

    Caulfield: The 100,000 Genomes Project is a healthcare transformation program that reads the entire DNA code using whole genome sequencing. That’s 3.3 billion letters that make you the individual you are. It gives insight into what talents you have as well as what makes you susceptible to disease. My research is focused on infectious disease, cancer and rare inherited diseases. Technology can bring answers that are usable in the health system now across our 13 centers.

     

    When studying rare disease, the optimal unit is a mother, father and an affected offspring. The reason is that both parents allow the researcher to filter out rare variations that occur in the genetic code that are unrelated to the disease, focusing in on a precise group. This project will result in more specific diagnosis for patients, a better understanding of disease, biological insights which may pave the way for new therapies and a better understanding of the journey of patients with cancer, rare disease and infection.

     

    Intel: How does this project benefit patients?

     

    Caulfield: By building a picture of the entirety of the genome or as much as we can read today, which is about 97.6 percent of your genome, we have a more comprehensive picture and a far greater chance of deriving healthcare benefits for patients. Cancer is essentially a disease of disordered genome. With genomic sequencing, we can gain insights into what drove the tumor to occur in the first place, what drives its relapse, what drives its spread and other outcomes. Most importantly, we can understand what drives response to therapy. We already have good examples of where cancer genotyping is making a real difference to therapy for patients.

     

    Intel: What is the biggest hurdle?

     

    Caulfield: Informed consent is essential to the future application of the 100,000 Genomes Project. It’s very hard to guarantee that you can absolutely secure data. I think it’s the responsibility of all medical professionals like myself in this age to be upfront about the risks to data access. Most patients understand these risks. We try to keep patient data as secure as is reasonably possible within present technological bounds.

     

    Intel: What is crucial to the success of genomic sequencing?

     

    Caulfield: We need big data partners and people who know how to analyze a large amount of data. We also need commercial partners that will allow us to get new medicines to patients as quickly as possible. That partnership, if articulated properly, is well received by people. Once we have this established, we can make strides in gaining and keeping public and patient trust, which is crucial to the success of genomic sequencing.

     

    If you want public trust, you must fully inform patients about the plan, ensure their medical professionals understand that plan, and bring patients into the conversation. This allows the patients and the public to shape your work. Sometimes in medicine, we become a little remote from what the patient wants when, in actuality, this is their money. It should be their program, not mine.

     

    Intel: What goal should researchers focus on?

     

    Caulfield: With this large amount of data comes the need to process it as quickly as possible in order to provide helpful results for both the patient and care team. Intel’s All in One Day initiative is an important goal because it accelerates the time from when a person actually enrolls in such a program to receiving a diagnostic answer.

     

    The goal is to make the turnaround as fast as possible. For example, if a patient has cancer, that person may have an operation where the cancer is removed. The patient would then need to heal. If chemotherapy were needed, it would be important to start it as quickly as possible. We have to use the best technology available so we can shrink the time from enrollment to answer.

    Read more >

    All In One Day by 2020 – A Progress Check

     

    All In One Day by 2020 – the phrase encompasses our real ambition here at Intel to empower researchers to give clinicians the information they need to deliver a targeted treatment plan for patients in just one 24-hour period. I wanted to provide you with some insight into where we are today and what’s driving forward the journey to All In One Day by 2020.

     

    Genomics Code Optimization

     

    We have been working for several years with industry-leading experts and with commercial and open source authors of key genomic codes on code optimization, to ensure that genome processing runs as fast as possible on Intel®-based systems and clusters. The result is a significant improvement in the speed of key genomic programs, which will help get sequencing and processing down to minutes. For example:

     

    • Intel has sped up the pairHMM kernel, a key piece of the HaplotypeCaller in GATK, to be 970x faster, yielding an overall 1.8x improvement in pipeline performance;
    • File compression for genomics files, e.g. BAM and SAM files, has been accelerated by over 4x;
    • Python has been accelerated using Intel’s Math Kernel Library (MKL), producing a 15x speedup on a 16-core Haswell CPU;
    • Finally, the enhanced MKL, in conjunction with the Data Analytics Acceleration Library (DAAL), has enabled DAAL to be 100x faster than R for k-means clustering and 35x faster than Weka on Apriori.
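    The gap between the 970x kernel speedup and the 1.8x end-to-end gain above is an instance of Amdahl's law: the overall benefit is capped by the fraction of runtime the kernel occupies. A minimal sketch, where the ~44.5% runtime fraction is a hypothetical value chosen to reproduce the reported numbers, not a figure from this post:

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of total runtime
    is accelerated by a factor of s (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / s)

# Hypothetical: if the pairHMM kernel accounted for ~44.5% of pipeline
# runtime, a 970x kernel speedup yields roughly the reported 1.8x overall.
print(round(amdahl_speedup(0.445, 970), 2))  # -> 1.8
```

    The takeaway is that once one kernel is near-eliminated from the runtime, further pipeline gains must come from accelerating the remaining stages, which is why the compression and Python optimizations above matter.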

     

    You can find out more about Intel’s work in code optimization at our dedicated Optimized Genomics Code webpage.

     

    Scalability for Success

     

    As we see an explosion in the volume of available data, the ability to scale a high performance computing system becomes ever more critical to accelerating success. We have put forth the Intel® Scalable System Framework to guide the market on the optimal construction of an HPC solution that is multi-purpose, expandable and scalable.

     

    Combining the Scalable System Framework with optimized life sciences codes has resulted in a new, more flexible, scalable, and performant architecture. This reduces the need for purpose-built systems and instead offers an architecture that can span a variety of diverse workloads while offering increased performance.

     

    Another key element of an architecture is the balance between three key factors: compute, storage, and fabric. Today we see the fruits of our work coming to life, for example, in a brilliant collaboration between TGen, Dell and Intel, which optimized TGen’s RNA-Seq pipeline from 7 days to under 4 hours. TGen is successfully operating FDA-approved clinical trials, balancing research with the clinical treatment of pediatric oncology patients.

     

    The intersection of our code optimization efforts and our SSF effort has yielded two new products for genomics, too: one from Dell and another from Qiagen.

     

    From a week to a day

     

    It’s useful, I think, to see just how far we’ve come in the last four years as we look ahead to 2020. In 2012 it took a week to perform the informatics on a whole human genome in a cloud environment, going from the raw sequence data to an annotated result. Today, the informatics time has decreased to just one day for whole genomes.

     

    With the Dell and Qiagen reference architectures, which are based on optimized code and the Intel® Scalable System Framework, a throughput-based solution has been created. This means that, when fully loaded, these base systems will perform the informatics on ~50 whole genomes per day.

     

    However, it is important to note that the genomes processed on these systems still take ~24 hours each to run; they are simply processed in a highly parallel manner. With a staggered start time of ~30 minutes between samples, a completed genome is produced approximately every 30 minutes. On the sequencing side, Illumina can process a 30x whole human genome in 27 hours using its “rapid-run mode”.

     

    So, in 2016, we can sequence a whole genome and do the informatics processing in just over 2 days (51 hours consisting of 27 hours of sequencing + 24 hours of informatics time), that’s just ~1 day longer than our ambition of All In One Day by 2020.
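    The arithmetic above can be checked with a short sketch. All of the figures (27 hours of sequencing, 24 hours of informatics, 30-minute stagger) come from this post; the function itself is just illustrative:

```python
def pipeline_timing(seq_hours, info_hours, stagger_hours):
    """End-to-end hours per genome, and steady-state throughput for a
    staggered pipeline that finishes one genome per stagger interval."""
    total_per_genome = seq_hours + info_hours   # wall clock for one sample
    genomes_per_day = 24.0 / stagger_hours      # one result per stagger slot
    return total_per_genome, genomes_per_day

total, per_day = pipeline_timing(seq_hours=27, info_hours=24, stagger_hours=0.5)
print(total)    # 51 hours end to end for any single genome
print(per_day)  # 48 genomes/day at steady state, close to the ~50 cited
```

    The key point is that latency (51 hours per genome) and throughput (one genome roughly every 30 minutes) are independent: staggering improves the second without shortening the first.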

     

    Three final points to keep in mind:

     

    1. There are steps in the All In One Day process that are outside of the sequencing and the informatics, such as the doctor’s visit, the sample preparation for sequencing, the genome interpretation and the dissemination of results to the patient. These steps will add time beyond the 51 hours above.
    2. The reference architectures are highly scalable, meaning a larger system can do more genomes per day: 4 times the nodes produce 4 times the throughput.
    3. There are enhancements still to be made. For example, streaming the output from the sequencer to the informatics cluster, so the informatics can start before the sequencing is finished, will further compress the total time toward our All In One Day goal.

     

    I’m confident our ambitions will be realized.

     

    Read more >

    Telemedicine Trends in Latin America

    Telemedicine is gaining increased attention worldwide as a solution for improving access to care, improving quality of care, and lowering costs.

     

    Much of Latin America faces a major challenge that could in part be addressed with telemedicine: a shortage of providers, and large populations living in rural areas where access to physicians, particularly specialists, is lacking.

     

    From my multiple visits to Latin America over the past two years, it is clear that, while most countries in the region have used telemedicine to varying extents for many years, scalability remains a major goal.

     

    Governments across Latin America are generally strong advocates of telemedicine, and are investing in the networks and infrastructure that will support this technology.

     

    Below I highlight ways in which countries throughout the region are using or intend to use telemedicine, and what trends we might observe in the years ahead.

     

    Brazil

    In Brazil, telemedicine today is used strictly for provider-to-provider consultation, as physicians are not legally allowed to consult with patients over videoconference.

     

    Telemedicine has been largely driven by the need to provide care virtually between specialists in urban centers to patients in remote areas, due to a lack of specialists in the rural areas.

     

    The Brazilian government has long supported the use of telemedicine to provide better access and treatment to remote areas. Since 2006, it has facilitated two public initiatives, the Brazilian National Telehealth Network Program (launched by the MOH) and the RUTE Telemedicine University Network (launched by the Ministry of Science, Technology, and Innovation), both of which serve to deploy telemedicine across Brazil.

     

    One of the first major initiatives started in 2006 in Parintins, a city of 100,000 located in the middle of the Amazon. With no roads to or from the city, the goal was to use telemedicine to enable communication between physicians in Parintins and specialists in Sao Paulo. Parintins partnered with private technology companies, including Intel, to build the necessary infrastructure (e.g., WiMAX network). This telemedicine program continues to operate today, and has informed other telemedicine efforts including Brazil’s national telehealth program, Telessaude (http://www.telessaudebrasil.org.br/).

     

    Another major initiative in Brazil is to bring intensive care unit (ICU) care to rural areas. The Brazilian MOH initiated tele-ICU programs so that now many hospitals in different regions are connected to rural parts of the country. These tele-ICUs reduce the need to transport patients into a city for health conditions such as heart attacks, strokes, and sepsis. Physicians in urban areas are able to use PTZ cameras to visually inspect the patient, and collect and interpret vital signs in real-time. Cerner, in partnership with Brazilian companies Intensicare and IMFtec, has provided the technology and software for most of these virtual ICUs.

     

    Mexico, Chile, Peru, and Argentina

    In Mexico, the social security network provides healthcare to formal sector workers. The network is currently working with companies such as Lumed Health (http://www.lumedhealth.com/) to expand telemedicine capabilities. In addition, telemedicine is being used between the U.S. and Mexico, with health systems such as the Mayo Clinic and Massachusetts General conducting consultations with physicians in Mexico.

     

    In Chile, the Ministry of Health has implemented a “Digital Health Strategy.” Its primary goal is likewise to address provider shortages and improve access to care in rural areas. There are currently several telemedicine projects and proofs of concept (POCs) underway in Chile. AccuHealth (https://www.accuhealth.cl/), for example, is a Chilean company that provides tele-monitoring services to bring home care to patients who suffer from chronic conditions. The company plans to expand to Mexico and Colombia in the near term.

     

    In Peru, the government is spearheading efforts to build a fiber optics network across the entire country (www.proinversion.gob.pe/RedDorsal/). This infrastructure will be used to better support telemedicine services.

    In Argentina, the government has worked with the MOH and the Ministry of Federal Planning, Public Investment and Services to promote telemedicine. This collaboration has culminated in the CyberHealth Project, which is focusing on the installation of fiber optics and upgrading hospitals to allow for videoconferencing. It aims to connect 325 healthcare institutions across the nation to enable remote consultations and sharing of expertise.

     

    The Future of Telemedicine in Latin America

    Telemedicine is being increasingly recognized as a solution to achieve more with less. In Latin America, it has great potential to address the fact that providers and health care resources are not distributed equally among the urban and rural populations.

     

    The future of telemedicine in the region is promising. Governments are investing in and taking active roles in digitizing their health systems (e.g., implementation of electronic medical records, improving interoperability) along with building the infrastructure required to support telemedicine. The Pan American Health Organization (PAHO) has convened a meeting of the MOH leaders from several Latin American countries to discuss strategic plans for e-Health across the region. This collaboration, where protocols, guidelines, and best practices can be shared, will be increasingly important.

     

    Intel Health & Life Sciences looks forward to continuing its partnerships with public and private entities across Latin America to continue these important efforts.

    Read more >

    Nurses Week 2016: Technology To Make Your Job Easier

    International Nurses Day is a time to say Thank You Nurses. Thank you for your hard work, your compassion and the endless care you give to patients. It’s this unwavering focus on patient care that we must keep in mind when developing and implementing technology for nurses, both in the hospital and in the community. The most valuable technology we can give to nurses is that which is almost invisible to, yet improves, their workflow, simplifies complex tasks and enables them to deliver even better care. In essence, technology must make the job of a nurse easier. I want to take today, International Nurses Day, to highlight a couple of technologies which have the potential to deliver on all of the above.

     

    Nursing goes Digital

    I know from experience that the best decisions are made when a nurse has the most accurate and up-to-date information on a patient’s condition. And when that accurate information can be gathered and accessed in an intuitive and more natural interaction using technology it’s a win-win for nurses and patients.

     

    I’m excited by the potential offered by Intel’s RealSense 3D camera which can be found in a range of devices such as 2-in-1s, the likes of which are already being used by nurses to record vital signs and access EMRs. For example, imagine being able to accurately track all 22 joints of a hand to assist with post-operative treatment following hand surgery.

     

    For community nurses, mobility is key. Holding the most up-to-date information when visiting patients in the home ensures mistakes are kept to a minimum and all parties involved in the care of the patient, from community nurses to specialist clinician, can make evidence-based decisions. 2-in-1 devices help nurses to stay focused on the patient rather than reams of paperwork, while also helping patients better understand their condition and improving buy-in to treatment plans. The real benefits are in simplifying and speeding up those processes which ensures nurses deliver the best possible care.

     

    Big Data for Nurses

    When we think of Big Data it is all too easy to think just about genomics, but there are benefits which can clearly help nurses identify serious illness more quickly too. Take Cerner, for example, which has developed an algorithm that monitors vital information fed in real time from the EMR. The data is analysed in real time, identifying with a high degree of accuracy whether a patient is going to develop, or already has, sepsis.

     

    Clearly, given the speedy nature with which drugs must be administered, this Big Data solution is helping nurses to simply save lives by identifying at-risk patients and getting them the treatment they so desperately need. Watch this video to find out more about how Intel and Cloudera allow Cerner to provide a technology platform which has helped save more than 2,700 lives.

     

    Intelligent Care

    The rise of the Internet of Things in the healthcare sector is seeing an increasing use of sensors to help simplify tasks for nurses. For example, if sensors can monitor not only a patient’s vital signs but also track movement, such as frequency of toilet use, they not only free up a nurse’s time for other tasks but also begin to build an archive of data which can be used at both the patient and population level.

     

    In China, the Intel Edison-based uSleepCare intelligent bed is able to record a patient’s vital signs, such as rate and depth of breathing, heart rate and heart rate variability (HRV), without the need for nurse intervention. There are positive implications for patient safety too, as sensors can track movements and identify when patients might fall out of bed, alerting nurses to the need for attention.

    And when I think of moving towards a model of distributed care, this type of intelligent medical device can help the sick and elderly be cared for in the home too. WiFi and, in the future, 5G technologies, combined with sensors can help deliver the right patient information to the right nurse at the right time.

     

    Investing in the Future

    Having highlighted two examples of how technology can help nurses do an even better job for patients I think it’s important to recognise that we must also support nurses in using new technology. Solutions must be intuitive and seamlessly fit into existing workflows, but I recognise that training is needed. And training on new technologies should happen right from the start of nursing school and be a fundamental part of ongoing professional development.

     

    While International Nurses Day is, of course, a time to reflect and say Thank You Nurses, I’m also excited about the future.

     

    Read more >

    Tweet Chat Review: The Growth of Connected Care

    Last week I had the honor of moderating the weekly #HITsm (Health IT social media) chat on Twitter. This regular discussion about health IT issues is a wonderful forum for addressing what steps need to be taken to move healthcare technology forward on a number of fronts.

     

    The topic of my chat was The Growth of Connected Care, and focused on defining the terms, sharing trends and identifying successful characteristics of a connected care program. I enjoyed the banter and the great questions that came my way during the chat and learned quite a bit about what the climate is like for overcoming obstacles to adopting connected care.  You can see the transcript of the entire chat here.

     

    To recap the conversation, below are the questions that were asked during the chat and my brief answers.


     

    Connected care is a broad term – what does it mean?

    Generally, connected care applies to leveraging technology to connect patients, providers, and caregivers. Increasingly, this is happening in real-time. Connected care extends care outside of the traditional hospital setting and moves healthcare from episodic events to more continuous care that is tailored specifically for the patient.

     

    What market trends are driving connected care?

    A few trends are driving connected care forward. First, new Internet of Things (IoT) technologies, from devices to the datacenter, are making connected care possible for patients. Think about wearables and the massive amount of data that can be acquired to influence care; this is the cornerstone of connected care.

     

    Second, payment reform and payment models are changing from fee-for-service to value-based. As payment models change, patient retention becomes increasingly important for clinicians. This is the consumerization of healthcare, where the patient takes charge of their own health and the care is on a regular, on-going basis.

     

    Finally, healthcare technology investments in digital platforms have opened the opportunity to create and consume new data streams in real-time.

     

    What technologies are enabling connected care?

    For starters, big data technologies, both software and hardware, are enabling us to work with the high volume, variety, and velocity of connected care data. Wearables and sensors are also evolving, and newer devices are delivering more value in improved form factors.

     

    What are characteristics of a successful connected care program?

    Successful connected care programs have clear clinical and business goals, know the problems that need to be solved, have measurable outcomes and clear value propositions, and feature scalable architecture for data ingestion, storage, analysis, and visualization.

     

    Programs must be patient-centric and look holistically at both patient and care team touch points throughout the continuum of care. They also need a strategy for transforming data into actionable/comprehensible insights delivered at the right time, to the right person. This is often overlooked – insights for providers or patient instructions get lost in poor visualization. This is why the UI/UX aspect of connected care is so critical.

     

    Where is connected care headed, and what are some things to watch for?

    Expect larger connected care programs with employers, payers, and care providers to reach consumers and tie engagement to financial outcomes. It will be interesting to see how employees respond and how the employer/employee relationship is re-written to include health-related activities.

     

    Population health programs will go through a three-step evolution of understanding, predicting, and then preventing (UPP). Step one is simply understanding what data is available and identifying and filling gaps. The second stage of program maturity involves using data to begin predicting outcomes for specific populations. This stage involves iterating through models to improve specificity, both for target outcomes and for population boundaries.

     

    The third stage is using the predictions to implement real programs that prevent target outcomes from occurring. This stage will partially rely on human-centered care delivery, but it will also push the boundaries of virtual medicine in response to access and delivery constraints that inevitably arise.

     

    On the downside, large data breaches look inevitable in the future as more devices allow for more attack vectors. The big unknown is how this will impact the industry and consumers.

     

    What are some of the short- and long-term obstacles to adoption of connected care programs?

    The business models for connected care are still evolving. New payment and reimbursement pathways are needed to create growth. Sustainable, long-term patient engagement is a challenge. Hopefully, healthcare will continue to look to industries that have pioneered techniques for data-driven high-touch consumer engagement (consumer goods, SaaS internet companies, etc.) and apply those learnings to developing new strategies to engage patients. Finally, federal and state regulation must continue to evolve because connected care operates across traditional geographic boundaries and models of care delivery.

    Read more >

    Forging an Open Path for SDI Stack Innovation

    Intel was founded on a deep commitment to innovation, especially open, standards-driven innovation, which results in the kind of acceleration only seen when whole ecosystems come together to deliver solutions. Today’s investment in CoreOS reflects this commitment, as data centers face an inflection point with the delivery of software defined infrastructure (SDI). As we have at many times in our industry’s history, we are all piecing together many technology alternatives to form an open, standard path for SDI stack delivery. At Intel, we understand the value that OpenStack has brought to the delivery of IaaS, but we also see the additive value of the containerized architectures found in many of the largest cloud providers today. We view these two approaches as complementary, and their integration and adoption are critical to the broad proliferation of SDI.


    This is why we announced a technology collaboration with CoreOS and Mirantis earlier this year to integrate OpenStack and Kubernetes, enabling OpenStack to run as a containerized pod within a Kubernetes environment. Inherent in this collaboration is a strong commitment across all parties to contribute the results directly upstream so that both communities may benefit. The collaboration brings the broad workload support and vendor capabilities of OpenStack together with the application lifecycle management and automation of Kubernetes in a single solution, providing an efficient path to solving many of the issues gating OpenStack proliferation today: stack complexity and convoluted upgrade paths. Best of all, this work is being driven in a fully open source environment, reducing any risk of vendor lock-in.

     

    Because software development and innovation like this is a critical part of Intel’s Cloud for All initiative, we tasked our best SDI engineers to work with CoreOS to deliver the first-ever live demonstration of OpenStack running as a service within Kubernetes at the OpenStack Summit. To put this into perspective, our joint engineers were able to deliver a unified “Stackanetes” configuration approximately three weeks after our initial collaboration was announced. Three weeks is a short timeframe to deliver such a major demo, and it highlights the power of using the right tools together. To say that this captured the attention of the OpenStack community would be an understatement, and we expect to integrate this workflow into the Foundation’s priorities moving forward.

     

    The next natural step in our advancement of the Kubernetes ecosystem was our investment in CoreOS, which we announced today. CoreOS was founded on the principle of delivering GIFEE, or “Google Infrastructure for Everyone Else,” and its Tectonic solution integrates Kubernetes with the CoreOS Linux platform, making Tectonic an easy-to-consume hyperscale SDI stack. We’ve been working with CoreOS for more than a year on software optimization efforts focused on optimizing Tectonic for underlying Intel Architecture features. Our collaboration on Kubernetes reflects a common viewpoint on the evolution of SDI software to support a wide range of cloud workloads that are efficient, open and highly scalable. We’re pleased with this latest chapter in our collaboration and look forward to delivering more of our vision in the months ahead.

    Read more >

    Nurses Week 2016: When Will Avatars Join Nurses Week Celebrations?

    Nurses Week is a great opportunity to celebrate all of the fantastic work we do for patients. I often find myself pausing at this time of the year to appreciate just how different – and in most cases better – our working practices, processes and outcomes are compared to just 10 or so years ago. Technology has been a great enabler in improving the workflow of nurses today, but I wanted to share some thoughts on the future of nursing in this blog and how we might be welcoming avatars and the world of virtual reality to Nurses Week celebrations in the near future.

     

    Better Training, Overcoming Global Shortage of Nurses

    There are challenges ahead for the nursing community, driven by many of the same factors affecting the entire healthcare ecosystem, ranging from an increasingly ageing population to pressure on budgets. When I met with nurses from across Europe in Brussels earlier this year at Microsoft in Health’s Empowering Health event, two key themes really came to the fore:

    • First, there was a call for improved training for nurses to help them better understand and benefit from technologies such as 2 in 1 tablets and advanced Electronic Medical Record systems;
    • Second, there was a discussion around what technologies might help overcome the potential of a global shortage of nurses in the future. A 2015 World Health Organisation report stated that ‘a fundamental mismatch exists between supply and demand in both the global and national health labour markets, and this is likely to increase due to prevalent demographic, epidemiologic and macroeconomic trends.’


    Looking ahead, I see a real opportunity to integrate avatars and virtual reality into the nursing environment, which will not only train students to be better nurses but also deliver better patient care with improved workflows at the bedside.

     

    Virtual Reality To Deliver Safe, Effective Teaching

    Training is a fundamental part of a nurse’s development, and that rings true for both those in nursing school and more experienced nurses learning new technologies and procedures. Virtual reality technology can play a major role in helping nurses to better deal with a range of scenarios and technologies.

     

    For example, if I want to teach a nurse how to perform a specific procedure using virtual reality, I’m able to present the trainee with an avatar on a screen that could be any combination of gender, height, weight and medical condition. And whilst the procedure is being undertaken I’m then able to trigger a wide range of responses from the avatar patient to help the nurse learn how to deal with different scenarios – all in a safe and controlled manner that can be monitored and assessed for post-session feedback.

     

    Similarly, if a nurse is required to understand how to use a new piece of technology to improve their workflow, such as working with an upgrade to an EMR system on a 2 in 1 tablet, virtual reality can help too by simulating these new systems. In a virtual setting nurses are not only able to familiarise themselves with new processes but can provide feedback on issues around workflow before they are launched into a live patient environment.

     

    If I think of how my training was delivered at nursing school, there was plenty of ‘chalk and board’-style teaching and a lot of time spent in a classroom with limited resources such as manikins. Today, any number of student nurses can learn remotely using virtual reality and avatar patients, reinforcing knowledge and improving workflows on a range of mobile devices. This is particularly useful for countries where educators are in short supply but nursing demand is high.


    Avatars Ask ‘How Are You Feeling Today?’

    A recurring question in my mind is how we can make better use of the fantastic expertise and knowledge of today’s nurses to continue to deliver great care to patients. In the face of a shortage of nurses, we should explore how avatars on a bedside screen or 2 in 1 device might take away the burden of some of the more routine daily tasks, such as asking patients if they have any unusual symptoms.

     

    Patient answers could be fed back into the EMR which would trigger either further questions from the avatar or, in more serious cases, an alert for intervention by a human nurse.
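    As a sketch of how such an escalation rule might work, here is a minimal, hypothetical triage function. The symptom keywords, severity scores, and threshold are illustrative assumptions, not a clinical protocol or a real EMR API:

```python
# Hypothetical triage logic for avatar-collected patient answers.
# Keywords, severity scores, and the escalation threshold are
# illustrative assumptions, not a clinical guideline.

SYMPTOM_SEVERITY = {
    "chest pain": 3,
    "shortness of breath": 3,
    "dizziness": 2,
    "nausea": 1,
}

ALERT_THRESHOLD = 3  # at or above this, escalate to a human nurse


def triage(answer: str) -> str:
    """Decide the avatar's next step from a patient's free-text answer."""
    severity = max(
        (score for symptom, score in SYMPTOM_SEVERITY.items()
         if symptom in answer.lower()),
        default=0,
    )
    if severity >= ALERT_THRESHOLD:
        return "alert_nurse"    # serious: intervention by a human nurse
    if severity > 0:
        return "ask_follow_up"  # mild symptom: avatar asks further questions
    return "log_to_emr"         # nothing unusual: record the answer


print(triage("I have some chest pain today"))  # alert_nurse
print(triage("a little nausea after lunch"))   # ask_follow_up
print(triage("none, feeling fine"))            # log_to_emr
```

    In practice the severity model would come from clinicians, but the shape of the workflow is the same: most answers are simply recorded, and only the serious ones interrupt a human nurse.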

    When I talk to my peers within healthcare, there are some obvious and real concerns about the lack of emotion delivered by avatars. The rise of chat bots has made for interesting news recently, and I see this kind of artificial intelligence, combined with a great avatar experience, delivering something approaching human emotion, such as sympathy, for routine tasks. We should also recognise that an avatar can potentially speak an unlimited number of languages, helping all patients get a better understanding of their condition.

     

    As nurses I hope our community embraces discussion and ideas around the use of virtual reality and avatars – I’ve talked through just a couple of scenarios where I see improvements to training and care delivery but I’d be interested to hear how you think they could help you do your job better. And perhaps one day in the near future, avatars will be celebrating Nurses Week with us too.

     


    How DCIM tools improve PUE, reduce costs and help mitigate your carbon footprint

    According to the Natural Resources Defense Council (NRDC), data center electricity consumption is projected to increase to approximately 140 billion kilowatt-hours annually by 2020, the equivalent annual output of 50 power plants. The cost to American businesses? A tidy $13 billion annually.

     

    Make no mistake, many enterprises and data center providers are striving to reduce their carbon footprint. Switch recently announced that, as of the first of this year, all of its SUPERNAP data centers are powered by 100% renewable energy through its new solar facilities operating in Nevada. Across the pond, Apple is developing two new 100% renewable energy data centers in Ireland and Denmark. And Facebook just launched a massive new data center in Luleå, a town in a remote corner of northern Sweden, that requires 70% less mechanical cooling capacity than the average data center because of the cool climate.

     

    But what if your data center is located in Houston or Rio de Janeiro? Fortunately, there is a viable way to improve Power Usage Effectiveness (PUE) and reduce the costs associated with cooling and power while mitigating a facility’s carbon footprint. Data Center Infrastructure Management (DCIM) refers to software and technology products that converge IT and building-facilities functions, giving engineers and administrators a holistic view of a data center’s performance so that energy, equipment and floor space are used as efficiently as possible.
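    Concretely, PUE is the ratio of total facility energy to the energy delivered to IT equipment; an ideal facility scores 1.0. A quick illustration, with made-up meter readings:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh


# Illustrative meter readings: 1,800 kWh drawn by the whole facility,
# of which 1,200 kWh reached the IT equipment.
print(pue(1800, 1200))  # 1.5 -- half a kWh of overhead per kWh of IT load
```

    Lowering PUE means a larger share of every kilowatt-hour actually reaches the servers rather than being lost to cooling and power distribution overhead.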


    In large data centers, where electricity billing makes up a large portion of operating costs, the insight these software platforms provide into power and thermal management accrues directly to an organization’s bottom line.


    To take appropriate action, data center managers need accurate intel on power consumption, thermals, airflow and utilization. One wouldn’t think this is the realm of MS Excel spreadsheets and Stanley tape measures. Yet a recent study by Intel DCM and Redshift Research found that four in 10 data center managers in the 200 facilities surveyed in the U.S. and the UK still rely on these Dark Age tools to initiate expansion or layout changes.


    The good news is that DCIM provides increased levels of automated control, empowering data center managers with timely information to manage capacity planning and allocations, as well as cooling efficiency. By deploying thermal-management middleware, for example, improvements in airflow management can reduce energy consumption by 40%. Data center managers can also drive a stake through the problem of zombie servers by consolidating servers, reducing energy consumption by 10% to 40%.


    Modern data centers maintain a stable operating environment for servers by implementing stringent temperature controls, which, paradoxically, also makes it possible to apply various energy-saving and eco-friendly measures in a centralized manner. A DCIM system that offers simulations integrating real-time monitoring information to allow for continuous improvement and validation of cooling strategy and air handling choices can have a direct impact on the bottom line.


    Somewhat counter-intuitively, raising internal temperatures in data centers can save upwards of $100K annually per degree without degrading service levels or reducing hardware lifespan. And by deploying various other innovative cooling technologies, facilities can expend up to 95% less energy.


    Utilizing DCIM real-time data analysis tools, along with maintaining an active server refresh schedule, can effectively combat runaway energy consumption. Processor improvements, combined with feature-rich, intuitive dashboards that recognize cooling imbalances and identify underutilized servers, can reveal a profligate energy consumer right under an administrator’s nose.


    Replacing an older server with today’s advanced technology, and using DCIM to identify underutilized systems, can reduce energy needs by 30%. Over a server’s four-year life expectancy, that saves up to $480 per machine. The figure might not seem significant on its own, but it adds up quickly across thousands of servers.
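    As a back-of-the-envelope illustration of how that per-server figure scales (the fleet sizes here are assumptions; the $480-per-server, four-year figure comes from the paragraph above):

```python
def refresh_savings(num_servers: int, savings_per_server: float = 480.0) -> float:
    """Total savings over a four-year server refresh cycle,
    using the article's ~$480-per-server estimate."""
    return num_servers * savings_per_server


print(refresh_savings(1))     # 480.0 -- a single server, modest
print(refresh_savings(5000))  # 2400000.0 -- a 5,000-server fleet
```

    At fleet scale, the modest per-server number becomes a seven-figure line item, which is where DCIM’s ability to find refresh candidates pays off.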


    Intel at SAP SAPPHIRE NOW: It’s Time to Plan Your Schedule

    SAP SAPPHIRE NOW and the ASUG (Americas’ SAP Users’ Group) Annual Conference is coming to Orlando on May 17–19, and Intel will be there too, adding to the festivities with keynote addresses, tech talks, demos and plenty of presentations from our OEM and technology partners.


    SAPPHIRE is SAP’s premier annual event, with an anticipated 20,000 people in attendance and an additional 80,000 tuning in online. SAPPHIRE attracts CIOs and line-of-business managers who want to meet with SAP experts and industry partners to learn the latest in Internet of Things (IoT) technologies, in-memory computing, and data center and cloud strategies.

     

    The conference starts off with a bang on Tuesday morning, May 17, when Intel CEO Brian Krzanich joins SAP chief executive Bill McDermott on stage for a discussion of the latest innovations across the industry. Be sure to be in your seat as BK shares information about advances to the joint Intel-SAP IoT platform and our next-generation Intel processors. In addition, you won’t want to miss news of innovations in memory technologies that promise both to boost performance and cut the cost of memory for cloud and data center platforms. BK will also share highlights of Intel IT’s successful conversion to SAP HANA* to run Intel’s internal financial, enterprise resource planning (ERP) and supply chain management (SCM) systems (for more information on this proof-of-concept deployment, view the solution brief).

     

    Intel is also showcasing two demos of the joint SAP and Intel IoT platform in Intel Booth #625. Find out more:


    • The Connected Worker: Industrial Wearables for Worker Safety (demo in Intel Booth #625; also highlighted in mini-session PS602, 10:30–10:50am, Tues. May 17, presented by Jeff Jackson). Learn about the Intel and SAP reference platform for industrial safety and compliance, and experience how wearables can help detect unsafe conditions and create automated alerts in real time for both workers and supervisors.


    • Real-Time Inventory Management (demo in Intel Booth #625). Learn how to delight customers and minimize out-of-stock issues in this retail jeans store scenario, which features SAP Merchandising* applications and the Intel® Retail Sensor Platform sending real-time alerts for inventory management and cycle count automation.

     

    A Rich History of  Collaboration


    Intel and SAP have worked together closely for over two decades, with SAP software specifically engineered to take advantage of the performance, reliability and security built into Intel processors. Today, the rich co-engineering relationship is stronger than ever, with new joint IoT solutions that extend analytical processing and security from the data center to the network edge, and breakthrough business solutions that draw on the power of SAP HANA*, the revolutionary in-memory database that’s optimized to run on Intel® Xeon® processors. SAP HANA and Intel processors stand behind new solutions such as SAP Business ByDesign*, a cloud-based ERP service that brings powerful business management tools to the device of your choice, and the SAP Digital Boardroom*, which draws on the power of SAP HANA and Intel processors to give C-suite executives real-time visualization of business performance and reporting across the entire enterprise.

     

    Intel and SAP’s collaboration doesn’t end there: We also share a rich ecosystem of OEM partners who offer over 600 computing appliances that feature SAP software pre-integrated onto Intel-based platforms for simple, out-of-the-box functionality. A dozen of our OEM partners, including VMware, HP, Dell, Cisco, and SGI, will be at SAP SAPPHIRE. Stop by their booths to check out their latest innovations, and join us at Intel booth #625, where we will host over 30 tech talks by Intel partner experts.

     

    Of particular interest is a series of in-booth presentations by Dr. Matthieu-P. Schapranow, program manager in E-Health and Life Sciences at the Hasso Plattner Institute. Schapranow’s presentations (12:30pm on Tues. May 17 and Wed. May 18, and 11:30am on Thurs. May 19) address the topic of analyzing genomes using in-memory databases and the advent of real-time analysis of medical big data.

     

    Intel is once again a proud sponsor of the SAP HANA® Innovation Awards, which recognize customers and enterprises who have found innovative ways to use SAP HANA to drive business value. Kudos to each of the over 150 entrants who competed this year, and special congratulations in advance to the five finalists to be named in a special ceremony on Monday evening.

     

    Stop by the Intel booth #625 to say hello, and watch me shoot man-on-the-street videos for viewing on Periscope.

     

    Follow me @TimIntel  and #TechTim for the latest news on Intel and SAP.


    Improve Your Healthcare IQ

    Healthcare is undergoing massive changes. As a result, many of those who work in the healthcare industry are finding that they need new skills and knowledge. A great way to acquire them is to participate in a massive open online course (MOOC).

     

    The term MOOC was first used by Dave Cormier of the University of Prince Edward Island in 2008. MOOCs are online courses that are built for open and collaborative participation. MOOC courses are often delivered as a pre-recorded series of video lectures with corresponding assignments to test knowledge. Courses are typically self-paced which makes it easy to schedule around work and family commitments. Mobile applications are available for some platforms which makes learning on the go easy (and much more productive than gaming!). Several MOOC platforms have implemented paid certification programs that focus on in-demand skill sets like data science. In addition to the education, most MOOC platforms provide community forums which can be great ways to connect with other individuals around the world with a shared passion for the subject matter.

     

    A variety of healthcare-related courses are available on various MOOC platforms. A useful tool for selecting courses across platforms is MOOC List. Three of the more common platforms that come up in healthcare-related searches are Coursera, edX, and FutureLearn. Each of these platforms has a slightly different focus in terms of course content and geographic distribution of educators. Coursera seems to have the most diverse set of healthcare curricula today, but interesting courses can be found on all three. Below are some of the sample courses available:

     

    Coursera

    Interprofessional Healthcare Informatics

    We will explore perspectives of clinicians like dentists, physical therapists, nurses, and physicians in all sorts of practice settings worldwide. Emerging technologies, telehealth, gaming, simulations, and eScience are just some of the topics that we will consider.

     

    Big Data Analytics for Healthcare

    We introduce the characteristics and related analytic challenges of dealing with clinical data from electronic health records. Many of those insights come from the medical informatics community and the data mining/machine learning community. There are three thrusts in this course: application, algorithm and system.

     

    edX

    Entrepreneurship and Healthcare in Emerging Economies

    Explore how entrepreneurship and innovation tackle complex health problems in emerging economies.

     

    Practical Improvement Science in Health Care: A Roadmap for Getting Results

    This course will provide learners with the valuable skills and simple, well-tested tools they need to translate promising innovations or evidence into practice. A group of expert faculty will explore a scientific approach to improvement — a practical, rigorous methodology that includes a theory of change, measurable aims, and iterative, incremental small tests of change to determine if improvement concepts can be implemented effectively in practice.

     

    FutureLearn

    Inside Cancer: How Genes Influence Cancer Development

    In this free online course, you’ll learn about the fundamental biological concepts that inform our current understanding of cancer development, the molecular genetics behind it and its spread within the body.

     

    Bioprinting: 3D Printing Body Parts

    This free online course tells the story of this revolution, introducing you to commonly used biomaterials, including metals, ceramics and polymers, and how bioprinting techniques, such as selective laser melting, hot-melt extrusion and inkjet printing, work. Through case studies – ranging from hip implants to facial transplants to lab-grown organs – we’ll answer questions such as: What is 3D printing and how did it come about? Is it really possible to print structures that incorporate both living and artificial components? How long before we can print whole body organs for transplants? What is possible right now, and what will be possible in 20 and 50 years’ time?

    So whatever your reason, take some time to participate in a MOOC. It’s a fantastic way to stimulate new ideas and connect with like-minded individuals around the world.

     

    What questions do you have?


    Designing Healthcare IoT Systems

    The “Internet of Things” (IoT) has exciting near-term prospects in healthcare.  But what does that mean, and how can we most efficiently realize its potential?

     

    Healthcare IoT can take many forms. Here, we’re referring to sensors deployed on or inside a human body that send their data readings to the cloud, which then communicates processed data to clinicians for action.

     

    It sounds straightforward, especially if you’re a technologist, because most of the words in the previous sentence are technology words: “sensor,” “data,” “cloud,” “communicate,” and “process.”

     

    But notice that other word: “action.”  It’s the last word because it’s the system’s entire reason for being.  If you’re designing your IoT system and you don’t have a clear idea of what the actions are, how well they work, and, crucially, how the data are tied to the actions, then pause.

     

    What’s Being Tried?

     

    Let’s take an example: the recently published BEAT-HF study of heart failure patients.  All patients got their usual care, but half were randomly selected to additionally get coaching telephone calls plus an IoT solution that acquired daily blood pressure, weight, and oxygen saturation – exactly the parameters cardiologists follow in their heart failure patients.

     

    Unfortunately, the trial showed no benefit of the IoT solution.  Compared to the control group, the IoT patients died just as often, and they came into the hospital just as often.  This is not the first trial to show such failures, and it is fortunate that BEAT-HF did not harm the subjects by wasting physician time and distracting them from interventions that could actually benefit patients.

     

    A Better Mouse-Trap

     

    But now let’s look at a different system, also aimed at heart failure patients.  Here, a small Bluetooth-enabled pressure sensor is placed into the pulmonary artery via catheter.  (Pressure in the pulmonary artery is a key indicator of heart failure.)  Once a day the patient lies quietly in bed near a Bluetooth receiver, and the sensor’s measurements of pulmonary artery pressure are sent to the cloud, and then to the cardiologist’s office.

     

    In a randomized study of 550 patients, the patients who received the pressure sensor had their medications changed by the cardiologist 250% more often than the control group.  That is not a typo: 250%, a remarkable change in the “action” step. But did all that extra “action” help? Yes!  Patients with the pressure system experienced 43% fewer deaths and 57% fewer heart failure hospital admissions.  The word “spectacular” underestimates this accomplishment, especially given that, among fee-for-service Medicare enrollees, heart failure is responsible for 39% of all deaths and 42% of all hospital admissions.

     

    Wrap-up

     

    If you are designing an IoT system for healthcare, what lessons can you draw?

     

    • (1) Sensor choice matters.  A lot. Try to obtain data from the core of the disease process, not peripheral or indirect indicators.
    • (2) Merely increasing the data collection frequency, as BEAT-HF tried, may not be beneficial. “Big data” is not a panacea.  Data quantity may not make up for only marginal improvements in data quality.
    • (3) Patient choice matters.  BEAT-HF failed in its general population of heart failure patients, but might have succeeded with certain subgroups of patients.  For example, patients having both heart failure and depression might disproportionally benefit from the Hawthorne effect (increased attention) that telemonitoring can provide.
    • (4) Test your system with a randomized trial.  It is increasingly clear that other study designs are unreliable when evaluating tele-health systems.
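    To make the tie between data and action concrete, here is a minimal, hypothetical sketch of the daily decision step for a pressure-sensor system like the one described above. The threshold value and function names are illustrative assumptions, not the actual device’s protocol or a clinical guideline:

```python
# Hypothetical daily reading -> action step for a heart-failure sensor.
# The 25 mmHg mean-pressure threshold is an illustrative assumption.

ALERT_THRESHOLD_MMHG = 25.0


def process_daily_reading(mean_pa_pressure_mmhg: float) -> str:
    """Tie the sensor reading to a clinical action, the system's
    entire reason for being."""
    if mean_pa_pressure_mmhg >= ALERT_THRESHOLD_MMHG:
        # Elevated pressure: flag the cardiologist to review medications.
        return "notify_cardiologist"
    # Normal reading: store it for trend analysis, no intervention needed.
    return "record_only"


print(process_daily_reading(31.0))  # notify_cardiologist
print(process_daily_reading(18.0))  # record_only
```

    The sketch is trivial on purpose: the engineering around it (sensor, Bluetooth, cloud) is where the technology lives, but this one branch is where the clinical value is created or lost.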

     

    Although technology terms may dominate the definition of a healthcare IoT system, the single clinical word, “action,” dominates its success.
