
RECENT BLOG POSTS

Better Patient Care Starts With Better Technology

Home healthcare practitioners need efficient, reliable access to patient information no matter where they go, so they need hardware solutions that meet their unique needs. Accessing critical patient information, managing patient files, multitasking seamlessly, and locating a patient’s residence are daily tasks for mobile healthcare professionals. Mobile practitioners don’t have access to the same resources they would have in a hospital, so the tools they use are that much more critical to accomplishing their workload. Fortunately, advances in mobile computing have created opportunities to bridge that gap.

 

An Evolved Tablet For Healthcare Providers


As tablets have evolved, they’ve become viable replacements for clunky laptops. Innovation in the mobile device industry has transformed these devices from media consumption platforms and calendar assistants into robust workhorses that run full-fledged operating systems. However, when it comes to meeting the needs of home healthcare providers, not all tablets are created equal.

                    

A recent Prowess Consulting comparison looked at two popular devices with regard to tasks commonly performed by home healthcare workers. The study compared an Apple® iPad Air™ and a Microsoft® Surface™ Pro 3 to determine which device offers a better experience for home healthcare providers and, ultimately, their patients.

 

Multitasking, Done Right

 

One of the biggest advantages of the Surface™ Pro 3 is its ability to let users multitask. For example, a healthcare worker can simultaneously load and display test results, charts, and prescription history via the device’s split-screen capabilities. A user trying to perform the same tasks on the iPad would run into the device’s limitations; there are no split-screen multitasking options on the iPad Air™.

 

The Surface™ Pro 3’s powerful multitasking, combined with the ability to natively run Microsoft Office, lets home healthcare providers spend more time on patient care and less time on administrative tasks. Better user experience, workflow efficiency, file access speed, and split-screen multitasking all point to the Microsoft® Surface™ Pro 3 as the better platform for home healthcare providers.

 

For a full rundown of the Surface™ Pro 3’s benefits to home healthcare workers, click here.

Read more >

Talk Innovation and Pathfinding with Intel at SC14

Karl Solchenbach is the Director of IPAG Europe, which includes Intel’s Exascale Labs in EMEA.

 

This year, the annual HPC and supercomputing conference and exhibition returns to New Orleans after a four-year absence. From Nov. 16-21, SC14 will host more than 10,000 participants who will exchange the newest results in HPC research, meet their worldwide peers, and learn about new and innovative HPC products. For HPC vendors, SC14 is the biggest forum for presenting new HPC hardware, software, and innovations, and a unique opportunity to meet their global customers.

 

As usual, Intel will have a large booth with many activities highlighting the pace of discovery and innovation. These include compelling demos from end users showcasing topics like climate and environmental modeling and airflow simulations, informal Collaboration Hub discussions with Intel and industry experts, and short theater presentations on a variety of topics surrounding code modernization. A schedule of Intel activities can be found here.

 

Intel’s Innovation, Architecture and Pathfinding Group (IPAG), led by Curt Aubley (VP and CTO of the Data Center Group and GM of IPAG), will have a strong presence at SC14. This group is looking into the future of HPC and exascale computing, with a focus on low-power processors and interconnects, innovative software concepts, and various technology projects with the U.S. Government and in Europe. Come and meet IPAG engineers and architects to discuss recent developments:

  • IPAG will be holding a BOF session on PGAS APIs. While the PGAS model has been around for some time, widespread adoption of PGAS by HPC developers has been limited. Advances in PGAS APIs promise to significantly increase PGAS use while avoiding the effort and risk involved in adopting a new language. This BOF (Wed 5:30-7:00pm) gives a concise update on progress on PGAS communication APIs and presents recent experiences in porting applications to these interfaces.
  • One of IPAG’s collaborations in Europe, with CERN, concerns “Data Intensive HPC,” which is relevant in scenarios like CERN’s Large Hadron Collider (LHC) or the Square Kilometer Array (SKA). Niko Neufeld from CERN will present details at the Intel theater (Wed at 5:45). In addition, we will host a “Community Hub” discussion at the Intel booth (Wed 10 a.m.-12 p.m.) with Happy Sithole, one of the thought leaders of the SKA project. These are informal discussions, meant to generate interest and thought exchange.
  • Another example of IPAG’s engagements in Europe is the Dynamical Exascale Entry Platform (DEEP) project, funded by the EU 7th framework programme (www.deep-project.eu). The goal is to develop a novel, Exascale-enabling supercomputing platform. At SC14, DEEP will present its results at the joint booth of the European Exascale projects (booth 1039). Also at booth 1039, project EXA2CT (EXascale Algorithms and Advanced Computational Techniques) will give a status update on modular open source proto-applications, with Intel IPAG as a key partner.
  • Shekhar Borkar (Intel Fellow and Director of Extreme-scale Technologies) will sit on the Future of Memory Technology for Exascale and Beyond II panel on Wednesday at 3:30 in room 383-84-85. The panel will discuss how memory technology needs to evolve to keep pace with compute technology in the coming exascale era.
  • The IPAG team is also participating in a BOF session on Thursday at 12:15 in room 294 on a future runtime standard for HPC exascale (ExaFLOPS) machines. This is the Open Community Runtime (OCR) work, being developed as a new industry standard with support from the US Dept. of Energy.

 

Stop by booth 1315 to engage with the IPAG (and the larger Intel HPC team) on any of these topics. We hope to see you in New Orleans!

Read more >

Super Compute is rising, host bus adapters (HBAs) will start fading

The world of storage is tiered, and it will become more distinctly tiered in the years ahead as the ability to manage hot data moves onto the PCIe bus and away from the sub-optimal SAS and SATA buses designed for traditional platter-based disk storage. Using the right bus for your hot tier is very important. Up until 2014, most implementations of SSD storage have been on SAS and SATA buses, which are not designed for fast non-volatile memory (NVM). What’s been required is more standardization around the processor’s actual host bus, PCIe. A generational shift towards PCIe from Intel is now evolving.



Intel is evolving the world of PCIe and its extensibility across the necessary layers, so that PCIe can truly be a more appropriate storage bus for going wide with more devices, blending network, storage, co-processors, and graphics devices all via this host bus. The classic storage need for adaptation from SAS or SATA back up towards PCIe and the processor will slowly fade as server generations evolve in the years ahead. We’ll see Intel-backed standards, platform additions, and PCIe bridges and switches, which will start the unstoppable evolution of putting storage closer to the processor with much less latency.

 

Tomorrow’s super and parallel computing can only be made a reality with denser, more efficient compute power. NVM storage will play its part by being on the CPU’s host bus. This storage will be more power efficient and parallel. The future has started with Intel SSDs for PCIe, so come check us out at the Intel booth #1315 at SC14 and talk to Andrey Kudryavtsev, John Ryan, or myself. We’ll be there live to show you demos and samples, and to explain the system shifts as well as the benefits of the new storage protocol standard, NVMe.

Read more >

SC14: The Analysis Challenge of the $1,000 Genome

As SC14 approaches, we have invited industry experts to share their views on high performance computing and life sciences. Below is a guest post from Mikael Flensborg, Director, Global Partner Relations at CLC bio, a Qiagen Company. During SC14, Mikael will be sharing his thoughts on genomic and cancer research in the Intel booth (#1315). He is scheduled in the Intel Community Hub on Tuesday, Nov. 18, at 3 p.m. and Wednesday, Nov. 19, at 3 p.m., plus in the Intel Theater on Tuesday at 2:30 p.m.

 

Eight months have now passed since Illumina announced the long-expected arrival of the $1,000 genome with the launch of the HiSeq X Ten sequencing instrument, which has been heralded as ushering in a new era in high-throughput sequencing focused on a new wave of population-level genomic studies.

 

In order to keep costs down to the “magic” $1,000 level, a full HiSeq X Ten installation must plow through some 18,000 full human genomes per year, which means completing a full run every 32 minutes. With such a high volume in focus, the next very important question arises:

 

What does it take to keep up with such a high throughput on the data analysis side?

 

According to Illumina’s “HiSeq X Ten Lab Setup and Site Prep Guide (15050093 E)”, the requirements for data analysis are specified as a compute cluster with 134 compute nodes (16 CPU cores @ 2.0 GHz, 128 GB of memory, 6 x 1 terabyte (TB) hard drives), based on an analysis pipeline consisting of the tools BWA+GATK.

 

At QIAGEN Bioinformatics we decided to take on the challenge of benchmarking this, based on a workflow of tools (Trim, QC for sequencing reads, Read Mapping to Reference, Indels and Structural Variants, Local Re-alignment, Low Frequency Variant Detection, QC for Read Mapping) on CLC Genomics Server (http://www.clcbio.com/products/clc-genomics-server/), running on a compute cluster with the Intel® Enterprise Edition for Lustre* file system, InfiniBand, Intel® Xeon® processor E5-2697 v3 @ 2.60GHz (14 CPU cores), 64GB of memory, and Intel® SSD DC S3500 Series 800GB drives.

 

We based our tests on a publicly available HiSeq X Ten dataset and concluded that, with these specifications, we can follow the pace of the instrument with a compute cluster of only 61 compute nodes.
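
A back-of-envelope sketch, using only the numbers quoted above, shows where the per-genome time budget comes from and how large the node-count saving is:

```python
# Back-of-envelope arithmetic from the figures quoted in this post.
GENOMES_PER_YEAR = 18_000            # throughput of a full HiSeq X Ten installation
MINUTES_PER_YEAR = 365 * 24 * 60

minutes_per_genome = MINUTES_PER_YEAR / GENOMES_PER_YEAR
print(f"Time budget per genome: ~{minutes_per_genome:.0f} minutes")
# ~29 minutes, in the same ballpark as the ~32-minute figure quoted above
# (the small gap presumably reflects rounding or instrument-uptime assumptions).

ILLUMINA_NODES, BENCHMARK_NODES = 134, 61    # node counts from the post
reduction = 1 - BENCHMARK_NODES / ILLUMINA_NODES
print(f"Compute nodes reduced by ~{reduction:.0%}")   # roughly 54% fewer nodes
```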

 

Given our much lower compute node needs, these results can have a significant positive impact on the total cost of ownership of the compute infrastructure for a HiSeq X Ten customer, which includes hardware, cooling, space, power, and systems maintenance to name a few variable costs.

 

What questions do you have?

Read more >

Capital Summit – Bringing Together the Forces of Innovation

I just spent the past week at the Intel Capital Global Summit. It was an excellent event where companies interested in innovation, venture capitalists, and startups met to network and discuss new trends. Overall, this experience served as proof that innovation is still alive and well around the world.

 

If you have seen any of my past blogs on the topic of innovation, you will know that I believe there are three pillars necessary for innovation:

 

  1. Commitment: It is important that innovation is championed through executive support and ultimately with an investment of funding and resources.
  2. Clarity: An understanding of which specific problems need to be solved and how to fail fast to eventually get to the solution is vital for innovation.
  3. Culture: The organization needs to support failure. It is through trial and error, along with the lessons eventually learned from failure, that innovation is encouraged.

 

It was exciting to see all three demonstrated very clearly at the Intel summit.


Innovation Starts with Executive Understanding…

 

Through a series of organized meet and greet sessions, I had the opportunity to talk with many companies at the event. It was incredible to see the level of clarity demonstrated by the CEOs and executives of some of these companies. Plans of development and go-to-market strategies were well defined and clear. Additionally, these company leaders displayed an exceptional understanding of what problems they’re working on and the details on how they’re solving them.

 

But in every one of these cases, there was a common belief that the real innovation begins once the customer gets a hold of new technology. This is the point at which true understanding and the collision of ideas can occur. The specific problems are discovered as customers bring additional information to the discussion that can help companies home in on legitimately scalable solutions.

 

…And a Company Culture That Embraces Strategic Change

 

Throughout the event, companies also met with each other to discuss how technology can be used to enhance solutions and better address some of the real problems faced by customers. It was apparent from the discussions that all of the CEOs were passionate about solving customer problems with the technologies that they are using.

 

This concept of ideas coming together to enhance and evolve a solution is very well outlined in Steven Johnson’s video on the “slow hunch.” Rare is the occasion when someone conceives a brilliant idea in the shower (think Doc Brown in “Back to the Future”). More common is the process of a great idea starting from a seed, growing through a wide range of interactions, and eventually developing into something that is key to individual or company success.

 

Interested in innovation and the world of venture capital? Consider the Intel Capital Global Summit for next year. It can prove to be a significant gateway to network with these innovative companies. See how they can help you and how you can help them.

 

See you there,


Ed

 

Follow me on Twitter at @EdLGoldman and use #ITCenter to continue the conversation.

Read more >

SC14: When to Launch an Internal HPC Cluster

As SC14 approaches, we have invited industry experts to share their views on high performance computing and life sciences. Below is a guest post from Eldon M. Walker, Ph.D., Director, Research Computing at Cleveland Clinic’s Lerner Research Institute. During SC14, Eldon will be sharing his thoughts on implementing a high performance computing cluster at the Intel booth (#1315) on Tuesday, Nov. 18, at 10:15 a.m. in the Intel Theater.


When data analyses grind to a halt due to insufficient processing capacity, scientists cannot be competitive. When we hit that wall at the Cleveland Clinic Lerner Research Institute, my team began considering the components of a solution, the cornerstone of which was a high performance computing (HPC) deployment.

 

In the past 20 years, the Cleveland Clinic Lerner Research Institute has progressed from a model of wet lab biomedical research that produced modest amounts of data to a scientific data acquisition and analysis environment that puts profound demands on information technology resources. This manifests as the need for two infrastructure components designed specifically to serve biomedical researchers operating on large amounts of unstructured data:

 

  1. A storage architecture capable of holding the data in a robust way
  2. Sufficient processing horsepower to enable the data analyses required by investigators

 

Deployment of these resources assumes the availability of:

 

  1. A data center capable of housing power- and cooling-hungry hardware
  2. Network resources capable of moving large amounts of data quickly

 

These components were available at the Cleveland Clinic in the form of a modern, Tier 3 data center and ubiquitous 10 Gb/sec and 1 Gb/sec network service.

 

The storage problem was brought under control by way of a 1.2 petabyte grid storage system in the data center that replicated to a second 1.2 petabyte system in the Lerner Research Institute server room facility. The ability to store and protect the data was the required first step in maintaining the fundamental capital (data) of our research enterprise.

 

It was equally clear to us that the type of analyses required to turn the data into scientific results had overrun the capacity of even high end desktop workstations and single unit servers of up to four processors. Analyses simply could not be run or would run too slowly to be practical. We had an immediate unmet need in several data processing scenarios:

 

  1. DNA Sequence analysis
    1. Whole genome sequence
      1. DNA methylation
    2. ChIP-seq data
      1. Protein – DNA interactions
    3. RNA-seq data
      1. Alternative RNA processing studies
  2. Finite Element Analysis
    1. Biomedical engineering modeling of the knee, ankle and shoulder
  3. Natural Language Processing
    1. Analysis of free text electronic health record notes

 

There was absolutely no question that an HPC cluster was the proper way to provide the necessary horsepower that would allow our investigators to be competitive in producing publishable, actionable scientific results. While a few processing needs could be met using offsite systems where we had collaborative arrangements, an internal resource was appropriate for several reasons:

 

  1. Some data analyses operated on huge datasets that were impractical to transport between locations.
  2. Some data must stay inside the security perimeter.
  3. Development of techniques and pipelines would depend on the help of outside systems administrators and change control processes that we found cumbersome; the sheer flexibility of an internal resource built with responsive industry partners was very compelling based on considerable experience attempting to leverage outside resources.
  4. Given that we had the data center, network and system administration resources, and given the modest price-point, commodity nature of much of the HPC hardware (as revealed by our due diligence process), the economics of obtaining an HPC cluster were practical.

 

Given the realities we faced and after a period of consultation with vendors, we embarked on a system design in collaboration with Dell and Intel. The definitive proof of concept derived from the initial roll out of our HPC solution is that we can run analyses that were impractical or impossible previously.

 

What questions do you have? Are you at the point of considering an internal HPC cluster?

Read more >

SC14: The Bleeding Edge of Medicine

As SC14 approaches, we have invited industry experts to share their views on high performance computing and life sciences. Below is a guest post from Karl D’Souza, senior user experience specialist at Dassault Systèmes Simulia Corp. Karl will be speaking about the Living Heart Project noted below during SC14 at the Intel booth (#1315) on Wednesday, Nov. 19, at 12:15 p.m. in the Intel Theater and 1 p.m. in the Intel Community Hub.


Computer Aided Engineering (CAE) has become pervasive in the design and manufacture of everything from jumbo jets to razor blades, transforming the product development process to produce more efficient, cost-effective, safe, and easy-to-use products. A central component of CAE is the ability to realistically simulate the physical behavior of a product in real-world scenarios, which greatly facilitates understanding and innovation.

 

Application of this advanced technology to healthcare has profound implications for society, promising to transform the practice of medicine from observation driven to understanding driven. However, lack of definitive models, processes and standards has limited its application, and development has remained fragmented in research organizations around the world.

 

In January of 2014, Dassault Systèmes took the first step to change this and launched the “Living Heart Project” as a translational initiative to partner with cardiologists, researchers, and device manufacturers to develop a definitive, realistic simulation of the human heart. Through this accelerated approach, the first commercial model-centric, application-agnostic, multiphysics whole-heart simulation has been produced.

 

Since cardiovascular disease is the number one cause of morbidity and mortality across the globe, Dassault Systèmes saw the Living Heart Project as the best way to address the problem. Although there is a plethora of medical devices, drugs, and interventions, physicians face the problem of determining which device, drug, or intervention to use on which patient. Oftentimes, invasive procedures are needed to truly understand what is going on inside a patient.

 

CAE and the Living Heart Project will enable cardiologists to take an image (MRI, CT, etc.) of a patient’s heart and reconstruct it as a 3D model, thereby creating a much more personalized form of healthcare. The doctor can see exactly what is happening in the patient’s heart and make a more informed decision about how to treat that patient most effectively.

 

If you will be at SC14 next week, I invite you to join me when I present an overview of the project, the model, results, and implications for personalized healthcare. Come by the Intel booth (1315) on Wednesday, Nov. 19, for a presentation at 12:15 p.m. in the Intel Theater immediately followed by a Community Hub discussion at 1 p.m.

 

What questions do you have about computer aided engineering?

Read more >

Meet Me @ “The Hub” – Discover, Collaborate, and Accelerate at SC14

“The Hub” is a social invention created by Intel for you to socialize and collaborate on ideas that can help drive your discoveries faster. It is located in the Intel booth (1315) at SC14.

Parallelization and vectorization are just not that easy. It is even harder if you try to do it alone. At “The Hub,” you will have the opportunity to listen, learn, and share what you and your peers have experienced. The goal is to help you create, improve, or expand your social network of peers engaged in similar optimization and algorithm development. Intel will be providing discussion leaders to get the conversation started on various topics, including OpenMP, MKL, HPC, Lustre, and fabrics. We will also be holding discussions on the intersection of HPC and specific vertical segments (life sciences, oil and gas, etc.), as well as special events in the collaboration hub, including a book signing with James Reinders and Jim Jeffers and a discussion on women in science and technology.

If you’re heading to the show, stop by and “Meet Us @ The Hub” in booth 1315 for a challenging and intellectual opportunity to talk with your peers about parallelization, vectorization and optimization.

To see the full list of Hub activities, times and topics, check out our schedule.

Read more >

Bringing the Yin and Yang to Supercomputing

To complete the representation of teams from across the globe, Professor An Hong of the University of Science and Technology of China (USTC) has put together a team composed of master’s and PhD students and professors from four of China’s prestigious universities. We caught up with her in the midst of preparing for HPC China14, the PAC14 Parallel challenges, and the Student Cluster Competition at SC14.

Q1: The growth of the supercomputing industry in China is certainly obvious from the Tianhe-2 and Tianhe-1 supercomputers, which are currently ranked #1 and #14 on the TOP500 supercomputer list (http://www.top500.org/list/2014/06/). What impact has this had on computer science at USTC?
A1: With the growth of the supercomputing industry in China, computer science education and research, not only at USTC but also at other universities in China, can draw on abundant supercomputing resources. That is not to imply that HPC education and research have reached the top level; ranking #1 on the TOP500 supercomputer list may say more about money than about technological progress. So, as computer science professors in a developing China, we should help students understand the value of pursuing advanced computer science education.

 

Q2: What is the meaning or significance of the team name you have chosen (“Taiji”)?
A2: In Chinese philosophy, yin & yang, which are often shortened to “yin-yang” or “yin yang”, are concepts used to describe how apparently opposite or contrary forces are actually complementary, interconnected and interdependent in the natural world, and how they give rise to each other as they interrelate to one another.
Taiji (☯) comes about through the balance of yin and yang. Their complementary forces interact to form a dynamic system, much as the transformation and combination of 0 and 1 form a computer system.

 

Taiji’s symbol is composed of yin (black) and yang (white): black (yin) can denote “0” and white (yang) can denote “1”, and then all things in the world can be derived from “0” and “1”. Taiji’s philosophy, which originated in ancient China about seven thousand years ago, may inspire us in how to build a balanced computer system, from bits to gates and the higher levels beyond.

 

Q3: What are the names and titles of the other team members who will be participating in the PUCC?
A3:

 

Name | Title | Organization | Activity participation
AN Hong | Professor | University of Science and Technology of China | Team Captain, the code optimization challenge
Liang Weihao | Master Student | University of Science and Technology of China | The code optimization challenge
Chen Junshi | PhD Student | University of Science and Technology of China | The code optimization challenge
Li Feng | PhD Student | University of Science and Technology of China | The code optimization challenge
Shi Xuanhua | Professor | Huazhong University of Science and Technology | The trivia challenge
Jin Hai | Professor | Huazhong University of Science and Technology | The trivia challenge
Lin Xinhua | Professor | Shanghai Jiao Tong University | The trivia challenge
Liang Yun | Professor | Peking University | The trivia challenge

 

Q4: Why is participating in the PUCC important to USTC?
A4: Most of the team’s members come from USTC. USTC’s mission has been to “focus on frontier areas of science and technology and educate top leaders in science and technology for China and the world”. Central to its strategy has been the combination of education and research, as well as an emphasis on quality rather than quantity. Led by the most renowned Chinese scientists of the time, USTC set up a series of programs creatively encompassing frontier research and development of new technology.

 

The PUCC is a great and interesting activity, not only giving us the opportunity to showcase Intel’s technology innovation but also to draw attention to Chinese children’s and youth science endeavors. So, we appreciate the value of the PUCC to our professors and students.

 

Q5: The Supercomputing conference is using the theme “HPC Matters”. Can you tell me why you think HPC matters?
A5: All the HPCers come together to solve some of the critical problems that matter to everyone in the world.

 

Q6: How will your team prepare for the PUCC?
A6: We have just participated in the Parallel Application Challenge 2014 (PAC2014), held at HPC China14 on Nov. 6-8 in Guangzhou, China, and organized by the China Computer Federation Technical Committee on HPC together with Intel. This competition required teams to optimize a parallel application provided by the organizers on the Intel Xeon and Xeon Phi computing platforms. Through the competition, we gained a deep understanding of Intel’s innovation in manycore and multicore technology.

 

For more on the Intel Parallel Universe Computing Challenge, visit the home page.

Read more >

Don’t Miss HPC Luminary, Jack Dongarra, at Intel’s SC14 Booth in New Orleans

If you plan to attend SC14 in New Orleans, you’re in for a treat.

 

Jack Dongarra, one of the HPC community’s iconic speakers, will be holding forth at the Intel booth (#1315), Monday, November 17 at 7:10 pm, on one of his current favorite topics – MAGMA (Matrix Algebra on GPU and Multicore Architectures). His talk is titled, “The MAGMA Project: Numerical Linear Algebra for Accelerators.”

 

To say Jack has the chops to speak on the subject is a major understatement. In addition to being an enthusiastic, informative and entertaining speaker, Jack is a Distinguished Professor of Computer Science at the University of Tennessee (UTK) as well as the director of UTK’s Innovative Computing Laboratory (ICL). And that’s just for openers.

 

You can check out his extensive affiliations and contributions to the design of open source software packages and systems on his LinkedIn page. He has worked on everything from LINPACK, BLAS and MPI to the latest projects that he and his team are developing at ICL, which include PaRSEC, PLASMA, and, of course, MAGMA.

 

They all fall within his rather broad specialty area which includes numerical algorithms in linear algebra, parallel computing, the use of advanced computer architecture, programming methodology, and tools for parallel computers.

 

Jack was part of the team that developed LAPACK and ScaLAPACK. This same team is responsible for designing and implementing the collection of next generation linear algebra libraries that make up MAGMA.

Designed for heterogeneous coprocessor and GPU-based architectures, MAGMA re-implements the functions of LAPACK and the BLAS optimized for the hybrid platform. This allows computational scientists to effortlessly port any software components that rely on linear algebra.
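
For readers less familiar with these libraries, the kind of routine MAGMA re-implements is a dense LAPACK factorization such as dgetrf (LU with partial pivoting). The sketch below calls the standard LAPACK routine through SciPy purely to illustrate that interface; it is not MAGMA’s own API, which provides hybrid CPU/coprocessor equivalents of calls like this.

```python
# Illustration only: the stock LAPACK LU factorization (dgetrf) and solve (dgetrs)
# that hybrid libraries such as MAGMA re-implement for accelerators and coprocessors.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 1000
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

lu, piv = lu_factor(A)          # wraps LAPACK dgetrf: factors A = P*L*U
x = lu_solve((lu, piv), b)      # wraps LAPACK dgetrs: solves A x = b from the factors

print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```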

 

One of the main payoffs of using MAGMA is that it enables applications that fully exploit the power of heterogeneous systems composed of multicore and many-core CPUs and coprocessors. MAGMA allows you to leverage today’s advanced computational capabilities to realize the fastest possible time to an accurate solution within given energy constraints.


A New Intel Parallel Computing Center

In September of this year, UTK’s Innovative Computing Laboratory (ICL) became the newest Intel Parallel Computing Center (IPCC). The objective of the ICL IPCC is the development and optimization of numerical linear algebra libraries and technologies for applications, while tackling current challenges in heterogeneous Intel Xeon Phi coprocessor-based high performance computing.

 

In collaboration with Intel’s MKL team, the IPCC at ICL will modernize the popular LAPACK and ScaLAPACK libraries to run efficiently on current and future manycore architectures, and will disseminate the developments through the open source MAGMA MIC library.

 

Beating the Bottlenecks

This is good news for the members of the HPC community who will shortly be gathering in force in New Orleans. As Dongarra will undoubtedly point out during his presentation, by combining the strengths of different architectures, MAGMA overcomes bottlenecks associated with just multicore or just GPUs, to significantly outperform corresponding packages for any of these homogeneous components taken separately.

 

MAGMA’s one-sided factorizations outperform state-of-the-art CPU libraries on high-end multi-socket, multicore nodes – for example, using up to 48 modern cores. The benefits for two-sided factorizations (the bases for eigenvalue problem and SVD solvers) are even greater: performance can exceed 10X that of systems with 48 modern CPU cores.

 

In September, ICL also announced that MAGMA MIC 1.2 is now available. This release provides implementations of MAGMA’s one-sided (LU, QR, and Cholesky) and two-sided (Hessenberg, bi- and tridiagonal reductions) dense matrix factorizations, as well as linear and eigenproblem solvers for Intel Xeon Phi coprocessors.

 

MAGMA has been developed in C for multicore/manycore systems enhanced with coprocessors. MAGMA uses pthreads within a CPU node and MPI for inter-node communication. Included is the development of a MAGMA port in OpenCL, as well as a pragma-based port suitable for Intel MIC-based architectures.

 

Funding for the development is being provided by DOE and NSF, as well as by industry including Intel, NVIDIA, AMD, MathWorks and Microsoft Research.

 

We can’t second-guess what Jack will have to say, but based on the overview on the MAGMA website, certain themes are likely to surface.

 

For example, it’s evident that the design of future microprocessors and large HPC systems will be heterogeneous and hybrid in nature. They will rely on the integration of many-core CPU technology with special-purpose hardware, accelerators, and coprocessors like the Intel Xeon Phi coprocessor. And we’re not just talking about high-end machines here – everything from laptops to supercomputers and massive clusters will be built from a mix of heterogeneous components.

 

Jack and his crew at the University of Tennessee are playing a major role in making that happen.

 

So stop by the Intel booth (#1315) at 7:10 pm on Monday, November 17, and let Jack Dongarra weave you a tale of how HPC is moving into overdrive with the help of advanced aids like MAGMA.

 

You’ll be hearing HPC history in the making.


For a schedule of all Intel Collaboration Hub and Theater Presentations at SC14, visit this blog.

Read more >

Blueprint: SDN’s Impact on Data Center Power/Cooling Costs

This article originally appeared on Converge Digests Monday, October 13, 2014

 

The growing interest in software-defined networking (SDN) is understandable. Compared to traditional static networking approaches, the inherent flexibility of SDN complements highly virtualized systems and environments that can expand or contract in an efficient, business-oriented way. That said, flexibility is not the main driver behind SDN adoption. Early adopters and industry watchers cite cost as a primary motivation.

 

 

 

SDN certainly offers great potential for simplifying network configuration and management, and raising the overall level of automation. However, SDN will also introduce profound changes to the data center. Reconfiguring networks on the fly introduces fluid conditions within the data center.

 

 

How will the more dynamic infrastructures impact critical data center resources – power and cooling?

 

In the past, 20 to 40 percent of data center resources were typically idle at any given time and yet still drawing power and dissipating heat. As energy costs have risen over the years, data centers have had to pay more attention to this waste and look for ways to keep the utility bills within budget. For example, many data centers have bumped up the thermostat to save on cooling costs.

 

 

These types of easy fixes, however, quickly fall short in the data centers associated with highly dynamic infrastructures. As network configurations change, so do the workloads on the servers, and network optimization must therefore take into consideration the data center impact.

 

 

Modern energy management solutions equip data center managers to solve this problem. They make it possible to see the big picture for energy use in the data center, even in environments that are continuously changing.  Holistic in nature, the best-in-class solutions automate the real-time gathering of power levels throughout the data center as well as server inlet temperatures for fine-grained visibility of both energy and temperature. This information is provided by today’s data center equipment, and the energy management solutions make it possible to turn this information into cost-effective management practices.

 

 

The energy management solutions can also give IT intuitive, graphical views of both real-time and historical data. The visual maps make it easy to identify and understand the thermal zones and energy usage patterns for a row or group of racks within one or multiple data center sites.

 

 

Collecting and analyzing this information makes it possible to evolve very proactive practices for data center and infrastructure management. For example, hot spots can be identified early, before they damage equipment or disrupt services. Logged data can be used to optimize rack configurations and server provisioning in response to network changes or for capacity planning.

 

 

Some of the same solutions that automate monitoring can also introduce control features. Server power capping can be introduced to ensure that any workload shifts do not result in harmful power spikes. Power thresholds make it possible to identify and adjust conditions to extend the life of the infrastructure.

 

 

To control server performance and quality of service, advanced energy management solutions also make it possible to balance power and server processor operating frequencies. The combination of power capping and frequency adjustments gives data center managers the ability to intelligently control and automate the allocation of server assets within a dynamic environment.
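
To make the idea concrete, here is a minimal sketch of the kind of capping policy such tools automate. The telemetry and control calls (read_rack_power, read_inlet_temp, set_power_cap) are hypothetical placeholders for whatever interface a real energy management product exposes, and the thresholds are purely illustrative.

```python
# Conceptual sketch of a power/thermal capping policy; all names and numbers here
# are illustrative assumptions, not any particular vendor's API or recommendations.
POWER_BUDGET_W = 8000        # per-rack power budget
INLET_TEMP_LIMIT_C = 27.0    # inlet temperature that triggers corrective action

def enforce_rack_policy(rack_id, read_rack_power, read_inlet_temp, set_power_cap):
    """Cap a rack that exceeds its power budget or runs too hot."""
    power = read_rack_power(rack_id)
    temp = read_inlet_temp(rack_id)
    if power > POWER_BUDGET_W or temp > INLET_TEMP_LIMIT_C:
        # Pull the rack back under budget; a real tool would spread the cap across
        # servers and could also lower processor frequency states for finer control.
        set_power_cap(rack_id, POWER_BUDGET_W * 0.9)
        return "capped"
    return "ok"

# Stubbed demo with simulated telemetry values.
print(enforce_rack_policy(
    "rack-07",
    read_rack_power=lambda r: 8600,      # watts (simulated reading)
    read_inlet_temp=lambda r: 25.5,      # degrees C (simulated reading)
    set_power_cap=lambda r, cap: None,   # no-op stand-in for the control call
))  # -> "capped"
```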

 

 

Early deployments are validating the potential for SDN, but data center managers should take time to consider the indirect and direct impacts of this or any disruptive technology so that expectations can be set accordingly. SDN is just one trend that puts more pressure on IT to be able to do more with less.

 

 

Management expects to see costs go down; users expect to see 100% uptime for the services they need to do their jobs. More than ever, IT needs the right tools to oversee the resources they are being asked to deploy and configure more rapidly. They need to know the impacts of any change on the resource allocations within the data center.

 

 

IT teams planning for SDN must also consider the increasing regulations and availability restrictions relating to energy in various locations and regions. Some utility companies are already unable to meet the service levels required by some data centers, regardless of price. Over-provisioning can no longer be considered a practical safety net for new deployments.

 

 

Regular evaluations of the energy situation in the data center should be a standard practice for technology planning. Holistic energy management solutions give data center managers many affordable tools for those efforts. Today’s challenge is to accurately assess technology trends before any pilot testing begins, and leverage an energy management solution that can minimize the pain points of any new technology project such as SDN.

Read more >

Bringing Conflict-Free Technology to the Enterprise



In January 2014, Intel accomplished its goal to manufacture microprocessors that are DRC conflict free for tantalum, tin, tungsten, and gold.

 

The journey towards reimagining the supply chain is long and arduous; it’s a large-scale, long-term commitment that demands precise strategy. For us, it was an extensive five-year plan of collecting and analyzing data, building an overarching business goal, educating and empowering supply chain partners, and implementing changes guaranteed to add business value for years to come. But we committed ourselves to these efforts because of their global impact and our sense of responsibility, and the rewards have outweighed the work by leaps and bounds.

 

Cutting Ties with Conflict Minerals

 

The Democratic Republic of Congo (DRC) is the epicenter of one of the most brutal wars of our time; since 1998, 5.4 million lives have been lost to the ongoing conflict, half of them children five years old or younger. The economy of the DRC relies heavily on the mining sector, while the rest of the world relies heavily on the DRC’s diamonds, cobalt ore, and copper. The stark reality is that the war in the Eastern Congo has been fueled by the smuggling of coltan and cassiterite (ores of tantalum and tin, respectively). This means most of the electronic devices we interact with on a daily basis are likely powered by conflict minerals.

 

One of the main reasons most are dissuaded from pursuing an initiative of this scope is that the supply chain represents one of the most decentralized units in the business. Demanding accountability from a complex system is a sizeable endeavor. Intel represents one of the first enterprise tech companies to pursue conflict-free materials, but the movement is starting to gain traction in the greater tech community as customers demand more corporate transparency.

 

Getting the Enterprise Behind Fair Tech

 

For Bas van Abel, CEO of Fairphone, there is already sizeable consumer demand for fair technology, but there remains a distinct need to prove that a market for it exists. Fairphone is a smartphone featuring an open design built with conflict-free minerals. The company also boasts fair wages and labor practices for its supply chain workforce. When van Abel crowdfunded the first prototype, his goal was to pre-sell 5,000 phones; within three weeks, he had sold 10,000. It’s only a matter of time before awareness gains a foothold and the general public starts demanding conflict-free minerals.

 


We chose to bring the conflict-free initiative to our supply chain because funding armed groups in the DRC was no longer an option. Our hope is that other enterprises will follow suit in analyzing their own supply chains. If you want to learn more about how we embraced innovation by examining our own corporate responsibility and redefining how we build our products, you can read the full brief here.

 

To continue the conversation, please follow us at @IntelITCenter or use #ITCenter.

Read more >

SC14: Life Sciences Research Not Just for Workstations Anymore

As SC14 approaches, we have invited industry experts to share their views on high performance computing and life sciences. Below is a guest post from Ari E. Berman, Ph.D., Director of Government Services and Principal Investigator at BioTeam, Inc. Ari will be sharing his thoughts on high performance infrastructure and high speed data transfer during SC14 at the Intel booth (#1315) on Wednesday, Nov. 19, at 2 p.m. in the Intel Community Hub and at 3 p.m. in the Intel Theater.


There is a ton of hype these days about Big Data, both in what the term actually means, and what the implications are for reaching the point of discovery in all that data.

 

The biggest issue right now is the computational infrastructure needed to get to that mythical Big Data discovery place everyone talks about. Personally, I hate the term Big Data. The term “big” is very subjective and in the eye of the beholder. It might mean 3PB (petabytes) of data to one person, or 10GB (gigabytes) to someone else.

 

From my perspective, the thing that everyone is really talking about with Big Data is the ability to take the sum total of data that’s out there for any particular subject, pool it together, and perform a meta-analysis on it to more accurately create a model that can lead to some cool discovery that could change the way we understand some topic. Those meta-analyses are truly difficult and, when you’re talking about petascale data, require serious amounts of computational infrastructure that is tuned and optimized (also known as converged) for your data workflows. Without properly converged infrastructure, most people will spend all of their time just figuring out how to store and process the data, without ever reaching any conclusions.

 

Which brings us to life sciences. Until recently, life sciences and biomedical research could really be done using Excel and simple computational algorithms. Laboratory instrumentation really didn’t create that much data at a time, and it could be managed with simple, desktop-class computers and everyday computational methods. Sure, the occasional group was able to create enough data to require some mathematical modeling or advanced statistical analysis or even some HPC, and molecular simulations have always required a lot of computational power. But in the last decade or so, the pace of advancement of laboratory equipment has left a large swath of biomedical research scientists overwhelmed by the amount of data being produced.

 

The decreased cost and increased speed of laboratory equipment, such as next-generation sequencers (NGS) and high-throughput high-resolution imaging systems, has forced researchers to become very computationally savvy very quickly. It now takes rather sophisticated HPC resources, parallel storage systems, and ultra-high speed networks to process the analytics workflows in life sciences. And, to complicate matters, these newer laboratory techniques are paving the way towards the realization of personalized medicine, which carries the same computational burden combined with the tight and highly subjective federal restrictions surrounding the privacy of personal health information (PHI).  Overcoming these challenges has been difficult, but very innovative organizations have begun to do just that.

 

I thought it might be useful to very briefly discuss the three major trends we see having a positive effect on life sciences research:

 

1. Science DMZs: There is a rather new movement towards the implementation of specialized research-only networks that prioritize fast and efficient data flow over security (while still maintaining security), also known as the Science DMZ model (http://fasterdata.es.net). These implementations are making it easier for scientists to get around tight enterprise networking restrictions without violating the security policies of their organizations, so that scientists can move their data effectively without upsetting their compliance officers.


2. Hybrid Compute/Storage Models: There is a huge push to move towards cloud-based infrastructure, but organizations are realizing that too much persistent cloud infrastructure can be more costly in the long term than local compute. The answer is to implement small local compute infrastructures to handle the really hard problems and the persistent services, hybridized with public cloud infrastructures that are orchestrated to be brought up automatically when needed and torn down when not, all managed by a software layer that sits in front of the backend systems (a minimal sketch of this burst-out logic follows this list). This model looks promising as the most cost-effective and flexible method, balancing local hardware life-cycle issues and support personnel with the dynamic needs of scientists.


3. Commodity HPC/Storage: The biggest trend in life sciences research is the push towards the use of low-cost, commodity, white-box infrastructures for research needs. Life sciences has not (for the most part) reached the sophistication level that requires true capability supercomputing; thus, well-engineered capacity systems built from white-box vendors provide very effective computational and storage platforms for scientists to use in their research. This approach carries a higher support burden for the organization because many of the systems don’t come pre-built or supported, and thus require in-house expertise that can be hard to find and expensive to retain. But the cost balance of the support vs. the lifecycle management is worth it to most organizations.
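
The burst-out logic referenced in the hybrid model above can be sketched in a few lines. The provisioning calls are placeholders for whatever cloud API or scheduler plugin an organization actually uses, and the capacity numbers are illustrative assumptions only.

```python
# Minimal sketch of a local-plus-cloud capacity decision; provision_cloud_nodes()
# and release_cloud_nodes() are hypothetical stand-ins for a real orchestration layer.
LOCAL_NODES = 32         # persistent, locally owned compute nodes (assumed)
JOBS_PER_NODE = 4        # assumed job packing per node

def plan_capacity(queued_jobs, cloud_nodes_up, provision_cloud_nodes, release_cloud_nodes):
    """Burst to the cloud only when local capacity is exhausted; tear down idle cloud nodes."""
    needed_nodes = -(-queued_jobs // JOBS_PER_NODE)         # ceiling division
    deficit = needed_nodes - (LOCAL_NODES + cloud_nodes_up)
    if deficit > 0:
        provision_cloud_nodes(deficit)                       # scale up
    elif deficit < 0 and cloud_nodes_up > 0:
        release_cloud_nodes(min(-deficit, cloud_nodes_up))   # scale down

# Example: 200 queued jobs need 50 nodes; with 32 local and 10 cloud nodes up,
# the sketch asks the orchestration layer for 8 more cloud nodes.
plan_capacity(200, 10,
              provision_cloud_nodes=lambda n: print(f"provision {n} cloud nodes"),
              release_cloud_nodes=lambda n: print(f"release {n} cloud nodes"))
```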

 

Biomedical scientific research is the latest in the string of scientific disciplines that require very creative solutions to their data generation problems. We are at the stage now where most researchers spend a lot of their time just trying to figure out what to do with their data in the first place, rather than getting answers. However, I feel that the field is at an inflection point where discovery will start pouring out as the availability of very powerful commodity systems and reference architectures come to bear on the market. The key for life sciences HPC is the balance between effectiveness and affordability due to a significant lack of funding in the space right now, which is likely to get worse before it gets better. But, scientists are resourceful and persistent; they will usually find a way to discover because they are driven to improve the quality of life for humankind and to make personalized medicine a reality in the 21st century.

 

What questions about HPC do you have?

Read more >

Empowering Field Workers Through Mobility

The unique — often adverse — working conditions facing utility field workers require unique mobility solutions. Not only do workers in these roles spend the majority of their time on the road, but their work often takes them to places where the weather and terrain are less than hospitable. Despite all of the challenges facing this large mobile workforce, new tablets and other mobile devices are increasing productivity and reducing downtime for workers.

 

Field workers need a device that supports them whether they’re on the road or in the office. A recent RCR Wireless guest blogger summed up the needs of utility field workers by comparing them to the “front lines” of an organization:

 


Field workers are at the front lines of customer service … and therefore need to be empowered to better serve customers. They require mobile applications that offer easier access to information that resides in corporate data centers.

 

Previously, this “easy access” to data centers was limited to service center offices and some mobile applications and devices. Now, however, advances in tablet technology enable workers to take a mobile office with them everywhere they go.


Tough Tablets for Tough Jobs


With numerous tablets and mobile PCs on the market, it’s difficult to determine which mobile solution provides the best experience for the unique working conditions of field workers. In order to move through their work, these users need a device that combines durability, connectivity, security, and speed.

 

Principled Technologies recently reviewed an Apple iPad Air, a Samsung Galaxy Tab Pro 12.2, and a Motion Computing R12 to determine which tablet yields the most benefits for utility field workers. After comparing performance among the devices in common scenarios field workers face on a daily basis, one tablet emerged as a clear favorite for this workforce.

 

While the iPad and Galaxy feature thin profiles and sleek frames, the Intel-powered Motion Computing R12 received a MIL-STD-810G impact resistance rating (a U.S. military standard) as well as international accreditation (IP54 rating) for dust and water resistance. The device also hit the mark with its biometric security features and hot-swappable 8-hour battery.

 

Communication between utility workers and dispatching offices is often the key to a successful work day. Among the three tablets, the Motion Computing R12 was the only device able to handle a Skype call and open and edit an Excel document simultaneously. This kind of multi-tasking ability works seamlessly on this tablet because it runs Microsoft Windows 8.1 natively on a fast Intel processor and also boasts 8 GB of RAM (compared to 1 GB in the iPad and 3 GB in the Galaxy).

 

At the end of the day, having the right device can lead to more work orders completed and better working conditions for field workers. Empowering field workers with the right tools can remove many of the technical hurdles they face and lead to increased productivity and fewer inefficiencies.

 

To learn more about the Motion Computing R12, click here.

Read more >

Boosting Big Data Workflows for Big Results

When working with small data, it is relatively easy to manipulate, wrangle, and cope with all of the different steps in the data access, data processing, data mining, and data science workflow. All of the various steps become familiar and reproducible, often manually. These steps (and their sequences) are also relatively simple to adjust and extend. However, as the data collection becomes increasingly massive, distributed, and diverse, while also demanding more real-time response and action, the challenge of extending, modifying, reproducing, documenting, or doing anything new within your data workflow becomes enormous. This is a serious problem, because data-driven workflows are the life and existence of big data professionals everywhere: data scientists, data analysts, and data engineers.


Workflows for Big Data Professionals

Data professionals perform all types of data functions in their workflow processes: archive, discover, access, visualize, mine, manipulate, fuse, integrate, transform, feed models, learn models, validate models, deploy models, etc. It is a dizzying day’s work. We start manually in our workflow development, identifying what needs to happen at each stage of the process, what data are needed, when they are needed, where data needs to be staged, what are the inputs and outputs, and more.  If we are really good, we can improve our efficiency in performing these workflows manually, but not substantially. A better path to success is to employ a workflow platform that is scalable (to larger data), extensible (to more tasks), more efficient (shorter time-to-solution), more effective (better solutions), adaptable (to different user skill levels and to different business requirements), comprehensive (providing a wide scope of functionality), and automated (to break the time barrier of manual workflow activities). The “Big Data Integration” graphic below from http://www.apervi.com/ identifies several of the business needs, data functions, and challenge areas associated with these big data workflow activities.

(Infographic: “Big Data Integration,” from apervi.com)


All-in-one Data Workflow Platform

A workflow platform that performs a few of those data functions for a specific application is nothing new – you can find solutions that deliver workflows for business intelligence reporting, or analytic processing, or real-time monitoring, or exploratory data analysis, or for predictive analytic deployments. However, when you find a unified big data orchestration platform that can do all of those things – that brings all the rivers of data into one confluence (like the confluence of the Allegheny and Monongahela Rivers that merge to form the Ohio River in the eastern United States) – then you have a powerful enterprise-level big data orchestration capability for numerous applications, users, requirements, and data functions.  The good news is that there is a company that offers such a platform: Apervi is that company, and Conflux is that confluence.

 

Apervi is a big data integration development company. From Apervi’s comprehensive collection of product documentation, you can learn about all of the features and benefits of their Conflux product. For example, the system has several components: Designer, Monitor, Dashboard, Explorer, Scheduler, and Connector Pack. We highlight and describe each of these components below, followed by a generic sketch of what this kind of orchestrated workflow looks like in code:

 

  • The Conflux Designer is an intuitive HTML5 user interface for designing, building, and deploying workflows using simple drag-and-drop interactivity. Workflows can be shared with other users across the business.
  • The Conflux Monitor keeps track of job progress, with key statistics available in real time from any device, any browser, anywhere. Drilldown capabilities empower exploratory analysis of any job, enabling rapid response and troubleshooting.
  • The Conflux Dashboard provides rich visibility into KPIs and job stats on a fully customizable screen that includes a variety of user-configurable alert and notification widgets. The extensible dashboard framework can also integrate custom dashboard widgets.
  • The Conflux Explorer puts search, discovery, and navigation powers into the hands of the data scientist, enabling that functionality across multiple data sources simultaneously. A mapping editor allows the user to locate and extract the relevant, valuable, and interesting information nuggets within targeted data streams.
  • The Conflux Scheduler is a flexible, intuitive scheduling and execution tool, which is extensible and can be integrated with third-party products.
  • The Conflux Connector Pack is perhaps the single most important piece of the workflow puzzle: it efficiently integrates and connects data streaming from many disparate, heterogeneous sources. Apervi provides several prebuilt connectors for specific industry segments, such as Telecom, Healthcare, and Electronic Data Interchange (EDI).
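
As promised above, here is a generic sketch of what an orchestrated data workflow of this kind looks like in code. It is not Apervi’s Conflux API (which is not shown in this post); the Step and run_workflow names are invented purely for illustration.

```python
# Hypothetical workflow-orchestration sketch; the classes and function here are
# illustrative only and do not represent any real product's API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Step:
    name: str
    run: Callable[[Dict[str, dict]], dict]   # receives upstream outputs, returns its own
    depends_on: List[str] = field(default_factory=list)

def run_workflow(steps: List[Step]) -> Dict[str, dict]:
    """Execute steps in order (assumes the list is already topologically sorted)."""
    outputs: Dict[str, dict] = {}
    for step in steps:
        upstream = {name: outputs[name] for name in step.depends_on}
        outputs[step.name] = step.run(upstream)
    return outputs

# Example: the ingest -> transform -> load pipeline a designer UI might emit.
pipeline = [
    Step("ingest",    lambda up: {"rows": [1, 2, 3]}),
    Step("transform", lambda up: {"rows": [r * 10 for r in up["ingest"]["rows"]]}, ["ingest"]),
    Step("load",      lambda up: {"loaded": len(up["transform"]["rows"])}, ["transform"]),
]
print(run_workflow(pipeline))   # ... 'load': {'loaded': 3}
```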

    AperviConfluxDiagram.png
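
    The Conflux components above map to a familiar pattern: connectors bring data in, designed workflows transform it, and a scheduler and monitor run and track the jobs. To make that flow concrete, here is a minimal, hypothetical Python sketch (my own illustration, not Apervi's API; the Stage, csv_connector, and run_workflow names are invented for this example) of a connector-transform-load workflow:

# Hypothetical sketch only -- not Apervi's Conflux API. It illustrates the
# general shape of a connector -> transform -> load workflow that a platform
# like Conflux would let you assemble visually, then schedule and monitor.
from dataclasses import dataclass
from typing import Callable, Iterable
import time


@dataclass
class Stage:
    name: str
    run: Callable[[Iterable], Iterable]


def csv_connector(_):
    # Stand-in for a prebuilt connector pulling records from a source system.
    return [{"patient_id": 1, "value": 42}, {"patient_id": 2, "value": 17}]


def transform(records):
    # Stand-in for a mapping/enrichment step defined in a designer UI.
    return [{**r, "flagged": r["value"] > 40} for r in records]


def load(records):
    # Stand-in for writing results to a downstream store or dashboard feed.
    print(f"loaded {len(records)} records")
    return records


def run_workflow(stages):
    """Minimal runner: executes stages in order and reports timing,
    loosely mirroring what a scheduler plus monitor would provide."""
    data = None
    for stage in stages:
        start = time.time()
        data = stage.run(data)
        print(f"[monitor] {stage.name} finished in {time.time() - start:.3f}s")
    return data


if __name__ == "__main__":
    run_workflow([Stage("extract", csv_connector),
                  Stage("transform", transform),
                  Stage("load", load)])

    A platform like Conflux adds what a toy runner cannot: drag-and-drop design, workflows shared across the business, real-time dashboards, and prebuilt connectors to heterogeneous sources.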


    Big Benefits from a Seamless Confluence of Data Workflow Functions

    For organizations trying to cope with big data and manage complex big data workflows, a multi-functional, user-oriented workflow platform like Apervi's Conflux can boost results in several ways. These benefits include:

    • Reduce operational costs
    • Drive faster results, from data discovery to information-based decision-making
    • Accelerate development of data-based products across verticals and business functions
    • Manage integration effectively through monitoring and intelligent insights.

     

    For more information, Apervi provides detailed white papers, datasheets, product documentation, case studies, and infographics on their website at http://www.apervi.com/.

     

     

    Dr. Kirk Borne is a Data Scientist and Professor of Astrophysics and Computational Science in the George Mason University School of Physics, Astronomy, and Computational Sciences. He received his B.S. degree in physics from LSU and his Ph.D. in astronomy from the California Institute of Technology. He has been at Mason since 2003, where he teaches graduate and undergraduate courses in Data Science and advises many doctoral dissertation students in Data Science research projects. He focuses on achieving big discoveries from big data and promotes the use of data-centric experiences with big data in the STEM education pipeline at all levels. He promotes the “Borne Ultimatum” — data literacy for all!

     

    Connect with Kirk on LinkedIn.

    Follow Kirk on Twitter at @KirkDBorne.

    Read more of his blogs at http://rocketdatascience.org/

    Read more >

    SC14: HPC and Big Data in Healthcare and Life Sciences

    What better place to talk life sciences big data than the Big Easy? As temperatures are cooling down this month, things are heating up in New Orleans where Intel is hosting talks on life sciences and HPC next week at SC14. It’s all happening in the Intel Community Hub, Booth #1315, so swing on by and hear about these topics from industry thought leaders:

     

    Think big: delve deeper into the world's biggest bioinformatics platform. Join us for a talk on the CLC bio enterprise platform, and learn how it integrates desktop interfaces with high-performance cluster resources. We'll also discuss hardware and explore the scalability requirements needed to keep pace with the Illumina HiSeq X-10 sequencer platform and with a production cluster environment based on the Intel® Xeon® processor E5-2600 v3. When: Nov. 18, 3-4 p.m.

     

    Special Guests:

    Lasse Lorenzen, Head of Platform & Infrastructure, Qiagen Bioinformatics;

    Shawn Prince, Field Application Scientist, Qiagen Bioinformatics;

    Mikael Flensborg, Director Global Partner Relations, Qiagen Bioinformatics

     

    Find out how HPC is pumping new life into the Living Heart Project. Simulating diseased states and personalizing medical treatments require significant computing power. Join us for the latest updates on the Living Heart Project, and learn how creating realistic multiphysics models of human hearts can lead to groundbreaking approaches to both preventing and treating cardiovascular disease. When: Nov. 19, 1-2 p.m.

     

    Special Guest: Karl D’Souza, Business Development, SIMULIA Asia-Pacific

     

    Get in sync with scientific research data sharing and interoperability. In 1989, the quest for global scientific collaboration helped lead to the birth of the World Wide Web. In this talk, Aspera and BioTeam will discuss where we are today with new advances in global scientific data collaboration. Join them for an open discussion exploring the newest offerings for high-speed data transfer across scientific research environments. When: Nov. 19, 2-3 p.m.

     

    Special Guests:

    Ari E. Berman, PhD, Director of Government Services and Principal Investigator, BioTeam;

    Aaron Gardner, Senior Scientific Consultant, BioTeam;

    Charles Shiflett, Software Engineer, Aspera

     

    Put cancer research into warp speed with new informatics technology. Take a peek under the hood of the world's first comprehensive, user-friendly, and customizable cancer-focused informatics solution. The team from Qiagen Bioinformatics will lead a discussion on CLC Cancer Research Workbench, a new offering for the CLC Bio Cancer Genomics Research Platform. When: Nov. 19, 3-4 p.m.

     

    Special Guests:

    Shawn Prince, Field Application Scientist, Qiagen Bioinformatics;

    Mikael Flensborg, Director Global Partner Relations, Qiagen Bioinformatics

     

    You can see more Intel activities planned for SC14 here.

     

    What are you looking forward to seeing at SC14 next week?

    Read more >

    OpenStack Summit Underscores the Power of Collaboration

    OpenStack Summit Paris is in the books, and the 8,000+ attendees have scattered back home after an excellent week of collaboration.

     

    While I've written previously (on day one and on day two) about enterprise and telco uptake of OpenStack, the conference continues to be a source of broad collaboration across members of the OpenStack community. I was lucky enough to catch up with many experts from the community, including Carmine Rimi from Workday, Kamesh Pemmaraju from Dell, and Krish Raghuram and David Brown from Intel, to discuss what it takes to drive this broad collaboration and what challenges lie ahead as OpenStack matures to address broad enterprise and telco requirements. A common theme from these chats was the need for continued dialogue from the user community on the core capabilities required for broader deployment, and a continued focus on stabilizing code across frequent release cycles.

     

    One thing to look for as OpenStack matures is the relationship between core releases, the value-add features delivered by OpenStack suppliers, and the way both interface with custom enhancements delivered within a customer environment. One thing to be sure of, though, is that OpenStack development is accelerating unabated, and I would expect even more progress to be on display by the time we reach the next Summit in Vancouver, BC in May.

     

    To get more perspective on the show please check out video interviews with Carmine Rimi, Derek Sellin, and Mike Kadera. You can also view select presentations from the event.

    Read more >

    How are Sales Professionals Using Mobile Devices in Interactions with Clinicians?

    For the past three years, we have been tracking the effectiveness of sales professionals using mobile technology as their main means of information delivery to healthcare professionals (HCPs), and in particular to doctors. As previously mentioned, the use of mobile devices has been variable, with many sales professionals using them in the same way they were using paper materials.

     

    We have data showing that where mobile devices are used effectively, doctors rate the sales professionals' performance higher across multiple key performance indicators (KPIs) than when paper alone or no materials are used in support of key messages. Not only does the mobile device enhance the delivery of information; there is also increasing evidence that using mobile technology increases the likelihood of altering HCP behaviors.

     

    Most of the pharmaceutical companies we have tracked still use a combination of paper and mobile devices. We have seen the best and most efficient use of the mobile device when the sales professional is able to use it to open the call and then navigate to the most appropriate information for that particular HCP. We have data on a number of specialist sales teams indicating that in calls lasting less than five minutes no supporting materials, including mobile devices, were used in any interaction with an HCP.

     

    Another advantage of mobile devices comes at the close of a call, when the sales professional can immediately email any supporting documents directly to the HCP. Our extensive research with HCPs shows they expect that ability when mobile devices are used, and when it is offered, a very positive impression is made.

     

    Additionally, the opportunity for the HCP to order and sign for samples at the time of the interaction is, in the eyes of the busy HCP, critical. The positive comments we have received from many HCPs on sales professionals' use of mobile devices indicate that the technology is accepted, that it enhances the experience, and that it leads to changes in behavior.

     

    When asked specifically how the mobile device would be best used as a means of information delivery, we received the following advice and comments (2013 data from over 1,500 HCPs representing 15 specialties):

     

    • The mobile device is best used for short, one-minute presentations, focusing on main points
    • The mobile device should be used to display medical information in a structured format to save time
    • The mobile device is the ideal tool for one-on-one education
    • The mobile device should be used as a visual aid to get a point across or to educate
    • The mobile device should be used to show videos pertinent to a detail, such as mechanism of action of a drug or how to administer a medicine
    • The mobile device should be used to drill down quickly on topics of special interest, such as dosing in renal failure or drug interactions
    • The mobile device should be used for multimedia or interactive presentations

     

    Verbatim Comments

    • “Chance for more information through easier links (as opposed to rummaging through a bag of papers)”
    • “Easier for sales professional to present information, less waste of my time”
    • “It does not leave large volumes of materials behind at our office. Also requires the sales professional to be more to the point with a few slides as opposed to a lengthy paper document.”
    • “Easy to use and navigate the information, easy to sign for samples”
    • “Demonstrations and ease of visualization of material presented”

     

    Unfortunately, there are also negative aspects to the use of mobile devices, from the design of apps to the lack of e-licenses for clinical papers and reprints. Another area where mobile technology seems to fall short is when it comes to reimbursement, patient assistance programs and managed care issues. We will discuss this in more depth in future blogs.

     

    What questions do you have?

    Read more >

    The World is Unprepared for Future Cybersecurity Attacks

    We have yet to experience, understand, and adapt to emerging types of cybersecurity attacks and their resulting impacts.  Organizations place a heavy focus on immediate efforts to prevent, and when necessary respond to, present-day assaults on their environments.  It is a marvelous firefight, where resources and attention are focused on the pressing problems at hand.  But cyber threats are constantly evolving, and while controls and processes are being developed to address today's threats, the world is largely oblivious to emerging types of attacks.  As a result, the public and private sectors are woefully unprepared for future incidents, which will be far more severe than what we see today.  We must expand our vision from today's issues to better prepare for imminent cybersecurity challenges.

     

    Defense in Depth v2.jpg

    Protecting against cyber attacks is an incredibly difficult job.  Threat agents maintain the initiative and decide whom to attack and how.  Defenders must predict, prevent, detect, and respond to an active, resourceful, and intelligent opponent.  Most of the emphasis has been on prevention, detection, and response.  This places the focus on immediate problems and cleanup.  With the overwhelming number of vulnerabilities and barrage of attacks, this seems a reasonable allocation of resources.  Yet it is not sustainable.  With the rapid expansion of attack surfaces, the infusion of resources available to attackers, and the rise in complexity of the electronic ecosystem, attackers have ever greater opportunities to succeed.  Attackers' capabilities are outpacing cyber defenses.  But there is a glimmer of hope, as the industry is starting to recognize the need to also predict how the enemy will maneuver in the future.  EY's 2014 Global Information Security Survey paints a picture in which "anticipating cyber attacks is the only way to be ahead of cyber criminals."

     

    Over the years, a number of categories of cyber attacks have emerged.  Denial of Service (DOS) attacks, for example, have been around since the beginning.  Anyone remember the Ping of Death back in 1997?  DOS attacks have evolved over the years, leveraging many different tactics and resources.  Nowadays, attackers use armies of bots to deliver Distributed Denial-of-Service (DDOS) attacks or to poison network routing services.  Regardless of the approach, the same type of impact is experienced.  Security tactics and tools have also evolved over the years into fairly robust countermeasures.  Organizations willing to invest can largely mitigate the risks of denial-of-service attacks.

     

    But the story does not end there. 

     

    New attack categories have emerged, spurring a race to develop the tools and processes necessary to interdict attacker innovation.  Akin to Dante's Inferno, cybersecurity has a number of ever-progressing tiers of pain and suffering related to modern computing.  Although we witness a stream of attack announcements in the news every day, we have only begun our descent.
    Different types of impacts will emerge, necessitating new approaches and controls.  The evolution of attacks will continue to spiral downward, growing in scope, each level building upon the previous in a compounding way.  In order to prepare, we must first understand the four main archetypes of cyber-based attacks, where we are in the cycle, and the spectrum of problems we will eventually face.

     

    Evolving Categories of Cybersecuirty Attacks 2.jpg

    Evolving Categories of Cybersecurity Attacks

    Level 1 – Denial of Service:

    • TYPE: An Availability (A) type of attack, of services, systems, customer access, operations, etc.
    • POPULARITY: Still the most popular type of attack, waged against web presence and in some cases computing operations infrastructures to bring down the availability of resources, presence, and engagement
    • PURPOSE: Still a popular method for expressing social discord, basic sabotage, and ransom/blackmail schemes
    • IMPACT: Results in inconvenience, operational delays, and perhaps embarrassment
    • HISTORY: The first category of attack, developed as the Internet was formed.  Initially, methods focused on direct web defacement, system corruption, and network interference, and they have subsequently evolved to use legions of robots ('bots') to overload websites with requests.  Blackmail started as 'protection' schemes targeting sites such as online gambling services, which did not want to be pushed offline.  More recently, malware has emerged that encrypts users' files, which are unlocked only after a ransom is paid.  Same result, different trick
    • COMPLEXITY: The required entry-level skill and resources of attackers are low.  Bot herders offer professional tools and services, some with 24×7 customer support, which attackers can purchase or rent.  For crimeware, more skill is needed, but many tools and services are available as well
    • SECURITY: Industry security is competent, as the impacts and methods are familiar.  Tools, processes, services, products, and protections exist that can be leveraged to protect against and recover from the vast majority of these attacks (one such building block is sketched below)
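
    As one small, concrete example of the mature countermeasures mentioned above (my own sketch, not from the original post), a token-bucket rate limiter is a common building block for shedding excess requests at the application edge during a flood:

# Illustrative token-bucket rate limiter -- one small building block among the
# many DoS countermeasures the industry has matured (a sketch, not a product).
import time


class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if a request may proceed, False if it should be dropped."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


if __name__ == "__main__":
    bucket = TokenBucket(rate_per_sec=5, burst=10)
    allowed = sum(bucket.allow() for _ in range(100))
    print(f"{allowed} of 100 burst requests admitted")  # roughly the burst size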

     

    Level 2 – Data Theft and Exposure:

    • TYPE: A Confidentiality (C) type of attack, exposing data and information to unauthorized parties.  Attackers target sources to obtain personal information, access credentials, acquire private or sensitive data, financial data for fraud, or materials to expose and embarrass others
    • POPULARITY: This has recently become the most recognizable attack in the news, growing greatly in the past two years as large corporations and governments reel from headline-grabbing breaches.  More personal attacks, targeting private pictures, have also captured the public's attention.  Governments, businesses, and social sites are typical victims.  Notable incidents include WikiLeaks, Snowden, Target, eBay, Adobe, Home Depot, various celebrity nude-picture harvesting, and JP Morgan Chase
    • PURPOSE: Two primary motivations have emerged, financial gain and social awareness
    • IMPACT: Attacks result in financial loss (or increased risk thereof) and intense social discussion which may have effects on the governmental, social, and political landscapes
    • HISTORY: Confidential data has always been valuable to those who hold it, hence the measures to keep it from the general public.  Targeting information about people, accounts, and activity is age old, but with the advent of scalable technology handling ever more information, data breaches are becoming more commonplace and pose a larger impact
    • COMPLEXITY: Modest technical skill or access is required to breach a network or database or to exfiltrate large amounts of data.  Discreet services are for hire if you know where to inquire.  Much of the stolen user and account data is posted on dark-nets for sale.  Other uses include leveraging the information for more targeted attacks and bringing covert activities to light for social discussion
    • SECURITY: The security industry is about halfway through the maturity cycle of figuring out a good set of defenses to protect confidential data.  But the attackers have had plenty of time to dig in, and we will likely see a continued increase in breaches for the next couple of years.  Much more work is to be done, but this is the most visible battleground, and organizations are committed to getting the risks of this type of attack under reasonable control.  Such investment will fuel the development of better security technologies over time.  Currently, this is the big battlefield (one basic control is sketched below).
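
    As one basic, illustrative control for this category (my own sketch, not from the original post), storing only salted, slow hashes of credentials limits the damage when an account database is stolen; the hypothetical helpers below use only Python's standard library:

# Illustrative mitigation for the credential-theft side of this category:
# store salted, slow hashes rather than passwords, so a stolen database
# does not directly expose user credentials.  Sketch only.
import hashlib
import hmac
import os
from typing import Optional, Tuple


def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest


def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)  # constant-time comparison


if __name__ == "__main__":
    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("guess", salt, stored))                         # False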

     

    Level 3 – Monitor and Manipulate:

    • TYPE: An Integrity (I) type of attack, seeking to gain sufficient access to not only copy information but to also tamper with data and transactions for the attackers benefit.  In most cases this requires long-term internal access and a deep understanding of processes
    • POPULARITY: This is the next great category of attacks, which has yet to materialize or, at the very least, to make the news in sufficient quantity.  There is great value in being on the inside, watching who operates and how things work, and then selectively altering data and operations.  This is not a quick, one-time hit-and-run attack; rather, it is a strategic maneuver against an adversary that can benefit the attacker in a number of ways over time.  We are seeing top-echelon players such as nation states, organized criminals, and advanced threat groups mount complex campaigns to establish a persistent capability within target organizations
    • PURPOSE: When you become an 'insider' to a network, you can increase trust, conduct surveillance, and manipulate communications and transactions.  This might be employed as part of insider economic espionage, fraudulently tampering with financial transactions, undermining military defense structures, feeding misinformation to intelligence agencies, or causing a massive and cascading critical infrastructure outage.  Think what spies can do in traditional cloak-and-dagger situations; this is pretty much the same.  Regardless of size, this is using the target's electronic infrastructure against itself
    • IMPACT: Potentially catastrophic on the long-term geopolitical front, but likely to remain discreet in the short term.  Attacks against financial institutions will be severe, but this type of attack takes time, patience, and resources to pull off, so the frequency will likely be low
    • HISTORY: Only hints have been seen by the public, limited to some cyberwarfare activities between feuding nations, advanced monitoring of social tools, and government sponsored surveillance and manipulation of communication infrastructures.  The future of this category is largely unwritten
    • COMPLEXITY: Attackers must be technically savvy and, in most cases, well funded.  The mindset is also different from the other categories: threat agents must have patience, enduring commitment, durable resources, an understanding of how the target works, the vision to connect access with long-term goals, and the expertise to remain stealthy over time
    • SECURITY: Off-the-shelf technology is nowhere close to being able to address this threat.  Most organizations are not even looking for this type of attack, as its footprint is largely passive.  At best, a lucky detection might lead to eventual eviction, but a dedicated attacker would likely be able to return after making adjustments.  The best defense is still paranoid people, well funded to explore custom solutions, who have the right mindset (likely those who played such games before the Internet); one basic integrity control is sketched below
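
    To make the integrity theme concrete, here is a small, hypothetical sketch (mine, not from the original post) of one basic building block: authenticating records with an HMAC whose key is held outside the data store, so silent manipulation of a stored transaction becomes detectable:

# Illustrative sketch: integrity attacks tamper with data in place, so one
# basic control is to authenticate records with an HMAC keyed separately
# from the data store, making silent modification detectable.
import hashlib
import hmac
import json

SIGNING_KEY = b"keep-this-key-outside-the-data-store"  # hypothetical key handling


def sign_record(record: dict) -> str:
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()


def is_untampered(record: dict, tag: str) -> bool:
    return hmac.compare_digest(sign_record(record), tag)


if __name__ == "__main__":
    txn = {"account": "12345", "amount": 250.00, "currency": "USD"}
    tag = sign_record(txn)
    txn["amount"] = 25000.00  # an attacker quietly manipulates the transaction
    print(is_untampered(txn, tag))  # False -- the change is detected

    A sketch like this only flags tampering; real defenses against this category also require detecting the long-term access that makes tampering possible in the first place.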

     

    Level 4 – Own and Obliterate:

    • TYPE: The triple threat of a Confidentiality, Integrity, and Availability (C/I/A) attack determined to destroy an organization or capability with no reasonable chance of recovery.  The goal is obliteration and permanent cessation
    • POPULARITY: Not seen yet.  It is reasonable to suspect that well-funded programs in dark places are working on offensive cyberwarfare capabilities.  If such technology or tools ever become available to cybercriminals, they will be used for extortion and ransom on a global scale never before seen
    • PURPOSE: Cyber is the fifth domain of warfare.  Being able to destroy an opponent with no ability to recover is checkmate in the cyber world
    • IMPACT: Total.  The intent is clear.  Destroy all critical technology, undermine relationships and morale, deplete financial resources, sabotage services, and render all capability to recover or rebuild to a viable state null and void.  Burnt to the ground, ashes. “Abandon all hope…”, you get the idea
    • HISTORY: To be written, as this type of attack has not yet been unleashed.  Before cyber, the salting of fields and scorched-earth policies in war attempted the same result
    • COMPLEXITY: Ultimate.  Modern compute systems are designed for resilience, redundancy, and recovery.  Realistically, such attacks on highly dynamic, heterogeneous compute environments are difficult to orchestrate with any confidence.  Mature organizations have multiple communication paths, data backups, disaster recovery processes, business continuity planning, and knowledgeable people supporting the technology (a small verification sketch follows this list).  To succeed at this type of attack, all of these must be known, poisoned, undermined, or made irrelevant.  Administrative power and oversight are required.  The normal security controls must be bypassed, and a destruction plan must take into account architecture, business operations, partnerships, social structures, legal agreements, and a myriad of other complexities.  Impossibly difficult, until someone actually figures it out.  Such attacks are custom and require regular updating to remain current
    • SECURITY: There is no holistic security for such a class of attack.  Cybersecurity is a piecemeal affair focused on reasonable, likely, and relevant events.  This is beyond.  It will be the endgame for some. 
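
    One modest habit that supports the resilience measures listed under COMPLEXITY above (my own sketch, not from the original post) is routinely verifying backup copies against a manifest of recorded checksums, so recovery plans are not discovered to be broken only after an attack:

# Illustrative sketch: verify that backup copies still match recorded
# checksums, one small piece of recovery readiness.
import hashlib
from pathlib import Path
from typing import Dict, List


def file_sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_backups(manifest: Dict[str, str], backup_dir: Path) -> List[str]:
    """Return names of backup files that are missing or whose contents
    differ from the checksum recorded in the manifest."""
    problems = []
    for name, expected in manifest.items():
        copy = backup_dir / name
        if not copy.exists() or file_sha256(copy) != expected:
            problems.append(name)
    return problems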

     

    Today we are under attack, and the threats are increasing.  We have successfully survived the first level, the Denial-of-Service category of attacks, which are commonplace; security competency has reached a sufficient level to manage the ongoing risks.  We are now in the struggle with the next level of attacks, Data Theft and Exposure, and are witnessing tremendous leakage of identities, private and confidential data, transactions, communications, and financial accounts.  The industry has yet to reach maturity in addressing these threats and managing the risks.

     

    As we descend to lower levels, the challenges get tougher, legacy problems still remain and compound, and overall solutions become more complex.  Mitigation controls differ greatly and previous tools have little relevance to new categories of attack.  New security instruments must be developed and integrated.  Today’s pain and inconvenience will seem tepid compared to emerging categories. 

    Will we collectively be ready as Monitor and Manipulate attacks emerge?  I wager we will not, as time is short and we are on the cusp of entering this realm.  But this is a learning game.

     

    The world of security has a chance to improve, get smarter, predict and anticipate future threats, and prepare for the inevitable.  Defensive capabilities must accelerate to keep pace with attacker innovation.  Security must take the initiative and drive stronger technology that is more resistant to compromise.  People must also upgrade, behaving in more secure ways and better understanding the risks.

    I, for one, have hope, but the window of opportunity is shrinking.  When we reach the bottom and see organizations destroyed, it will open a Pandora's box with cascading effects across technology and society.  Nobody wants to look back and wonder why we did not see this coming and act.

     

    Twitter: @Matt_Rosenquist

    IT Peer Network: My Previous Posts

    LinkedIn: http://linkedin.com/in/matthewrosenquist

    Read more >