Recent Blog Posts

SC14: When to Launch an Internal HPC Cluster

As SC14 approaches, we have invited industry experts to share their views on high performance computing and life sciences. Below is a guest post from Eldon M. Walker, Ph.D., Director, Research Computing at Cleveland Clinic’s Lerner Research Institute. During SC14, Eldon will be sharing his thoughts on implementing a high performance computing cluster at the Intel booth (#1315) on Tuesday, Nov. 18, at 10:15 a.m. in the Intel Theater.


When data analyses grind to a halt due to insufficient processing capacity, scientists cannot be competitive. When we hit that wall at the Cleveland Clinic Lerner Research Institute, my team began consideration of the components of a solution, the cornerstone of which was a high performance computing (HPC) deployment.

 

In the past 20 years, the Cleveland Clinic Lerner Research Institute has progressed from a model of wet lab biomedical research that produced modest amounts of data to a scientific data acquisition and analysis environment that puts profound demands on information technology resources. This manifests as the need for two infrastructure components designed specifically to serve biomedical researchers operating on large amounts of unstructured data:

 

  1. A storage architecture capable of holding the data in a robust way
  2. Sufficient processing horsepower to enable the data analyses required by investigators

 

Deployment of these resources assumes the availability of:

 

  1. A data center capable of housing power- and cooling-hungry hardware
  2. Network resources capable of moving large amounts of data quickly

 

These components were available at the Cleveland Clinic in the form of a modern, Tier 3 data center and ubiquitous 10 Gb/sec and 1 Gb/sec network service.

 

The storage problem was brought under control by way of a 1.2-petabyte grid storage system in the data center that replicates to a second 1.2-petabyte system in the Lerner Research Institute server room. The ability to store and protect the data was the required first step in maintaining the fundamental capital (data) of our research enterprise.

 

It was equally clear to us that the type of analyses required to turn the data into scientific results had overrun the capacity of even high-end desktop workstations and single-unit servers with up to four processors. Analyses simply could not be run, or ran too slowly to be practical. We had an immediate unmet need in several data processing scenarios:

 

  1. DNA sequence analysis: whole-genome sequencing (DNA methylation), ChIP-seq data (protein–DNA interactions), and RNA-seq data (alternative RNA processing studies)
  2. Finite element analysis: biomedical engineering modeling of the knee, ankle, and shoulder
  3. Natural language processing: analysis of free-text electronic health record notes

 

There was absolutely no question that an HPC cluster was the proper way to provide the necessary horsepower that would allow our investigators to be competitive in producing publishable, actionable scientific results. While a few processing needs could be met using offsite systems where we had collaborative arrangements, an internal resource was appropriate for several reasons:

 

  1. Some data analyses operated on huge datasets that were impractical to transport between locations.
  2. Some data must stay inside the security perimeter.
  3. Developing techniques and pipelines on outside systems would depend on external systems administrators and change control processes that we found cumbersome; based on considerable experience attempting to leverage outside resources, the sheer flexibility of an internal resource built with responsive industry partners was very compelling.
  4. Given that we already had the data center, network, and system administration resources, and given the modest price point and commodity nature of much of the HPC hardware (as revealed by our due diligence process), the economics of obtaining an HPC cluster were practical.

 

Given the realities we faced, and after a period of consultation with vendors, we embarked on a system design in collaboration with Dell and Intel. The definitive proof of concept from the initial rollout of our HPC solution is that we can now run analyses that were previously impractical or impossible.

 

What questions do you have? Are you at the point of considering an internal HPC cluster?

Read more >

SC14: The Bleeding Edge of Medicine

As SC14 approaches, we have invited industry experts to share their views on high performance computing and life sciences. Below is a guest post from Karl D’Souza, senior user experience specialist at Dassault Systèmes Simulia Corp. Karl will be speaking about the Living Heart Project noted below during SC14 at the Intel booth (#1315) on Wednesday, Nov. 19, at 12:15 p.m. in the Intel Theater and 1 p.m. in the Intel Community Hub.


Computer Aided Engineering (CAE) has become pervasive in the design and manufacture of everything from jumbo jets to razor blades, transforming the product development process to produce more efficient, cost-effective, safe, and easy-to-use products. A central component of CAE is the ability to realistically simulate the physical behavior of a product in real-world scenarios, which greatly facilitates understanding and innovation.

 

Application of this advanced technology to healthcare has profound implications for society, promising to transform the practice of medicine from observation-driven to understanding-driven. However, a lack of definitive models, processes, and standards has limited its application, and development has remained fragmented across research organizations around the world.

 

In January 2014, Dassault Systèmes took the first step to change this and launched the “Living Heart Project” as a translational initiative to partner with cardiologists, researchers, and device manufacturers to develop a definitive, realistic simulation of the human heart. Through this accelerated approach, the first commercial model-centric, application-agnostic, multiphysics whole-heart simulation has been produced.

 

Since cardiovascular disease is the number one cause of morbidity and mortality across the globe, Dassault Systèmes saw the Living Heart Project as the best way to address the problem. Although there is a plethora of medical devices, drugs, and interventions, physicians face the problem of determining which device, drug, or intervention to use on which patient. Oftentimes, invasive procedures are needed to truly understand what is going on inside a patient.

 

CAE and the Living Heart Project will enable cardiologists to take an image (MRI, CT, etc.) of a patient’s heart and reconstruct it as a 3D model, thereby creating a much more personalized form of healthcare. The doctor can see exactly what is happening in the patient’s heart and make a more informed decision about how to treat that patient most effectively.

 

If you will be at SC14 next week, I invite you to join me when I present an overview of the project, the model, results, and implications for personalized healthcare. Come by the Intel booth (1315) on Wednesday, Nov. 19, for a presentation at 12:15 p.m. in the Intel Theater immediately followed by a Community Hub discussion at 1 p.m.

 

What questions do you have about computer aided engineering?

Read more >

Meet Me @ “The Hub” – Discover, Collaborate, and Accelerate at SC14

“The Hub” is a social invention created by Intel for you to socialize and collaborate on ideas that can help drive your discoveries faster. It is located in the Intel booth (1315) at SC14.

Parallelization and vectorization are just not that easy, and they are even harder if you try to do them alone. At “The Hub,” you will have the opportunity to listen, learn, and share what you and your peers have experienced. The goal is to help you create, improve, or expand your social network of peers engaged in similar optimization and algorithm development. Intel will be providing discussion leaders to get the conversation started on various topics, including OpenMP, MKL, HPC, Lustre, and fabrics. We will also hold discussions on the intersection of HPC and specific vertical segments (life sciences, oil and gas, etc.), as well as special events in the collaboration hub, including a book signing with James Reinders and Jim Jeffers and a discussion on women in science and technology.

If you’re heading to the show, stop by and “Meet Us @ The Hub” in booth 1315 for a challenging and intellectual opportunity to talk with your peers about parallelization, vectorization and optimization.

To see the full list of Hub activities, times and topics, check out our schedule.

Read more >

Bringing the Yin and Yang to Supercomputing

To complete the representation of teams from across the globe, Professor An Hong of the University of Science and Technology of China (USTC) has put together a team composed of Master's and PhD students and professors from four of China’s prestigious universities. We caught up with her in the midst of preparing for HPC China14, the PAC14 parallel challenges, and the Student Cluster Competition at SC14.

Q1: The growth of the supercomputing industry in China is certainly obvious from the Tianhe-2 and Tianhe-1 supercomputers, which are currently ranked #1 and #14 on the Top 500 supercomputer list (http://www.top500.org/list/2014/06/). What impact has this had on computer science at USTC?
A1: With the growth of the supercomputing industry in China, it is apparent that computer science education and research, not only at USTC but also at other universities in China, can draw on abundant supercomputing resources. But this is not to imply that HPC education and research have reached the top level; being ranked #1 on the TOP500 list may say more about money than about technological progress. So, as computer science professors in a developing China, we should help students understand the value of pursuing higher computer science education.

 

Q2: What is the meaning or significance of the team name you have chosen (“Taiji”)?
A2: In Chinese philosophy, yin and yang, often written “yin-yang” or “yin yang”, are concepts used to describe how apparently opposite or contrary forces are actually complementary, interconnected, and interdependent in the natural world, and how they give rise to each other as they interrelate.
Taiji (☯) comes about through the balance of yin and yang. These complementary forces interact to form a dynamic system, much as 0 and 1 transform into and combine with each other to form a computer system.

 

Taiji’s symbol is composed of yin (black) and yang (white): black (yin) can denote “0” and white (yang) can denote “1”, and all things in the world can then be derived from “0” and “1”. Taiji’s philosophy, which originated in ancient China about seven thousand years ago, may inspire us in how to build a balanced computer system, from bits to gates and the higher levels beyond.

 

Q3: What are the names and titles of the other team members who will be participating in the PUCC?
A3:

 

Name | Title | Organization | Activity participation
AN Hong | Professor | University of Science and Technology of China | Team Captain, code optimization challenge
Liang Weihao | Master Student | University of Science and Technology of China | Code optimization challenge
Chen Junshi | PhD Student | University of Science and Technology of China | Code optimization challenge
Li Feng | PhD Student | University of Science and Technology of China | Code optimization challenge
Shi Xuanhua | Professor | Huazhong University of Science and Technology | Trivia challenge
Jin Hai | Professor | Huazhong University of Science and Technology | Trivia challenge
Lin Xinhua | Professor | Shanghai Jiao Tong University | Trivia challenge
Liang Yun | Professor | Peking University | Trivia challenge

 

Q4: Why is participating in the PUCC important to USTC?
A4: Most of the team’s members come from USTC. USTC’s mission has been to “focus on frontier areas of science and technology and educate top leaders in science and technology for China and the world”. Central to its strategy has been the combination of education and research, as well as an emphasis on quality rather than quantity. Led by the most renowned Chinese scientists of the time, USTC set up a series of programs creatively encompassing frontier research and development of new technology.

 

The PUCC is a great and interesting activity, not only giving us the opportunity to showcase Intel’s technology innovation but also drawing attention to science endeavors for Chinese children and youth. So, we appreciate the value of the PUCC to our professors and students.

 

Q5: The Supercomputing conference is using the theme “HPC Matters”. Can you tell me why you think HPC matters?
A5: All the HPCers come together to solve some of the critical problems that matter to everyone in the world.

 

Q6: How will your team prepare for the PUCC?
A6: We have just participated in the Parallel Application Challenge 2014 (PAC2014), held at HPC China14 on Nov. 6-8 in Guangzhou, China, and organized by the China Computer Federation Technical Committee of HPC and Intel. This competition required teams to optimize a parallel application provided by the organizers on Intel Xeon and Xeon Phi computing platforms. Through the competition, we came to deeply understand much of Intel’s innovation in manycore and multicore technology.

 

For more on the Intel Parallel Universe Computing Challenge, visit the home page.

Read more >

Don’t Miss HPC Luminary, Jack Dongarra, at Intel’s SC14 Booth in New Orleans

If you plan to attend SC14 in New Orleans, you’re in for a treat.

 

Jack Dongarra, one of the HPC community’s iconic speakers, will be holding forth at the Intel booth (#1315), Monday, November 17 at 7:10 pm, on one of his current favorite topics – MAGMA (Matrix Algebra on GPU and Multicore Architectures). His talk is titled, “The MAGMA Project: Numerical Linear Algebra for Accelerators.”

 

To say Jack has the chops to speak on the subject is a major understatement. In addition to being an enthusiastic, informative and entertaining speaker, Jack is a Distinguished Professor of Computer Science at the University of Tennessee (UTK) as well as the director of UTK’s Innovative Computing Laboratory (ICL). And that’s just for openers.

 

You can check out his extensive affiliations and contributions to the design of open source software packages and systems on his LinkedIn page. He has worked on everything from LINPACK, BLAS and MPI to the latest projects that he and his team are developing at ICL, which include PaRSEC, PLASMA, and, of course, MAGMA.

 

They all fall within his rather broad specialty area which includes numerical algorithms in linear algebra, parallel computing, the use of advanced computer architecture, programming methodology, and tools for parallel computers.

 

Jack was part of the team that developed LAPACK and ScaLAPACK. This same team is responsible for designing and implementing the collection of next generation linear algebra libraries that make up MAGMA.

Designed for heterogeneous coprocessor and GPU-based architectures, MAGMA re-implements the functions of LAPACK and the BLAS optimized for the hybrid platform. This allows computational scientists to effortlessly port any software components that rely on linear algebra.
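To make that concrete, here is a minimal sketch of what such a port can look like, assuming MAGMA's LAPACK-style host-interface routine magma_dgetrf (header names and linking details vary between MAGMA releases); the call mirrors LAPACK's dgetrf, so moving existing code over is largely a matter of swapping the routine name.

```c
/* Minimal sketch: LU factorization with MAGMA's LAPACK-style C interface.
 * Assumes the magma_init/magma_dgetrf/magma_finalize API of MAGMA 1.x/2.x;
 * the header is "magma_v2.h" in recent releases ("magma.h" in older ones). */
#include <stdio.h>
#include <stdlib.h>
#include "magma_v2.h"

int main(void)
{
    magma_init();

    magma_int_t n = 4096, lda = n, info = 0;
    double *A = malloc((size_t)lda * n * sizeof(double));
    magma_int_t *ipiv = malloc(n * sizeof(magma_int_t));

    /* Fill A (column-major) with a diagonally dominant test matrix. */
    for (magma_int_t j = 0; j < n; ++j)
        for (magma_int_t i = 0; i < n; ++i)
            A[i + j * lda] = (i == j) ? (double)n : 1.0;

    /* Same semantics as LAPACK's dgetrf: A is overwritten by its LU factors,
     * while MAGMA schedules the work across host cores and the accelerator. */
    magma_dgetrf(n, n, A, lda, ipiv, &info);
    printf("magma_dgetrf returned info = %lld\n", (long long)info);

    free(ipiv);
    free(A);
    magma_finalize();
    return 0;
}
```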

 

One of the main payoffs of using MAGMA is that it enables applications to fully exploit the power of heterogeneous systems composed of multicore and many-core CPUs and coprocessors. MAGMA allows you to leverage today’s advanced computational capabilities to realize the fastest possible time to an accurate solution within given energy constraints.


A New Intel Parallel Computing Center

In September of this year, UTK’s Innovative Computing Laboratory (ICL) became the newest Intel Parallel Computing Center (IPCC). The objective of the ICL IPCC is the development and optimization of numerical linear algebra libraries and technologies for applications, while tackling current challenges in heterogeneous Intel Xeon Phi coprocessor-based high performance computing.

 

In collaboration with Intel’s MKL team, the IPCC at ICL will modernize the popular LAPACK and ScaLAPACK libraries to run efficiently on current and future manycore architectures, and will disseminate the developments through the open source MAGMA MIC library.

 

Beating the Bottlenecks

This is good news for the members of the HPC community who will shortly be gathering in force in New Orleans. As Dongarra will undoubtedly point out during his presentation, by combining the strengths of different architectures, MAGMA overcomes bottlenecks associated with just multicore or just GPUs, to significantly outperform corresponding packages for any of these homogeneous components taken separately.

 

MAGMA’s one-sided factorizations outperform state-of-the-art CPU libraries on high-end multi-socket, multicore nodes – for example, nodes with up to 48 modern cores. The benefits for two-sided factorizations (the bases for eigenvalue-problem and SVD solvers) are even greater: performance can exceed 10X that of systems with 48 modern CPU cores.

 

Last September, ICL also announced the availability of MAGMA MIC 1.2. This release provides implementations of MAGMA’s one-sided (LU, QR, and Cholesky) and two-sided (Hessenberg, bi- and tridiagonal reduction) dense matrix factorizations, as well as linear and eigenproblem solvers for Intel Xeon Phi coprocessors.

 

MAGMA has been developed in C for multicore/manycore systems enhanced with coprocessors. Within a node, MAGMA uses pthreads; MPI is used for inter-node communication. Also included are a MAGMA port to OpenCL and a pragma-based port suitable for Intel MIC-based architectures.

 

Funding for the development is being provided by DOE and NSF, as well as by industry including Intel, NVIDIA, AMD, MathWorks and Microsoft Research.

 

We can’t second guess what Jack will have to say, but based on the overview on the MAGMA web site, certain themes are likely to surface.

 

For example, it’s evident that the design of future microprocessors and large HPC systems will be heterogeneous and hybrid in nature. They will rely on the integration of many-core CPU technology with special-purpose hardware, accelerators, and coprocessors like the Intel Xeon Phi coprocessor. And we’re not just talking about high-end machines here – everything from laptops to supercomputers and massive clusters will be built from a mix of heterogeneous components.

 

Jack and his crew at the University of Tennessee are playing a major role in making that happen.

 

So stop by the Intel booth (#1315) at 7:10 pm on Monday, November 17, and let Jack Dongarra weave you a tale of how HPC is moving into overdrive with the help of advanced aids like MAGMA.

 

You’ll be hearing HPC history in the making.


For a schedule of all Intel Collaboration Hub and Theater Presentations at SC14, visit this blog.

Read more >

Blueprint: SDN’s Impact on Data Center Power/Cooling Costs

This article originally appeared on Converge Digests Monday, October 13, 2014

 

The growing interest in software-defined networking (SDN) is understandable. Compared to traditional static networking approaches, the inherent flexibility of SDN complements highly virtualized systems and environments that can expand or contract in an efficient, business-oriented way. That said, flexibility is not the main driver behind SDN adoption. Early adopters and industry watchers cite cost as a primary motivation.

 

 

 

SDN certainly offers great potential for simplifying network configuration and management, and raising the overall level of automation. However, SDN will also introduce profound changes to the data center. Reconfiguring networks on the fly introduces fluid conditions within the data center.

 

 

How will the more dynamic infrastructures impact critical data center resources – power and cooling?

 

In the past, 20 to 40 percent of data center resources were typically idle at any given time and yet still drawing power and dissipating heat. As energy costs have risen over the years, data centers have had to pay more attention to this waste and look for ways to keep the utility bills within budget. For example, many data centers have bumped up the thermostat to save on cooling costs.

 

 

These types of easy fixes, however, quickly fall short in the data centers associated with highly dynamic infrastructures. As network configurations change, so do the workloads on the servers, and network optimization must therefore take into consideration the data center impact.

 

 

Modern energy management solutions equip data center managers to solve this problem. They make it possible to see the big picture for energy use in the data center, even in environments that are continuously changing.  Holistic in nature, the best-in-class solutions automate the real-time gathering of power levels throughout the data center as well as server inlet temperatures for fine-grained visibility of both energy and temperature. This information is provided by today’s data center equipment, and the energy management solutions make it possible to turn this information into cost-effective management practices.

 

 

The energy management solutions can also give IT intuitive, graphical views of both real-time and historical data. The visual maps make it easy to identify and understand the thermal zones and energy usage patterns for a row or group of racks within one or multiple data center sites.

 

 

Collecting and analyzing this information makes it possible to evolve very proactive practices for data center and infrastructure management. For example, hot spots can be identified early, before they damage equipment or disrupt services. Logged data can be used to optimize rack configurations and server provisioning in response to network changes or for capacity planning.

 

 

Some of the same solutions that automate monitoring can also introduce control features. Server power capping can be introduced to ensure that any workload shifts do not result in harmful power spikes. Power thresholds make it possible to identify and adjust conditions to extend the life of the infrastructure.

 

 

To control server performance and quality of service, advanced energy management solutions also make it possible to balance power and server processor operating frequencies. The combination of power capping and frequency adjustments gives data center managers the ability to intelligently control and automate the allocation of server assets within a dynamic environment.
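As a rough illustration of the kind of control loop such tools automate, the sketch below polls a rack's power draw and applies or releases per-node caps around a fixed budget. It is purely illustrative: read_rack_power_watts(), set_node_power_cap(), and clear_node_power_cap() are hypothetical placeholders (simulated here) for whatever DCMI/IPMI or vendor telemetry API a real energy management product would use.

```c
/* Illustrative sketch only: a naive monitor-and-cap loop. The three helper
 * functions are hypothetical stand-ins for a real DCMI/IPMI or vendor API;
 * here they simply simulate a rack whose draw creeps upward until capped. */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define RACK_BUDGET_WATTS 8000.0  /* example rack-level power budget       */
#define NODE_CAP_WATTS    250     /* example per-node cap when throttling  */
#define NODES_PER_RACK    32

static double g_draw = 7600.0;                      /* simulated rack draw */
static double read_rack_power_watts(void) { g_draw += 100.0; return g_draw; }
static void set_node_power_cap(int node, int watts) { (void)node; (void)watts; g_draw -= 40.0; }
static void clear_node_power_cap(int node) { (void)node; }

int main(void)
{
    bool capped = false;

    for (int iter = 0; iter < 20; ++iter) {
        double draw = read_rack_power_watts();

        if (draw > RACK_BUDGET_WATTS && !capped) {
            /* Workload shifts pushed the rack over budget: cap every node
             * so the rack stays within its power and cooling envelope.   */
            for (int n = 0; n < NODES_PER_RACK; ++n)
                set_node_power_cap(n, NODE_CAP_WATTS);
            capped = true;
        } else if (draw < 0.9 * RACK_BUDGET_WATTS && capped) {
            /* Comfortable margin again: release the caps so processor
             * frequency and performance can recover.                    */
            for (int n = 0; n < NODES_PER_RACK; ++n)
                clear_node_power_cap(n);
            capped = false;
        }

        printf("rack draw: %6.0f W  capped: %s\n", draw, capped ? "yes" : "no");
        sleep(1);  /* poll interval; real tools sample far more finely */
    }
    return 0;
}
```

A production solution would of course combine such caps with the frequency controls described above and with thermal data, rather than relying on a single static budget.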

 

 

Early deployments are validating the potential for SDN, but data center managers should take time to consider the indirect and direct impacts of this or any disruptive technology so that expectations can be set accordingly. SDN is just one trend that puts more pressure on IT to be able to do more with less.

 

 

Management expects to see costs go down; users expect to see 100% uptime for the services they need to do their jobs. More than ever, IT needs the right tools to oversee the resources they are being asked to deploy and configure more rapidly. They need to know the impacts of any change on the resource allocations within the data center.

 

 

IT teams planning for SDN must also consider the increasing regulations and availability restrictions relating to energy in various locations and regions. Some utility companies are already unable to meet the service levels required by some data centers, regardless of price. Over-provisioning can no longer be considered a practical safety net for new deployments.

 

 

Regular evaluations of the energy situation in the data center should be a standard practice for technology planning. Holistic energy management solutions give data center managers many affordable tools for those efforts. Today’s challenge is to accurately assess technology trends before any pilot testing begins, and leverage an energy management solution that can minimize the pain points of any new technology project such as SDN.

Read more >

Bringing Conflict-Free Technology to the Enterprise



In January 2014, Intel accomplished its goal to manufacture microprocessors that are DRC conflict free for tantalum, tin, tungsten, and gold.

 

The journey toward reimagining the supply chain is long and arduous; it’s a large-scale, long-term commitment that demands precise strategy. For us, it was an extensive five-year plan of collecting and analyzing data, building an overarching business goal, educating and empowering supply chain partners, and implementing changes guaranteed to add business value for years to come. But we committed ourselves to these efforts because of their global impact and our sense of responsibility. As a result, the rewards have outweighed the work by leaps and bounds.

 

Cutting Ties with Conflict Minerals

 

The Democratic Republic of Congo (DRC) is the epicenter of one of the most brutal wars of our time; since 1998, 5.4 million lives have been lost to the ongoing conflict, half of them children five years old or younger. The economy of the DRC relies heavily on the mining sector, while the rest of the world relies heavily on the DRC’s diamonds, cobalt ore, and copper. The stark reality is that the war in the Eastern Congo has been fueled by the smuggling of coltan and cassiterite (ores of tantalum and tin, respectively). This means that most of the electronic devices we interact with on a daily basis are likely powered by conflict minerals.

 

One of the main reasons most companies are dissuaded from pursuing an initiative of this scope is that the supply chain is one of the most decentralized units in the business. Demanding accountability from such a complex system is a sizeable endeavor. Intel is one of the first enterprise tech companies to pursue conflict-free materials, but the movement is starting to gain traction in the greater tech community as customers demand more corporate transparency.

 

Getting the Enterprise Behind Fair Tech

 

For Bas van Abel, CEO of Fairphone, there is already sizeable consumer demand for fair technology, but there remains a distinct need to prove that a market for it exists. Fairphone is a smartphone featuring an open design built with conflict-free minerals. The company also boasts fair wages and labor practices for its supply chain workforce. When van Abel crowd-funded the first prototype, his goal was to pre-sell 5,000 phones; within three weeks, he had sold 10,000. It’s only a matter of time before awareness gains a foothold and the general public starts demanding conflict-free minerals.

 


We chose to bring the conflict-free initiative to our supply chain because funding armed groups in the DRC was no longer an option. Our hope is that other enterprises will follow suit in analyzing their own supply chains. If you want to learn more about how we embraced innovation by examining our own corporate responsibility and redefining how we build our products, you can read the full brief here.

 

To continue the conversation, please follow us at @IntelITCenter or use #ITCenter.

Read more >

SC14: Life Sciences Research Not Just for Workstations Anymore

As SC14 approaches, we have invited industry experts to share their views on high performance computing and life sciences. Below is a guest post from Ari E. Berman, Ph.D., Director of Government Services and Principal Investigator at BioTeam, Inc. Ari will be sharing his thoughts on high performance infrastructure and high speed data transfer during SC14 at the Intel booth (#1315) on Wednesday, Nov. 19, at 2 p.m. in the Intel Community Hub and at 3 p.m. in the Intel Theater.


There is a ton of hype these days about Big Data, both in what the term actually means, and what the implications are for reaching the point of discovery in all that data.

 

The biggest issue right now is the computational infrastructure needed to get to that mythical Big Data discovery place everyone talks about. Personally, I hate the term Big Data. The term “big” is very subjective and in the eye of the beholder. It might mean 3PB (petabytes) of data to one person, or 10GB (gigabytes) to someone else.

 

From my perspective, the thing that everyone is really talking about with Big Data is the ability to take the sum total of data that’s out there for any particular subject, pool it together, and perform a meta-analysis on it to more accurately create a model that can lead to some cool discovery that could change the way we understand some topic. Those meta-analyses are truly difficult and, when you’re talking about petascale data, require serious amounts of computational infrastructure that is tuned and optimized (also known as converged) for your data workflows. Without properly converged infrastructure, most people will spend all of their time just figuring out how to store and process the data, without ever reaching any conclusions.

 

Which brings us to life sciences. Until recently, life sciences and biomedical research could really be done using Excel and simple computational algorithms. Laboratory instrumentation didn’t create that much data at a time, and it could be managed with simple, desktop-class computers and everyday computational methods. Sure, the occasional group was able to create enough data to require some mathematical modeling or advanced statistical analysis or even some HPC, and molecular simulations have always required a lot of computational power. But, in the last decade or so, the pace of advancement of laboratory equipment has left a large swath of biomedical research scientists overwhelmed by the amount of data being produced.

 

The decreased cost and increased speed of laboratory equipment, such as next-generation sequencers (NGS) and high-throughput high-resolution imaging systems, has forced researchers to become very computationally savvy very quickly. It now takes rather sophisticated HPC resources, parallel storage systems, and ultra-high speed networks to process the analytics workflows in life sciences. And, to complicate matters, these newer laboratory techniques are paving the way towards the realization of personalized medicine, which carries the same computational burden combined with the tight and highly subjective federal restrictions surrounding the privacy of personal health information (PHI).  Overcoming these challenges has been difficult, but very innovative organizations have begun to do just that.

 

I thought it might be useful to very briefly discuss the three major trends we see having a positive effect on life sciences research:

 

1. Science DMZs: There is a rather new movement toward the implementation of specialized research-only networks that prioritize fast and efficient data flow over security (while still maintaining security), also known as the Science DMZ model (http://fasterdata.es.net). These implementations are making it easier for scientists to get around tight enterprise networking restrictions without violating their organizations’ security policies, so that scientists can move their data effectively without angering their compliance officers.


2. Hybrid Compute/Storage Models: There is a huge push to move towards cloud-based infrastructure, but organizations are realizing that too much persistent cloud infrastructure can be more costly in the long term than local compute. The answer is the implementation of small local compute infrastructures to handle the really hard problems and the persistent services, hybridized with public cloud infrastructures that are orchestrated to be automatically brought up when needed, and torn down when not needed; all managed by a software layer that sits in front of the backend systems. This model looks promising as the most cost-effective and flexible method that balances local hardware life-cycle issues with support personnel, as well as the dynamic needs of scientists.


3. Commodity HPC/Storage: The biggest trend in life sciences research is the push toward low-cost, commodity, white-box infrastructures for research needs. Life sciences has not, for the most part, reached the sophistication level that requires true capability supercomputing; thus, well-engineered capacity systems built from white-box vendors provide very effective computational and storage platforms for scientists to use in their research. This approach carries a higher support burden for the organization because many of the systems don’t come pre-built or supported, and thus require in-house expertise that can be hard to find and expensive to retain. But the cost balance of support vs. lifecycle management is worth it to most organizations.

 

Biomedical scientific research is the latest in the string of scientific disciplines that require very creative solutions to their data generation problems. We are at the stage now where most researchers spend a lot of their time just trying to figure out what to do with their data in the first place, rather than getting answers. However, I feel that the field is at an inflection point where discovery will start pouring out as the availability of very powerful commodity systems and reference architectures come to bear on the market. The key for life sciences HPC is the balance between effectiveness and affordability due to a significant lack of funding in the space right now, which is likely to get worse before it gets better. But, scientists are resourceful and persistent; they will usually find a way to discover because they are driven to improve the quality of life for humankind and to make personalized medicine a reality in the 21st century.

 

What questions about HPC do you have?

Read more >

Empowering Field Workers Through Mobility

The unique — often adverse — working conditions facing utility field workers require unique mobility solutions. Not only do workers in these roles spend the majority of their time on the road, but their work often takes them to places where the weather and terrain are less than hospitable. Despite the challenges facing this large mobile workforce, new tablets and other mobile devices are increasing productivity and reducing downtime for workers.

 

Field workers need a device that supports them whether they’re on the road or in the office. A recent RCR Wireless guest blogger summed up the needs of utility field workers by comparing them to the “front lines” of an organization:

 


Field workers are at the front lines of customer service … and therefore need to be empowered to better serve customers. They require mobile applications that offer easier access to information that resides in corporate data centers.

 

Previously, this “easy access” to data centers was limited to service center offices and some mobile applications and devices. Now, however, advances in tablet technology enable workers to take a mobile office with them everywhere they go.


Tough Tablets for Tough Jobs


With numerous tablets and mobile PCs on the market, it’s difficult to determine which mobile solution provides the best experience for the unique working conditions of field workers. In order to move through their work, these users need a device that combines durability, connectivity, security, and speed.

 

Principled Technologies recently reviewed an Apple iPad Air, a Samsung Galaxy Tab Pro 12.2, and a Motion Computing R12 to determine which tablet yields the most benefits for utility field workers. After comparing the devices’ performance in common scenarios field workers face on a daily basis, one tablet emerged as a clear favorite for this workforce.

 

While the iPad and Galaxy feature thin profiles and sleek frames, the Intel-powered Motion Computing R12 received a MIL-STD-810G impact resistance rating (a U.S. military standard) as well as international accreditation for dust and water resistance (an IP54 rating). The device also hit the mark with its biometric security features and hot-swappable 8-hour battery.

 

Communication between utility workers and dispatching offices is often the key to a successful work day. Among the three tablets, the Motion Computing R12 was the only device able to handle a Skype call and open and edit an Excel document simultaneously. This kind of multi-tasking ability works seamlessly on this tablet because it runs Microsoft Windows 8.1 natively on a fast Intel processor and also boasts 8 GB of RAM (compared to 1 GB in the iPad and 3 GB in the Galaxy).

 

At the end of the day, having the right device can lead to more work orders completed and better working conditions for field workers. Empowering field workers with the right tools can remove many of the technical hurdles they face, increasing productivity and reducing inefficiency.

 

To learn more about the Motion Computing R12, click here.

Read more >