Recent Blog Posts

On the Ground at SC14: Fellow Traveler Companies

Let’s talk about Fellow Travelers at SC14 – companies that Intel is committed to collaborating with in the HPC community. In addition to the end-user demos in the corporate booth, Intel took the opportunity to highlight a few more companies in the channel booth and on the Fellow Traveler tour.

 

Intel is hosting three different Fellow Traveler tours: Discovery, Innovation, and Vision. A tour guide leads a small group of SC14 attendees through the show floor to visit eight company booths (with a few call-outs to additional Fellow Travelers along the way). Yes, you wear an audio headset to hear your tour guide. And yes, you follow a flag around the show floor. On our 30-minute journey around the floor, my Discovery tour visited (official stops are bolded):

  • Supermicro: Green/power efficient supercomputer installation at the San Diego Supercomputer Center
  • Cycle Computing: Simple and secure cloud HPC solutions
  • ACE Computers: ACE builds customized HPC solutions, and customers include scientific research/national labs/large enterprises. The company’s systems handle everything from chemistry to auto racing and are powered by the Intel Xeon processor E5 v3. Fun fact: the company’s CEO is working on the next EPEAT standard for servers.
  • Kitware: ParaView (co-developed by Los Alamos National Laboratory) is an open-source, multi-platform, extensible application designed for visualizing large data sets.
  • NAG: A non-profit working on numerical analysis theory, they also take on private customers and have worked with Intel for decades on tuning algorithms for modern architectures. NAG’s code library is an industry standard.
  • Colfax: Offering training for parallel programming (over 1,000 trained so far).
  • Iceotope: Liquid cooling experts; their solutions offer better performance per watt than hybrid liquid/air cooling.
  • Huawei: Offering servers, clusters (they’re Intel Cluster Ready certified) and Xeon Phi coprocessor solutions.
  • Obsidian Strategics: Showcasing a high-density Lustre installation.
  • AEON: Offering fast and tailored Lustre storage solutions in a variety of industries including research, scientific computing and entertainment; they are currently architecting a Lustre storage system for the San Diego Supercomputer Center.
  • NetApp: Their booth highlighted NetApp’s storage and data management solutions. A current real-world deployment includes 55PB of NetApp E-Series storage that provides over 1TB/sec to a Lustre file system.
  • Rave Computer: The company showcased the RT1251 flagship workstation, featuring dual Intel Xeon processor E5-2600 series with up to 36 cores and up to 90MB of combined cache. It can also make use of the Intel Xeon Phi co-processor for 3D modeling, visualization, simulation, CAD, CFD, numerical analytics, computational chemistry, computational finance, and digital content creation.
  • RAID Inc: Demo included a SAN for use in big data, running the Intel Enterprise Edition of Lustre with OpenZFS support. RAID’s systems accelerate time to results while lowering costs.
  • SGI: Showcased the SGI ICE X supercomputer, the sixth generation in the product line and the most powerful distributed memory system on the market today. It is powered by the Intel Xeon processor E5 v3 and includes warm water cooling technology.
  • NCAR: Answering the question of how you refactor an entire climate code. NCAR, in collaboration with the University of Colorado Boulder, is an Intel Parallel Computing Center aiming to develop tools and knowledge that help improve the performance of CESM, WRF, and MPAS on Intel Xeon processors and Intel Xeon Phi coprocessors.


Intel Booth – Fellow Traveler Tours depart from the front right counter

 

After turning in my headset, I decided to check out the Intel Channel Pavilion next to Intel’s corporate booth. The Channel Pavilion has multiple kiosks (so many that the lineup rotated halfway through the show), each showcasing a demo with Intel Xeon and/or Xeon Phi processors and highlighting a number of products and technologies. Here’s a quick rundown:

  • Aberdeen: Custom servers and storage featuring Intel Xeon processors
  • Acme Micro: Solutions utilizing the Intel Xeon processor and Intel SSD PCIe cards
  • Advanced Clustering Technologies: Clustered solutions in 2U of space
  • AIC: Alternative storage hierarchy to achieve high bandwidth and low latency via Intel Xeon processors
  • AMAX: Many core HPC solutions featuring Intel Xeon processor E5-2600 v3 and Intel Xeon Phi coprocessors
  • ASA Computers: Einstein@Home uses an Intel Xeon processor-based server to search for weak astrophysical signals from spinning neutron stars
  • Atipa Technologies: Featuring servers, clustering solutions, workstations and parallel storage
  • Ciara: The Orion HF 620-G3 featuring the Intel Xeon processor E5-2600 v3
  • Colfax: Colfax Developer Training on efficient parallel programming for Xeon Phi coprocessors
  • Exxact Corporation: Accelerating simulation code up to 3X with custom Intel Xeon Phi coprocessor solutions
  • Koi Computers: Ultra Enterprise Class servers with the Intel Xeon processor E5-2600 v3 and a wide range of networking options
  • Nor-Tech: Featuring a range of HPC clusters/configurations and integrated with Intel, ANSYS, Dassault, Simula, NICE and Altair
  • One Stop Systems: The OSS 3U high density compute accelerator can utilize up to 16 Intel Xeon Phi coprocessors and connect to 1-4 servers

 

The Intel Channel Pavilion

 

Once completing the booth tours, I decided to head back to the Intel Parallel Computing Theater to listen to a few more presentations on how companies and organizations are putting these systems into action.

 

Joseph Lombardo, from the National Supercomputing Center for Energy and the Environment (NSCEE), stopped by the theater to talk about the new data center they’ve recently put into action, as well as their use of a data center from Switch Communications. The NSCEE faces several challenges: massive computing needs (storage and compute power), time-sensitive projects (those with governmental and environmental significance), and numerous complex workloads. In their Alzheimer’s research, the NSCEE compares the genomes of Alzheimer’s patients with those of healthy individuals. They worked with Altair and Intel on a system that reduces their runtime from 8 hours to 3 hours, while improving system manageability and extensibility.

 

Joseph Lombardo from the NSCEE

 

Then I listened in on Michael Klemm from Intel talking about offloading Python to the Intel Xeon Phi coprocessor. Python is a quick, high-productivity language (packages include IPython, NumPy/SciPy, and pandas) that can help compose scientific applications. Michael talked through the design principles for the pyMIC offload infrastructure: simple usage, a slim API, fast code, and keeping control in the programmer’s hands.
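
To make those principles concrete, here is a rough sketch of what an offload through a pyMIC-style stream API can look like. It is modeled on the published pyMIC examples, but treat the module, function, and kernel names (for example, the libdgemm.so library and dgemm_kernel entry point) as assumptions rather than the definitive API.

```python
# Hypothetical sketch of a pyMIC-style offload; names approximate the
# published examples and may not match the actual module exactly.
import numpy as np
import pymic  # assumed module name

device = pymic.devices[0]                     # first Xeon Phi coprocessor
stream = device.get_default_stream()
library = device.load_library("libdgemm.so")  # native kernel built for the coprocessor

m, n, k = 4096, 4096, 4096
a = np.random.random((m, k))
b = np.random.random((k, n))
c = np.zeros((m, n))

# Bind host arrays to device buffers, invoke the native kernel, and copy the
# result back -- data movement stays explicitly in the programmer's hands.
offl_a, offl_b, offl_c = stream.bind(a), stream.bind(b), stream.bind(c)
stream.invoke(library.dgemm_kernel, offl_a, offl_b, offl_c, m, n, k)
stream.sync()
offl_c.update_host()
stream.sync()
```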

 

Michael Klemm from Intel

 

Wolfgang Gentzsch from UberCloud covered HPC for the masses via cloud computing. Currently, more than 90 percent of engineers’ and scientists’ in-house HPC work is done on workstations and about 5 percent on servers; less than 1 percent uses HPC clouds. That leaves a ripe opportunity if challenges like security/privacy/trust, control of data (where and how your data is running), software licensing, and the transfer of heavy data sets can be resolved. There are some hefty benefits – pay per use, easily scaling resources up or down, low risk with a specific cloud provider – that may start to entice more users shortly. UberCloud has 19 providers and 50 products currently in its marketplace.

 

Wolfgang Gentzsch from UberCloud

 

The Large Hadron Collider is probably tops on my list of places to see before I die, so I was excited to see Niko Neufeld from LHCb CERN talk about their data acquisition/storage challenge. I know, yet another big data problem. But the LHC generates one petabyte of data EVERY DAY. Niko talked through how they’re able to use some sophisticated filtering (via ASICs and FPGAs) to get that down to storing 30PB a year, but that’s still an enormous challenge. The team at CERN is interested in looking at the Intel Omni-Path Architecture to help them move data faster, and at integrating Intel Xeon + FPGA parts alongside Intel Xeon and Intel Xeon Phi processors to trim the amount of data stored even further.

 

Niko Neufeld from LHCb CERN

 

And finally, the PUCC held matches 4 and 5 today, the last of the initial matches and the first of the playoffs. In the last regular match, Taiji took on the EXAMEN and, in a stunning last-second “make” run, the EXAMEN took it by a score of 4763 to 2900. In the afternoon match, the Brilliant Dummies took on the Gaussian Elimination Squad (defending champs). It was a hard-fought battle – for many of the questions, both teams had answered before the multiple-choice options were shown to the audience. In the end, the Brilliant Dummies were able to eliminate the defending champions by a score of 5082 to 2082. Congratulations to the Brilliant Dummies; we’ll see you in the final on Thursday.

 

We’ll see the Brilliant Dummies in the PUCC finals on Thursday

Read more >

The Final Day for the 2014 Parallel Universe Computing Challenge @ SC14

Thursday, November 20, 2014

Dateline:  New Orleans, LA, USA

 

This morning at 11:00AM (Central time, New Orleans, LA), the second semi-final match of the 2014 Parallel Universe Computing Challenge will take place at the Intel Parallel Universe Theater (Booth 1315) as the Coding Illini team from NCSA and UIUC faces off against the EXAMEN from Europe. The Coding Illini earned their spot in this semi-final match by beating the team from Latin America (SC3), and the EXAMEN earned their semi-final slot by beating team Taiji from China.

 

The winner of this morning’s semi-final match will go on to play the Brilliant Dummies from Korea in the final competition match this afternoon at 1:30PM, live on stage from Intel’s Parallel Universe Theater.

 

The teams are playing for the grand prize of $26,000 to be donated to a charitable organization of their choice.

 

Don’t miss the excitement:

  • Match #5 is scheduled at 11:00AM
  • The Final Match is scheduled at 1:30PM

 

Packed crowd watching the PUCC

Read more >

5 Questions for Dr. Sandhya Pruthi, Medical Director for Patient Experience, Breast Diagnostic Clinic, Mayo Clinic Rochester

Clinicians are on the front lines when it comes to using healthcare technology. To get a doctor’s perspective on health IT, we caught up with Dr. Sandhya Pruthi, medical director for patient experience, breast diagnostic clinic, at Mayo Clinic Rochester, for her thoughts on telemedicine and the work she has been undertaking with remote patients in Alaska.

 


 

Intel: How are you involved in virtual care?

 

Pruthi: I have a very personal interest in virtual care. I have been providing telemedicine care to women in Anchorage, Alaska, right here from my telemedicine clinic in Rochester, Minnesota. I have referrals from providers in Anchorage who ask me to meet their patients using virtual telemedicine. We call it our virtual breast clinic, and we’ve been offering the service twice a month for the past three years.

 

Intel: What services do you provide through telemedicine?

 

Pruthi: We know that in some remote parts of the country, it’s hard to get access to experts. What I’ve been able to provide remotely is medical counseling for women who are considered high risk for breast cancer. I remotely counsel them on breast cancer prevention and answer questions about genetic testing for breast cancer when there is a very strong family history. The beauty is that I get to see them and they get to see me, rather than just writing out a note to their provider and saying, “Here’s what I would recommend that the patient do.”

 

Intel: How have patients and providers in Alaska responded to telemedicine?

 

Pruthi: We did a survey and asked patients about their experience and whether they felt that they received the care they were expecting when they came to a virtual clinic. The result was 100 percent satisfaction by the patients. We also surveyed the providers and asked if their needs were met through the referral process. The results were that providers said they were very pleased and would recommend the service again to their patients.

 

Intel: Where would you like to see telemedicine go next?

 

Pruthi: The next level that I would love to see is the ability to go to the remote villages in the state of Alaska, where people have an even harder time coming to a medical center. I’d also like to be able to have a pre-visit with patients who may need to come in for treatment so we can better coordinate their care before they arrive.

 

Intel: When it comes to telemedicine, what keeps you up at night?

 

Pruthi: Thinking about how we can improve the patient experience. I really feel that for a patient who is dealing with an illness, the medical experience should wow them. It should be worthwhile to the patient and it should follow them on their entire journey—when they make their appointment, when they meet with their physician, when they have tests done in the lab, when they undergo procedures. Every step plays a role in how they feel when they go home. That’s what we call patient-centered care.

Read more >

MakeHers: Engaging Girls and Women in Technology through Making, Creating and Inventing

This post was written by Aysegul Ildeniz, Vice President of the New Devices Group and general manager of Strategy and Business Development at Intel Corporation. I have worked in the technology sector for more than a decade in senior leadership … Read more >

The post MakeHers: Engaging Girls and Women in Technology through Making, Creating and Inventing appeared first on CSR@Intel.

Read more >

SC14: Intel Previews Differentiated Storage Services in the Enterprise Edition for Lustre* Software.

Michael Mesnier, our guest blogger, is a Principal Engineer in Intel Labs. With the explosion of big data and cloud computing, the ability to store large amounts of data efficiently has become more important than ever.  This is especially challenging in the … Read more >

The post SC14: Intel Previews Differentiated Storage Services in the Enterprise Edition for Lustre* Software. appeared first on Intel Labs.

Read more >

On the Ground at SC14: Technical Sessions, Women in Science and Technology, and the Community Hub

Apparently there’s a whole world that exists beyond the SC14 showcase floor…the technical sessions. Intel staffers have been presenting papers (on Lattice Quantum Chromodynamics and Recycled Error Bits), participating in panels (HPC Productivity or Performance) and delivering workshops (covering OpenMP and OpenCL) over the past few days, with a plethora still to come.

 

To get a flavor for the sessions, I sat in on the ACM Gordon Bell finalist presentation: Petascale High Order Dynamic Rupture Earthquake Simulations on Heterogeneous Supercomputers. It’s one of five papers in the running for the Gordon Bell Prize and was presented at the conference by Michael Bader from TUM. The team included scientists from TUM, LMU Munich, the Leibniz Supercomputing Center, TACC, the National University of Defense Technology, and Intel. Their paper details the optimization of the seismic software SeisSol for Intel Xeon Phi coprocessor platforms, achieving impressive model complexity in simulating the propagation of seismic waves. The hope is that optimized software and supercomputing can be used to understand how earthquake waves travel, eventually anticipating real-world consequences to help communities adequately prepare for and minimize aftereffects. The Gordon Bell Prize will be announced on Thursday, so good luck to the team!

 

Michael Bader from TUM

 

From there I headed back to the Intel booth to see how the demos are helping to solve additional real-world problems. First up was the GEOS-5/University of Tennessee team, which deployed a workstation with two Intel Xeon E5 v3 processors and two Intel Xeon Phi coprocessors to run the VisIt application for visual compute analysis and rendering. GEOS-5 simulates climate variability on a wide range of time scales, from near-term to multi-century, helping scientists comprehend atmospheric transport patterns that affect climate change. It’s a real climate model (on a workstation!) that could be used to predict something like the spread and concentration of radiation around the world.

 

Predicting Climate Change with GEOS-5

 

Next up, the Ayasdi demo on precision medicine – a data analytics platform running on the Intel Xeon processor E5 v3 and a cluster with Intel True Scale Fabric that looks for similarities in data rather than relying on specific queries. The demo shows how the shape of data can be employed to find unknown insights in large and complex data sets, surfacing observations like “usually three hours after this type of surgery there is a fluctuation in vitals across patients.” The goal is to combine new mathematical approaches (topological data analysis, or TDA) with big data to identify biomarkers, drug targets, and potential adverse effects to support more successful patient treatment.
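
Ayasdi’s platform is built on topological data analysis, which is far beyond a blog snippet, but the shift from “query for what you expect” to “let similarity surface what you did not expect” can be illustrated with a much simpler nearest-neighbor sketch. This uses scikit-learn with hypothetical patient features; it is an illustration of the general idea, not Ayasdi’s method or API.

```python
# Toy illustration of similarity-driven exploration (not Ayasdi's TDA pipeline):
# instead of querying "patients with vital X > threshold", we ask which patient
# records look alike and then inspect what the resulting groups have in common.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors

# Hypothetical features: hours since surgery, heart rate, blood pressure, temperature
patients = np.array([
    [1.0, 78, 120, 36.9],
    [3.0, 95, 105, 37.8],
    [3.2, 97, 102, 37.9],
    [6.0, 80, 118, 37.0],
])

X = StandardScaler().fit_transform(patients)   # put features on a common scale
nn = NearestNeighbors(n_neighbors=2).fit(X)
distances, indices = nn.kneighbors(X)          # column 0 is each record itself

# Records whose nearest neighbor is unusually close form a group worth inspecting,
# e.g. the two patients around hour 3 with elevated vitals.
for i, (d, j) in enumerate(zip(distances[:, 1], indices[:, 1])):
    print(f"patient {i}: most similar to patient {j} (distance {d:.2f})")
```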

 

 

Ayasdi Precision Medicine Demo

 

Since I’m usually on a plane every couple of weeks, I was excited to talk to the Onera team about how they’re using the elsA simulation software to streamline aerospace engineering. The simulation capabilities of elsA enable reductions in ground-based and in-flight testing requirements. The Onera team optimized elsA to run in a highly scalable environment: a cluster based on Intel Xeon processors and Intel Xeon Phi coprocessors with Intel True Scale Fabric and SSDs, allowing for large-scale modeling with elsA.

 

Aerospace Design Demo from Onera

 

Up last, I headed over to the team at the Texas Advanced Computing Center to talk about their demo combining ray tracing (OSPRay) and computing power (Intel Xeon processor E5 v3) to run computational fluid dynamics simulations and assemble flow data from every pore in rock from Florida’s Biscayne Bay aquifer. Understanding how the aquifer transports water and contaminants is critical to providing safe water resources, but eventually the researchers hope to apply the same flow simulations to the human brain.

 

TACC Demo in Action

 

One of the areas in the Intel booth I’d yet to visit was the Community Hub, an area to socialize and collaborate on ideas that can help drive discoveries faster. Inside the Hub, Intel and various third parties are on hand to discuss technology directions, best-known methods, future use cases, and more across a wide variety of technologies and topics. The hope is that attendees will build or expand their network of peers engaged in similar optimization and algorithm development.

 

One of the community discussions with the highest interest on Tuesday was led by Debra Goldfarb, the Senior Director of Strategy and Pathfinding Technical Computing at Intel. The Hub was packed for a session on encouraging Women in Science and Technology – the stats are pretty dismal, and Intel is committed to changing that. The group brainstormed reasons for the gap and how we can begin to address it. A couple of resources for those interested in the topic: www.intel.com/girlsintech and www.womeninhpc.org.uk. Intel also participated in the “Women in HPC: Mentorship and Leadership” BOF and will take part in the “Women in HPC” panel on Friday.

 

 

Above and below: Women in Science and Technology Community Hub discussion led by Debra Goldfarb

 

 

 

 

Women in HPC BOF

 

Community Hub discussions coming up on Wednesday include Fortran & Vectorization, OpenMP, MKL, Data Intensive HPC, Life Sciences and HPC, and HPC and the Oil and Gas industry.

 

At the other end of the booth, the Intel Parallel Universe Theater was hopping all day. I checked out a presentation from Eldon Walker of the Lerner Research Institute at the Cleveland Clinic, who discussed their 1.2 petabyte mirrored storage system (data center and server room) and their 270 terabytes of Lustre storage, which enable DNA sequence analysis, finite element analysis, natural language processing, image processing and computational fluid dynamics. Dr. Eng Lim Goh from SGI presented the company’s energy-efficient supercomputers, innovative cooling systems, and SGI MineSet for machine learning. And Tim Cutts from the Wellcome Trust Sanger Institute made it through some audio and visual issues to present his topic on working with genomics and the Lustre file system, including how they solved a couple of tricky issues (a denial-of-service issue via samtools and performance problems with concurrent file access).

 

Eldon Walker, Lerner Research Institute

 

Dr. Eng Lim Goh, SGI

 

 

Tim Cutts, Wellcome Trust Sanger

 

And lastly, for those following along with the Intel Parallel Universe Computing Challenge: in match two, the Brilliant Dummies from Korea defeated the Linear Scalers from Argonne by a score of 5790 to 3588. And in match three, SC3 (Latin America) fell to the Coding Illini (NCSA and UIUC) by a score of 2359 to 5359, which means both the Brilliant Dummies and the Coding Illini move on in the Challenge. Matches 4 and 5 will be up on Wednesday. See you in booth 1315!

 

Read more >

The World is Your Office: A Study in Telecommuting

Image source: bestreviews.com


If you look down at your workspace right now and analyze the way it has changed in the past few decades, you’ll likely be amazed by the contrast. Technology has given us the capacity to eliminate waste and optimize our workplaces for productivity, but it has also fundamentally changed the way we work. Fewer ties to a physical desk in a physical workspace have led to an upswing in the mobile workforce. According to “The State of Telework in the U.S.” — which is based on U.S. Census Bureau statistics — businesses saw a 61% increase in telecommuters between 2005 and 2009.

 

IT decision makers have witnessed this growth from the trenches, where they enable the business to grow through technological advancements.  But there are several key questions IT leaders will face in the coming waves of virtualization…

 

  • What type of work model should be used to manage knowledge workers?
  • When workers are increasingly distributed globally at multiple physical locations, how do effective interpersonal relationships form and grow?
  • How will technology and people considerations impact the locations where people come together?
  • How can the office environment be configured to invoke optimum worker productivity?
  • How will organizations source the best workers and cope with differing attitudes across a five-generation workforce?

 

Telecommuters Today

 

Though there are a significant number of mobile workers today, the number is still small compared to what it will be one day. According to “The State of Telework in the U.S.,” 50 million U.S. employees hold jobs that are telework-compatible, but only 2.9 million consider home their primary place of work. That represents about 2.3 percent of the total workforce, meaning the full impact of virtualization has yet to be realized.

 

Some are dubious as to whether the workplace will continue to move in a virtualized direction. Rawn Shah, director and social business architect at Rising Edge, recently wrote on Forbes, “We are only starting to understand what the future of work looks like. In my view, the imagined idea of entirely virtual organizations is similar to how we used to think of the future as full of flying cars and colonies in space. Reality is much more invested in hybrid in-office plus remote scenarios. Physical space is still a strong element of work that we need to keep track of, and understand better to learn how we truly collaborate.”

 

Telecommuters Tomorrow

 

According to Tim Hansen in his white paper “The Future of Knowledge Work,” there are already several trends influencing the current workplace that will directly impact virtualization of the enterprise in the future:

 

  • Defining employees on the cusp of transformation
  • Dynamic, agile team structures will become the norm
  • The location of work will vary widely
  • Smart systems will emerge and collaborate with humans
  • A second wave of consumerization is coming via services

 

The questions IT leaders are asking now can be answered by isolating these already-present factors driving virtualization.

 

Our offices are changing rapidly — don’t let your employees suffer through legacy work models. Recognizing the change swirling around you will help you strategize for the coming changes on the horizon.

 

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

Read more >

Chief Human Resources Officers will be the Next Security Champion in the C-suite

HR and security? Don’t be surprised. Although a latecomer to the security party, HR organizations can play an important role in protecting assets and influencing good security behaviors. They are an influential force when managing the risks of internal threats, and they excel at the human aspects that are generally snubbed in the technology-heavy world of cybersecurity. At a recent presentation given to the CHRO community, I discussed several overlapping areas of responsibility that highlight the growing influence HR can have on improving an organization’s security posture.

 

The audience was lively and passionate in their desire to become more involved and apply their unique expertise to the common goal. The biggest questions revolved around how best they could contribute to security. Six areas were discussed: HR leadership can strengthen hiring practices, tighten responses to disgruntled employees, spearhead effective employee security education, advocate regulatory compliance and exemplify good privacy practices, be a good custodian of HR data, and rise to the challenge of hiring good cybersecurity professionals. Wake up, security folks: the HR team might just be your next best partner and a welcome advocate in the evolving world of cybersecurity.

 



 

 

Presentation available via SlideShare.net: http://www.slideshare.net/MatthewRosenquist/pivotal-role-of-hr-in-cybersecurity-cho-event-nov-2014

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

My Blog: Information Security Strategy

Read more >

SC14: Understanding Gene Expression through Machine Learning

This guest blog is by Sanchit Misra, Research Scientist, Intel Labs, Parallel Computing Lab, who will be presenting a paper by Intel and Georgia Tech this week at SC14.

 

Did you know that the process of winemaking relies on yeast optimizing itself for survival? When we put yeast in a sugar solution, it turns on genes that produce the enzymes that convert sugar molecules to alcohol. The yeast cell makes a living from this process (by gaining energy to multiply) and humans get wine.

 

This process of turning on a gene is called expression. The genes that an organism can express are all encoded in its DNA. In multi-cellular organisms like humans, the DNA of each cell is the same, but cells in different parts of the body express different genes to perform the corresponding functions. A gene also interacts with several other genes during the execution of a biological process. These interactions, modeled mathematically using “gene networks,” are not only essential to developing a holistic understanding of an organism’s biological processes, but also invaluable in formulating hypotheses to further the understanding of numerous interesting biological pathways, thus playing a fundamental role in accelerating the pace and diminishing the costs of new biological discoveries. This is the subject of a paper presented at SC14 by Intel Labs and Georgia Tech.

 

Owing to the importance of the problem, numerous mathematical modeling techniques have been developed to learn the structure of gene networks. There appears, not surprisingly, to be a correlation between the quality of learned gene networks and the computational burden imposed by the underlying mathematical models. A gene network based on Bayesian networks is of very high quality but requires a lot of computation to construct. To understand Bayesian networks, consider the following example.

 

A patient visits a doctor for diagnosis with symptoms A, B and C. The doctor says that there is a high probability that the patient is suffering from ailments X or Y and recommends further tests to zero in on one of them. What the doctor does is an example of probabilistic inference, in which the probability that a variable has a certain value is estimated based on the values of other related variables. Inference based on Bayes’ theorem is called Bayesian inference. The relationships between variables can be stored in the form of a Bayesian network. Bayesian networks are used in a wide range of fields including science, engineering, philosophy, medicine, law, and finance. In the case of gene networks, the variables are genes, and the corresponding Bayesian network models, for each gene, which other genes are related to it and what the probability of the gene’s expression is given the expression values of the related genes.
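
As a toy version of the doctor's reasoning, Bayes' theorem gives the posterior probability of an ailment X after observing symptom A: P(X|A) = P(A|X) P(X) / P(A). The numbers below are invented purely for illustration.

```python
# Toy Bayesian inference: probability of ailment X given symptom A.
# All probabilities are invented for illustration only.
p_x = 0.01              # prior: 1% of patients have ailment X
p_a_given_x = 0.9       # symptom A appears in 90% of patients with X
p_a_given_not_x = 0.05  # symptom A appears in 5% of patients without X

# Total probability of observing symptom A
p_a = p_a_given_x * p_x + p_a_given_not_x * (1 - p_x)

# Bayes' theorem: posterior belief in X after seeing A
p_x_given_a = p_a_given_x * p_x / p_a
print(f"P(X | symptom A) = {p_x_given_a:.3f}")  # ~0.154
```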

 

Through a collaboration between Intel Labs’ Parallel Computing Lab and researchers at Georgia Tech and IIT Bombay, we now have the first ever genome-scale approach for construction of gene networks using Bayesian network structure learning. We have demonstrated this capability by constructing the whole-genome network of the plant Arabidopsis thaliana from over 168.5 million gene expression values by computing a mathematical function 7.3 trillion times with different inputs. For this, we collected a total of 11,760 Arabidopsis gene expression datasets (from NASC, AtGenExpress and GEO public repositories). A problem of this scale would have consumed about six months using the state-of-the-art solution. We can now solve the same problem in less than 3 minutes!

 

To achieve this, we not only scaled the problem to a much bigger machine – 1.5 million cores of the Tianhe-2 supercomputer with 28 PFLOP/s of peak performance – but also applied algorithm-level innovations, including avoiding redundant computation, a novel parallel work decomposition technique, and dynamic task distribution. We also made implementation optimizations to extract maximum performance from the underlying machine.
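
The MPI implementation that ran on Tianhe-2 is far more sophisticated, but the core idea of dynamic task distribution, where idle workers pull the next chunk of work from a shared pool rather than receiving a fixed share up front, can be sketched in a few lines of generic Python. This is an illustration of the scheduling pattern only, not the authors' actual code.

```python
# Generic illustration of dynamic task distribution (not the Tianhe-2/MPI code).
# A shared queue hands out variable-cost "score this candidate parent set" tasks,
# so fast workers naturally pick up more work than slow ones.
import multiprocessing as mp

def score_task(task):
    gene, candidate_parents = task
    # Placeholder for the expensive scoring function evaluated trillions of times.
    return gene, sum(candidate_parents) % 97

def worker(task_queue, result_queue):
    while True:
        task = task_queue.get()
        if task is None:          # sentinel: no more work
            break
        result_queue.put(score_task(task))

if __name__ == "__main__":
    tasks = [(g, list(range(g % 10))) for g in range(1000)]
    task_q, result_q = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=worker, args=(task_q, result_q)) for _ in range(4)]
    for w in workers:
        w.start()
    for t in tasks:
        task_q.put(t)
    for _ in workers:
        task_q.put(None)          # one sentinel per worker
    results = [result_q.get() for _ in tasks]
    for w in workers:
        w.join()
    print(len(results), "tasks scored")
```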

 

(Top) Root Development subnetwork; (Bottom) Cold Stress subnetwork

 

Using our software, we generated gene regulatory networks for several datasets – subsets of the Arabidopsis dataset – and validated them against known information from the TAIR (The Arabidopsis Information Resource) database. As a demonstration of the validity, and of how genome-scale networks can be used to aid biological research, we conducted the following experiment. We picked the genes that are known to be involved in root development and cold stress and randomly selected a subset of those genes (red nodes in the figures above). We took the whole-genome network generated by our software for Arabidopsis and extracted subnetworks that contain our randomly picked subset of genes and all the other genes that are connected to them. The extracted subnetworks contain a rich presence of other genes known to be in the respective pathways (green nodes) and in closely associated pathways (blue nodes), serving as a validation test. The nodes shown in yellow are genes with no known function; their presence in the root development subnetwork indicates they might function in the same pathway. The biologists at Georgia Tech are performing experiments to see if the genes corresponding to yellow nodes are indeed involved in root development. Similar experiments are being conducted for several other biological processes.
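
The subnetwork extraction described above, keeping a set of seed genes plus every gene directly connected to at least one of them, is a simple graph neighborhood query. Here is a minimal sketch with networkx; the gene IDs and edges are hypothetical stand-ins for the learned whole-genome network.

```python
# Minimal sketch of extracting a seed-gene subnetwork from a whole-genome network.
# Gene names and edges are hypothetical; the real network comes from the learned
# Bayesian structure over the full Arabidopsis gene set.
import networkx as nx

whole_genome = nx.Graph()
whole_genome.add_edges_from([
    ("AT1G01010", "AT2G33860"),
    ("AT2G33860", "AT3G54340"),
    ("AT1G01010", "AT5G60690"),
    ("AT4G37650", "AT5G60690"),
])

seed_genes = {"AT1G01010", "AT4G37650"}   # e.g. known root-development genes

# Keep the seeds plus every gene directly connected to at least one seed.
neighborhood = set(seed_genes)
for gene in seed_genes:
    neighborhood.update(whole_genome.neighbors(gene))

subnetwork = whole_genome.subgraph(neighborhood)
print(sorted(subnetwork.nodes()))
print(sorted(subnetwork.edges()))
```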

 

Arabidopsis is a model plant for which NSF launched a 10-year initiative in 2000 to find the functions of all of its genes, yet the functions of 40 percent of its genes are still not known. This method can help accelerate the discovery of the functions of the remaining genes. Moreover, it can easily be scaled to other species, including human beings. Understanding how genes function and interact with each other across a broad variety of organisms can pave the way for new medicines and treatments. We can also compare gene networks across organisms to enhance our understanding of the similarities and differences between them, ultimately aiding in a deeper understanding of evolution.

 

What questions do you have?

Read more >

On the Ground at SC14: Opening Plenary Session and Exhibition Opening Gala

I felt a little like the lady from the old Mervyn’s commercials chanting “OPEN, OPEN, OPEN” today while waiting for the Exhibition Gala at SC14. The exhibitors’ showcase is one of the most exciting aspects for Intel – we have a pretty large presence on the floor so we can fully engage and collaborate with the HPC community. But before we delve too deep into the booth activities, I want to step back and talk a little about the opening plenary session from SGI.

 

Dr. Eng Lim Goh, senior vice president and CTO at SGI, took the stage to talk about the most fundamental of topics: Why HPC Matters. While most of the world thinks of supercomputing as the geekiest of technology (my bus driver asked if I worked on the healthcare.gov site or did some hacking), we as an industry know that much of what is possible today in the world is enabled by HPC in industries as diverse as financial services, advanced/personalized medicine, and manufacturing.

 

Dr. Goh broke his presentation into a few parts: basic needs, reducing hardships, commerce, entertainment and profound questions. He then ran through about 25 projects utilizing supercomputing, everything from sequencing and analyzing the wheat genome (7x the size of the human genome!) to checking postage accuracy for the USPS (half a billion pieces of mail sorted every day) to designing/modeling a new swimsuit for Speedo (the one that shattered all those world records at the Beijing Olympics). Dr. Goh was joined on stage by Dr. Piyush Mehrotra, from NASA’s Advanced Supercomputing Division, who was there to discuss some of the groundbreaking research that NASA has done in climate modeling and the search for exoplanets (about 4,000 possible planets found so far by the Kepler Mission).

 

Increasing wheat yield by analyzing the genome

 

Earthquake simulations can help give advanced warning

 

The session closed with a call to the industry to make a difference and to remember that it’s great to wow a small group of people to secure funding for supercomputing, but it is also important to, in the simplest terms, “delight the many” when describing why HPC matters.

 

So why does HPC matter in the oil and gas industry? After Dr. Goh’s presentation, I finally headed into the showcase and to the Intel booth to talk to the folks from DownUnder GeoSolutions. The key to success in the oil and gas industry is minimizing exploration costs while maximizing oil recovery. DownUnder GeoSolutions has invested in modernizing its software, optimizing it to run heterogeneously on Intel Xeon processors and Intel Xeon Phi coprocessors. As a result, its applications are helping process larger models and explore more options in less time. DUG is the marquee demo this year in the Intel booth, showing their software, DUG Insight, running on the full Intel technical computing portfolio, including workstations, Intel Xeon and Xeon Phi processors, Lustre, Intel Solid State Drives and Intel True Scale Fabric.

 

 

Above and below: DownUnder GeoSolutions demo

 

 

Of course, checking out the DUG demo isn’t the only activity in the Intel booth. There were also a couple of great kick-off theater talks: Jack Dongarra discussed the MAGMA project, which aims to develop a dense linear algebra library and improve performance on coprocessors, and Pierre Lagier from Fujitsu presented The 4 Dimensions of HPC Computing, a use case for running the elsA CFD software package on Intel Xeon Phi coprocessors and the performance gains they were able to see with some tuning and optimization.

 

Jack Dongarra on the MAGMA project

 

Pierre Lagier on elsA CFD

 

And speaking of optimization, the big draw of the night in the Intel booth was the opening round of the Parallel Universe Computing Challenge, which saw defending champs the Gaussian Elimination Squad from Germany taking on the Invincible Buckeyes from Ohio. After a round of 15 HPC trivia questions (the faster a team answers, the more points it earns), GES was in the lead. During the coding challenge, each team had 10 minutes to take a piece of code from Intel’s James Reinders and speed it up on the Xeon host, the Xeon Phi coprocessor, or both, with 40 Xeon threads and 244 Xeon Phi threads available on a dual-socket machine. With a monster speedup of 243.008x on Xeon Phi (James admitted he’d only gotten to 189x), the Gaussian Elimination Squad took home the victory by a final score of 5903 to 3510. A well-played match by both teams!

 

Crowd watching the PUCC

 

L to R: Gaussian Elimination Squad, James Reinders and Mike Bernhardt

 

The PUCC continues on Tuesday, along with the Community Hub discussions, theater talks, fellow traveler tours and technical sessions. Stop by the booth (1315) and tell us why you think HPC matters!

 

 

Read more >