
SC14 Podcast: How HPC Impacts Alzheimer’s Research

 

With SC14 kicking off today, it’s timely to look at how high performance computing (HPC) is advancing today’s life sciences research. In this podcast, Dr. Rudy Tanzi, the Joseph P. and Rose F. Kennedy Professor of Neurology at Harvard Medical School and Director of the Genetics and Aging Research Unit at the MassGeneral Institute for Neurodegenerative Disease, talks about his pioneering research in Alzheimer’s disease and how HPC is critical to the path forward.

 

Listen to the conversation and hear how Dr. Tanzi says HPC still has a ways to go to provide the compute power that life sciences researchers need. What do you think?

 

What questions about HPC do you have? 

 

If you’re at SC14, remember to come by the Intel booth (#1315) for life sciences presentations in the Intel Community Hub and Intel Theater. See the schedules here.


SGI has built a revolutionary system for NVMe storage scaling at SC14!

Intel launched its Intel® Solid-State Drive Data Center Family for PCIe, based on the NVMe specification, in June 2014. But even with the amazing possibilities these products open up, we still want more. Why? Isn’t half a million IOPS and three gigabytes per second out of a single card enough for supercomputing workloads? Not always, and not for every application. Here are a couple of reasons why we need more performance, and how that’s possible. We really want to scale both performance and density.

 

Consistent performance is the answer. Intel SSDs help deliver consistent performance across different workloads, including mixed ones, which are the worst-case scenario for a drive. That applies across the Data Center product range, whether SATA or PCIe. Performance scaling of SATA SSDs is limited by HBA or RAID controller performance, SAS topology, and the related interface latency. You can scale nearly linearly within a limited range until that threshold is reached; after that, you gain nothing but increased access latency for the RAID configuration. A single Intel PCIe SSD (our P3700) can outperform at least six SATA SSDs (S3700) across a range of 4K random workloads while maintaining lower latency than a single SATA SSD. (See the chart below.)

[Chart: 4K random workload performance and latency, one Intel SSD DC P3700 vs. multiple SATA S3700 SSDs]

1 Source: Intel. Measurements made on a Hanlan Creek (Intel S5520HC) system with two Intel® Xeon® X5560 @ 2.93GHz and 12GB of memory per CPU running the RHEL 6.4 O/S; Intel S3700 SATA Gen3 SSDs connected to an LSI* HBA 9211; NVMe* SSD under development; data collected with the FIO* tool.
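To put the single-card-versus-array comparison in rough numbers, here is a small back-of-the-envelope sketch. The per-drive IOPS figures and the HBA ceiling are approximate, datasheet-style assumptions used only for illustration; they are not the measured values behind the chart above.

```python
# Rough back-of-the-envelope comparison of 4K random-read throughput.
# Per-drive figures and the HBA ceiling below are assumptions, not
# measurements from the chart above.

S3700_4K_RR_IOPS = 75_000      # single SATA S3700, 4K random read (approx.)
P3700_4K_RR_IOPS = 460_000     # single PCIe/NVMe P3700, 4K random read (approx.)

def sata_array_iops(n_drives, per_drive=S3700_4K_RR_IOPS, hba_ceiling=400_000):
    """Aggregate IOPS for n SATA drives behind one HBA/RAID controller.

    Scaling is linear only until the controller ceiling is hit; beyond
    that, adding drives mostly adds latency."""
    return min(n_drives * per_drive, hba_ceiling)

for n in (1, 2, 4, 6, 8):
    print(f"{n} x S3700 behind HBA: ~{sata_array_iops(n):,} IOPS")
print(f"1 x P3700 (NVMe):        ~{P3700_4K_RR_IOPS:,} IOPS")
```

The point is simply that an array of SATA drives runs into the controller long before it runs out of drives, while a single NVMe device talks to the CPU directly.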

 

But how does performance scale with multiple drives in one system? Given the latency reduction from the move to the PCIe interface, the NVMe protocol, and the high QoS of the P3x00 product line, it’s hard to predict how far we can take this.

 

Obviously, we have a limited number of PCIe lanes per CPU, which depends on the CPU generation and architecture as well as the system, thermal, and power design. Each P3700 SSD takes PCIe Gen3 x4. To evaluate the scaling of NVMe SSDs we would like to avoid PCIe switches and multiplexers. How about a big multi-socket scale-up system based on 32 Xeon E7 CPUs as a test platform? That looks very promising for investigating NVMe scaling.
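As a rough illustration of that lane budget, the sketch below counts how many x4 devices fit on a platform without switches. The per-socket lane count and the lanes reserved for other devices are assumptions, not figures from the SGI system.

```python
# Simple PCIe lane budget: how many x4 NVMe SSDs fit without switches?
# Lane counts here are illustrative assumptions; check your platform docs.

LANES_PER_SSD = 4          # each P3700 uses PCIe Gen3 x4

def max_ssds(sockets, lanes_per_socket=40, lanes_reserved=8):
    """SSDs that fit on the lanes left after reserving some for NICs, etc."""
    usable = sockets * (lanes_per_socket - lanes_reserved)
    return usable // LANES_PER_SSD

for sockets in (2, 4, 32):
    print(f"{sockets}-socket system: up to {max_ssds(sockets)} x4 NVMe SSDs "
          f"without PCIe switches (assumed lane budget)")
```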

 

SGI has presented an interesting all-flash concept at SC14. It is a 32-socket system with 64 Intel® Solid-State Drive DC P3700 800GB SSDs, running a single SLES 11 SP3 Linux OS.

http://www.intel.com/content/www/us/en/solid-state-drives/intel-ssd-dc-family-for-pcie.html

 

That offers a great opportunity to see how performance scales inside this massive single-image system. It turns out to be a true record: 30M IOPS on a 4K random read workload! Let’s have a look at the scaling progression:

[Chart: IOPS and GB/s vs. number of SSDs, 4K random read (blue) and 128K random read (red)]

The data above was measured at SGI Labs on a concept platform based on 32 Intel Xeon E7 CPUs.

 

This chart plots IOPS and GB/s against the number of SSDs for the 4K random read workload (blue line) and the 128K random read workload (red line), from the testing done at SGI’s labs. Each add-in-card PCIe SSD works independently, acting as its own controller. We are not adding any software RAID overhead; we are only interested in raw device performance. The dotted lines represent a linear approximation, while the solid lines connect the experimental data points.
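A quick sanity check of the headline number: dividing the 30M IOPS aggregate by the 64 drives gives the effective per-drive throughput, which can be compared against a nominal single-drive figure (the ~460K IOPS value below is an assumed spec, not a measurement from this test).

```python
# Sanity check of the scaling result: 30M 4K random-read IOPS across 64 SSDs.
aggregate_iops = 30_000_000        # headline result from the SGI test
drives = 64
single_drive_iops = 460_000        # assumed nominal 4K RR figure for one P3700

per_drive = aggregate_iops / drives
print(f"Effective per-drive throughput: {per_drive:,.0f} IOPS")
print(f"Relative to one drive in isolation: {per_drive / single_drive_iops:.0%}")
```

Landing at roughly the single-drive figure per device is what near-linear scaling looks like.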

 

Hard to believe? Come to SGI’s booth 915 and talk to them about it.


Better Patient Care Starts With Better Technology

Home healthcare practitioners need efficient, reliable access to patient information no matter where they go, so they need hardware solutions that meet their unique needs. Accessing critical patient information, managing patient files, multitasking seamlessly, and locating a patient’s residence are daily tasks for mobile healthcare professionals. Mobile practitioners don’t have access to the same resources they would have in a hospital, so the tools they use are that much more critical to getting their work done. Fortunately, advances in mobile computing have created opportunities to bridge that gap.

 

An Evolved Tablet for Healthcare Providers


As tablets have evolved, they’ve become viable replacements for clunky laptops. Innovation in the mobile device industry has transformed these devices from media consumption platforms and calendar assistants into robust workhorses that run full-fledged operating systems. However, when it comes to meeting the needs of home healthcare providers, not all tablets are created equal.

                    

A recent Prowess Consulting comparison looked at two popular devices with regard to tasks commonly performed by home healthcare workers. The study compared an Apple® iPad Air™ and a Microsoft® Surface™ Pro 3 to determine which device offers a better experience for home healthcare providers and, ultimately, their patients.

 

Multitasking, Done Right

 

One of the biggest advantages of the Surface™ Pro 3 is its ability to let users multitask. For example, a healthcare worker can simultaneously load and display test results, charts, and prescription history via the device’s split-screen capabilities. A user trying to perform the same tasks on the iPad would run into the device’s limitations; there are no split-screen multitasking options on the iPad Air™.

 

The Surface™ Pro 3’s powerful multitasking, combined with its ability to run Microsoft Office natively, lets home healthcare providers spend more time on patient care and less on administrative tasks. Better user experience, workflow efficiency, file access speed, and split-screen multitasking all point to the Microsoft® Surface™ Pro 3 as the better platform for home healthcare providers.

 

For a full rundown of the Surface™ Pro 3’s benefits to home healthcare workers, click here.


Talk Innovation and Pathfinding with Intel at SC14

Karl Solchenbach is the Director of IPAG Europe, including Intel’s Exascale Labs in EMEA

 

This year the annual HPC and supercomputing conference and exhibition will be back in New Orleans, returning to the city after a four-year absence. From Nov. 16-21, SC14 will host more than 10,000 participants, who come to exchange the newest results in HPC research, meet their worldwide peers, and learn about new and innovative HPC products. For HPC vendors, SC14 is the biggest forum for presenting new HPC hardware, software, and innovations, and a unique opportunity to meet their global customers.

 

As usual, Intel will have a large booth with many activities highlighting the pace of discovery and innovation: compelling end-user demos showcasing topics like climate and environmental modeling and airflow simulation, informal Collaboration Hub discussions with Intel and industry experts, and short theater presentations on a variety of topics surrounding code modernization. A schedule of Intel activities can be found here.

 

Intel‘s Innovation, Architecture and Pathfinding Group (IPAG), led by Curt Aubley (VP and CTO of the Data Center Group and GM of IPAG), will have a strong presence at SC14. This group is looking into the future of HPC and exascale computing, with a focus on low-power processors and interconnects, innovative software concepts, and various technology projects with the U.S. government and in Europe. Come and meet IPAG engineers and architects to discuss recent developments:

  • IPAG will be holding a BOF session on PGAS APIs. While the PGAS model has been around for some time, adoption of PGAS by HPC developers remains light. Advances in PGAS APIs promise to significantly increase PGAS use while avoiding the effort and risk involved in adopting a new language. This BOF (Wed 5:30-7:00pm) gives a concise update on progress on PGAS communication APIs and presents recent experiences in porting applications to these interfaces.
  • One of IPAG’s collaborations in Europe, with CERN, concerns “Data Intensive HPC,” which is relevant in scenarios like CERN’s Large Hadron Collider (LHC) or the Square Kilometre Array (SKA). Niko Neufeld from CERN will present details at the Intel theater (Wed at 5:45). In addition, we will host a “Community Hub” discussion at the Intel booth (Wed 10 a.m.-12 p.m.) with Happy Sithole, one of the thought leaders of the SKA project. These are informal discussions, meant to generate interest and an exchange of ideas.
  • Another example of IPAG’s engagements in Europe is the Dynamical Exascale Entry Platform (DEEP) project, funded by the EU 7th Framework Programme (www.deep-project.eu). The goal is to develop a novel, exascale-enabling supercomputing platform. At SC14, DEEP will present its results at the joint booth of the European exascale projects (booth 1039). Also at booth 1039, project EXA2CT (EXascale Algorithms and Advanced Computational Techniques) will give a status update on modular open source proto-applications, with Intel IPAG as a key partner.
  • Shekhar Borkar (Intel Fellow and Director of Extreme-scale Technologies) will sit on the Future of Memory Technology for Exascale and Beyond II panel on Wednesday at 3:30 in room 383-84-85. The panel will discuss how memory technology needs to evolve to keep pace with compute technology in the coming exascale era.
  • The IPAG team is also participating in a BOF session on Thursday at 12:15 in room 294 on a future runtime standard for HPC exascale (ExaFLOPS) machines. This is the Open Community Runtime (OCR) work being developed as a new industry standard, supported by the US Dept. of Energy.

 

Stop by booth 1315 to engage with the IPAG (and the larger Intel HPC team) on any of these topics. We hope to see you in New Orleans!


Supercomputing is rising, and host bus adapters (HBAs) will start fading

The world of storage is tiered, and it will become more distinctly tiered in the years ahead as the ability to manage hot data moves onto the PCIe bus and away from the sub-optimal SAS and SATA buses designed for traditional spinning-platter storage. Using the right bus for your hot tier is very important. Until 2014, most implementations of SSD storage have been on SAS and SATA buses, which are not designed for fast non-volatile memory (NVM). What’s been needed is more standardization around the processor’s own host bus, PCIe. A generational shift towards PCIe from Intel is now underway.



Intel is evolving the world of PCIe and its extensibility across the necessary layers, so that PCIe can truly become a more appropriate storage bus for going wide with more devices, blending network, storage, co-processor, and graphics devices all on this host bus. The classic need to adapt storage from SAS or SATA back up to PCIe and the processor will slowly fade as server generations evolve in the years ahead. We’ll see Intel-backed standards, platform additions, and PCIe bridges and switches, which will start the unstoppable evolution of putting storage closer to the processor with much less latency.

 

Tomorrow’s supercomputing and parallel computing can only be made a reality with denser, more efficient compute power. NVM storage will play its part by sitting on the CPU’s host bus. This storage will be more power efficient and more parallel. The future has already started with Intel SSDs for PCIe. Come check us out at the Intel booth (#1315) at SC14 and talk to Andrey Kudryavtsev, John Ryan, or me. We’ll be there live to show you demos and samples and to explain the system shifts and the benefits of the new storage protocol standard, NVMe.


SC14: The Analysis Challenge of the $1,000 Genome

As SC14 approaches, we have invited industry experts to share their views on high performance computing and life sciences. Below is a guest post from Mikael Flensborg, Director of Global Partner Relations at CLC bio, a QIAGEN company. During SC14, Mikael will be sharing his thoughts on genomic and cancer research in the Intel booth (#1315). He is scheduled in the Intel Community Hub on Tuesday, Nov. 18, at 3 p.m. and Wednesday, Nov. 19, at 3 p.m., plus in the Intel Theater on Tuesday at 2:30 p.m.

 

Eight months have now passed since Illumina announced the long-expected arrival of the $1,000 genome with the launch of the HiSeq X Ten sequencing instrument, which also marks a new era in high-throughput sequencing focused on a new wave of population-level genomic studies.

 

In order to keep the cost down to the “magic” $1,000 level, a full HiSeq X Ten installation is required to plow through a vast 18,000 full human genomes per year, which means completing a full run every 32 minutes. With such a high volume in focus, the next very important question arrives:

 

What does it take to keep up with such a high throughput on the data analysis side?

 

According to Illumina’s “HiSeq X Ten Lab Setup and Site Prep Guide (15050093 E)”, the data analysis requirement is a compute cluster with 134 compute nodes (each with 16 CPU cores @ 2.0 GHz, 128 GB of memory, and 6 x 1 terabyte (TB) hard drives), based on an analysis pipeline consisting of the tools BWA+GATK.

 

At QIAGEN Bioinformatics we decided to take on the challenge of benchmarking this, based on a workflow of tools (Trim, QC for Sequencing Reads, Read Mapping to Reference, Indels and Structural Variants, Local Re-alignment, Low Frequency Variant Detection, QC for Read Mapping) on CLC Genomics Server (http://www.clcbio.com/products/clc-genomics-server/) running on a compute cluster with the Intel® Enterprise Edition for Lustre* filesystem, InfiniBand, Intel® Xeon® Processor E5-2697 v3 @ 2.60GHz with 14 CPU cores, 64GB of memory, and Intel® SSD DC S3500 Series 800GB drives.

 

We based our tests on a publicly available HiSeq X Ten dataset, and we reached the conclusion that, with these specifications, we can follow the pace of the instrument with a compute cluster of only 61 compute nodes.
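For readers who like the arithmetic spelled out, the sketch below derives the pace implied by 18,000 genomes per year and the per-node load for the two cluster sizes discussed. (The simple division gives roughly one genome every 29 minutes, in the same ballpark as the 32-minute figure quoted above.)

```python
# Pace implied by the HiSeq X Ten throughput quoted above, and the per-node
# load for the two cluster sizes discussed (134 nodes per Illumina's guide,
# 61 nodes in our benchmark). Simple arithmetic only.
genomes_per_year = 18_000
minutes_per_year = 365 * 24 * 60

print(f"One genome roughly every {minutes_per_year / genomes_per_year:.0f} minutes")

genomes_per_day = genomes_per_year / 365
for nodes in (134, 61):
    print(f"{nodes} nodes: ~{genomes_per_day / nodes:.2f} genomes per node per day")
print(f"Node-count reduction: {1 - 61 / 134:.0%}")
```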

 

Given our much lower compute node needs, these results can have a significant positive impact on the total cost of ownership of the compute infrastructure for a HiSeq X Ten customer, which includes hardware, cooling, space, power, and systems maintenance to name a few variable costs.

 

What questions do you have?


Capital Summit – Bringing Together the Forces of Innovation

I just spent the past week at the Intel Capital Global Summit. It was an excellent event where companies interested in innovation, venture capital firms, and startups met to network and discuss new trends. Overall, this experience served as proof that innovation is still alive and well around the world.

 

If you have seen any of my past blogs on the topic of innovation, you will know that I believe there are three pillars necessary for innovation:

 

  1. Commitment: It is important that innovation is championed through executive support and ultimately with an investment of funding and resources.
  2. Clarity: An understanding of which specific problems need to be solved and how to fail fast to eventually get to the solution is vital for innovation.
  3. Culture: The organization needs to support failure. It is trial and error, along with the lessons eventually derived from failure, that encourages innovation.

 

It was exciting to see all three demonstrated very clearly at the Intel summit.


Innovation Starts with Executive Understanding…

 

Through a series of organized meet-and-greet sessions, I had the opportunity to talk with many companies at the event. It was incredible to see the level of clarity demonstrated by the CEOs and executives of some of these companies. Development plans and go-to-market strategies were well defined and clear. Additionally, these company leaders displayed an exceptional understanding of what problems they’re working on and the details of how they’re solving them.

 

But in every one of these cases, there was a common belief that the real innovation begins once the customer gets hold of new technology. This is the point at which true understanding and the collision of ideas can occur. The specific problems are discovered as customers bring additional information to the discussion that can help companies home in on legitimately scalable solutions.

 

…And a Company Culture That Embraces Strategic Change

 

Throughout the event, companies also met with each other to discuss how technology can be used to enhance solutions and better address some of the real problems faced by customers. It was apparent from the discussions that all of the CEOs were passionate about solving customer problems with the technologies that they are using.

 

This concept of ideas coming together to enhance and evolve a solution is very well outlined in Steven Johnson’s video on the “slow hunch.” Rare is the occasion when someone conceives a brilliant idea in the shower (think Doc Brown in “Back to the Future”). More common is the process of a great idea starting from a seed, growing through a wide range of interactions, and eventually developing into something that is key to individual or company success.

 

Interested in innovation and the world of venture capital? Consider the Intel Capital Global Summit for next year. It can prove to be a significant gateway to network with these innovative companies. See how they can help you and how you can help them.

 

See you there,


Ed

 

Follow me on Twitter at @EdLGoldman and use #ITCenter to continue the conversation.


SC14: When to Launch an Internal HPC Cluster

As SC14 approaches, we have invited industry experts to share their views on high performance computing and life sciences. Below is a guest post from Eldon M. Walker, Ph.D., Director, Research Computing at Cleveland Clinic’s Lerner Research Institute. During SC14, Eldon will be sharing his thoughts on implementing a high performance computing cluster at the Intel booth (#1315) on Tuesday, Nov. 18, at 10:15 a.m. in the Intel Theater.


When data analyses grind to a halt due to insufficient processing capacity, scientists cannot be competitive. When we hit that wall at the Cleveland Clinic Lerner Research Institute, my team began considering the components of a solution, the cornerstone of which was a high performance computing (HPC) deployment.

 

In the past 20 years, the Cleveland Clinic Lerner Research Institute has progressed from a model of wet lab biomedical research that produced modest amounts of data to a scientific data acquisition and analysis environment that puts profound demands on information technology resources. This manifests as the need for the availability of two infrastructure components designed specifically to serve biomedical researchers operating on large amounts of unstructured data:

 

  1. A storage architecture capable of holding the data in a robust way
  2. Sufficient processing horsepower to enable the data analyses required by investigators

 

Deployment of these resources assumes the availability of:

 

  1. A data center capable of housing power- and cooling-hungry hardware
  2. Network resources capable of moving large amounts of data quickly

 

These components were available at the Cleveland Clinic in the form of a modern, Tier 3 data center and ubiquitous 10 Gb/sec and 1 Gb/sec network service.

 

The storage problem was brought under control by way of a 1.2 petabyte grid storage system in the data center that replicated to a second 1.2 petabyte system in the Lerner Research Institute server room facility. The ability to store and protect the data was the required first step in maintaining the fundamental capital (data) of our research enterprise.

 

It was equally clear to us that the type of analyses required to turn the data into scientific results had overrun the capacity of even high end desktop workstations and single unit servers of up to four processors. Analyses simply could not be run or would run too slowly to be practical. We had an immediate unmet need in several data processing scenarios:

 

  1. DNA sequence analysis
    a. Whole genome sequencing: DNA methylation
    b. ChIP-seq data: protein-DNA interactions
    c. RNA-seq data: alternative RNA processing studies
  2. Finite element analysis
    a. Biomedical engineering modeling of the knee, ankle, and shoulder
  3. Natural language processing
    a. Analysis of free-text electronic health record notes

 

There was absolutely no question that an HPC cluster was the proper way to provide the necessary horsepower that would allow our investigators to be competitive in producing publishable, actionable scientific results. While a few processing needs could be met using offsite systems where we had collaborative arrangements, an internal resource was appropriate for several reasons:

 

  1. Some data analyses operated on huge datasets that were impractical to transport between locations.
  2. Some data must stay inside the security perimeter.
  3. Development of techniques and pipelines would depend on the help of outside systems administrators and change control processes that we found cumbersome; the sheer flexibility of an internal resource built with responsive industry partners was very compelling based on considerable experience attempting to leverage outside resources.
  4. Given that we had the data center, network and system administration resources, and given the modest price-point, commodity nature of much of the HPC hardware (as revealed by our due diligence process), the economics of obtaining an HPC cluster were practical.

 

Given the realities we faced and after a period of consultation with vendors, we embarked on a system design in collaboration with Dell and Intel. The definitive proof of concept derived from the initial roll out of our HPC solution is that we can run analyses that were impractical or impossible previously.

 

What questions do you have? Are you at the point of considering an internal HPC cluster?


SC14: The Bleeding Edge of Medicine

As SC14 approaches, we have invited industry experts to share their views on high performance computing and life sciences. Below is a guest post from Karl D’Souza, senior user experience specialist at Dassault Systèmes Simulia Corp. Karl will be speaking about the Living Heart Project noted below during SC14 at the Intel booth (#1315) on Wednesday, Nov. 19, at 12:15 p.m. in the Intel Theater and 1 p.m. in the Intel Community Hub.


Computer Aided Engineering (CAE) has become pervasive in the design and manufacture of everything from jumbo jets to razor blades, transforming the product development process to produce more efficient, cost-effective, safe, and easy-to-use products. A central component of CAE is the ability to realistically simulate the physical behavior of a product in real-world scenarios, which greatly facilitates understanding and innovation.

 

Application of this advanced technology to healthcare has profound implications for society, promising to transform the practice of medicine from observation driven to understanding driven. However, lack of definitive models, processes and standards has limited its application, and development has remained fragmented in research organizations around the world.

 

In January of 2014, Dassault Systèmes took the first step to change this and launched the “Living Heart Project,” a translational initiative to partner with cardiologists, researchers, and device manufacturers to develop a definitive, realistic simulation of the human heart. Through this accelerated approach, the first commercial model-centric, application-agnostic, multiphysics whole-heart simulation has been produced.

 

Since cardiovascular disease is the number one cause of morbidity and mortality across the globe, Dassault Systèmes saw the Living Heart Project as the best way to address the problem. Although there is a plethora of medical devices, drugs, and interventions, physicians face the problem of determining which device, drug, or intervention to use on which patient. Oftentimes, invasive procedures are needed to truly understand what is going on inside a patient.

 

CAE and the Living Heart Project will enable cardiologists to take an image (MRI, CT, etc.) of a patient’s heart and reconstruct it as a 3D model, thereby creating a much more personalized form of healthcare. The doctor can see exactly what is happening in the patient’s heart and make a more informed decision about how to treat that patient most effectively.

 

If you will be at SC14 next week, I invite you to join me when I present an overview of the project, the model, results, and implications for personalized healthcare. Come by the Intel booth (1315) on Wednesday, Nov. 19, for a presentation at 12:15 p.m. in the Intel Theater immediately followed by a Community Hub discussion at 1 p.m.

 

What questions do you have about computer aided engineering?


Meet Me @ “The Hub” – Discover, Collaborate, and Accelerate at SC14

“The Hub” is a social invention created by Intel for you to socialize and collaborate on ideas that can help drive your discoveries faster. It is located in the Intel booth (1315) at SC14.

Parallelization and vectorization are just not that easy, and they are even harder if you try to do them alone. At “The Hub,” you will have the opportunity to listen, learn, and share what you and your peers have experienced. The goal is to help you create, improve, or expand your social network of peers engaged in similar optimization and algorithm development. Intel will provide discussion leaders to get the conversation started on various topics, including OpenMP, MKL, HPC, Lustre, and fabrics. We will also hold discussions on the intersection of HPC and specific vertical segments (life sciences, oil and gas, etc.), as well as special events in the collaboration hub, including a book signing with James Reinders and Jim Jeffers and a discussion on women in science and technology.

If you’re heading to the show, stop by and “Meet Us @ The Hub” in booth 1315 for a challenging and intellectual opportunity to talk with your peers about parallelization, vectorization and optimization.

To see the full list of Hub activities, times and topics, check out our schedule.


Bringing the Yin and Yang to Supercomputing

To complete the representation of teams from across the globe, Professor An Hong of the University of Science and Technology of China (USTC) has put together a team composed of Master’s and PhD students and professors from four of China’s prestigious universities. We caught up with her in the midst of preparing for HPC China14, the PAC14 parallel challenges, and the Student Cluster Competition at SC14.

Q1: The growth of the supercomputing industry in China is certainly obvious from the Tianhe-2 and Tianhe-1 supercomputers which are currently ranked #1 and #14 on the Top 500 supercomputer list (http://www.top500.org/list/2014/06/). What impact has this had on computer science at USTC?
A1: With the growth of the supercomputing industry in China, it is apparent that computer science education and research, not only at USTC but also at other universities in China, can now draw on abundant supercomputing resources. But that does not imply that HPC education and research have reached the top level. Being ranked #1 on the TOP500 supercomputer list may say more about funding than about technology progress. So, as computer science professors in a developing China, we should help students understand the value of pursuing advanced computer science education.

 

Q2: What is the meaning or significance of the team name you have chosen (“Taiji”)?
A2: In Chinese philosophy, yin and yang, often shortened to “yin-yang” or “yin yang,” are concepts used to describe how apparently opposite or contrary forces are actually complementary, interconnected, and interdependent in the natural world, and how they give rise to each other as they interrelate.
Taiji (☯) comes about through the balance of yin and yang. Their complementary forces interact to form a dynamic system, just as the transformation and combination of 0 and 1 form a computer system.

 

Taiji’s symbol is composed of yin (black) and yang (white); “black (yin)” can denote “0” and “white (yang)” can denote “1,” and then all things in the world can derive from “0” and “1.” Taiji philosophy, which originated in ancient China about seven thousand years ago, may inspire us in how to build a balanced computer system, from bits to gates and to the higher levels beyond.

 

Q3: What are the names and titles of the other team members who will be participating in the PUCC?
A3:

 

  • AN Hong, Professor, University of Science and Technology of China: Team Captain, the code optimization challenge
  • Liang Weihao, Master Student, University of Science and Technology of China: the code optimization challenge
  • Chen Junshi, PhD Student, University of Science and Technology of China: the code optimization challenge
  • Li Feng, PhD Student, University of Science and Technology of China: the code optimization challenge
  • Shi Xuanhua, Professor, Huazhong University of Science and Technology: the trivia challenge
  • Jin Hai, Professor, Huazhong University of Science and Technology: the trivia challenge
  • Lin Xinhua, Professor, Shanghai Jiao Tong University: the trivia challenge
  • Liang Yun, Professor, Peking University: the trivia challenge

 

Q4: Why is participating in the PUCC important to USTC?
A4: Most of the team’s members come from USTC. USTC’s mission has been to “focus on frontier areas of science and technology and educate top leaders in science and technology for China and the world.” Central to its strategy has been the combination of education and research, as well as an emphasis on quality rather than quantity. Led by the most renowned Chinese scientists of the time, USTC set up a series of programs creatively encompassing frontier research and the development of new technology.

 

The PUCC is a great and interesting activity, not only giving us the opportunity to showcase Intel’s technology innovation but also to draw attention to science endeavors for Chinese children and youth. So, we appreciate the value of the PUCC to our professors and students.

 

Q5: The Supercomputing conference is using the theme “HPC Matters”. Can you tell me why you think HPC matters?
A5: All the HPCers come together to solve some of the critical problems that matter to everyone in the world.

 

Q6: How will your team prepare for the PUCC?
A6: We have just participated in the Parallel Application Challenge 2014 (PAC2014), held at HPC China14 on Nov. 6-8 in Guangzhou, China, and organized by the China Computer Federation Technical Committee of HPC and Intel. This competition required teams to optimize a parallel application provided by the organizers on the Intel Xeon and Xeon Phi computing platforms. Through the competition, we came to deeply understand much of Intel’s innovation in manycore and multicore technology.

 

For more on the Intel Parallel Universe Computing Challenge, visit the home page.


Don’t Miss HPC Luminary, Jack Dongarra, at Intel’s SC14 Booth in New Orleans

If you plan to attend SC14 in New Orleans, you’re in for a treat.

 

Jack Dongarra, one of the HPC community’s iconic speakers, will be holding forth at the Intel booth (#1315), Monday, November 17 at 7:10 pm, on one of his current favorite topics – MAGMA (Matrix Algebra on GPU and Multicore Architectures). His talk is titled, “The MAGMA Project: Numerical Linear Algebra for Accelerators.”

 

To say Jack has the chops to speak on the subject is a major understatement. In addition to being an enthusiastic, informative and entertaining speaker, Jack is a Distinguished Professor of Computer Science at the University of Tennessee (UTK) as well as the director of UTK’s Innovative Computing Laboratory (ICL). And that’s just for openers.

 

You can check out his extensive affiliations and contributions to the design of open source software packages and systems on his LinkedIn page. He has worked on everything from LINPACK, BLAS and MPI to the latest projects that he and his team are developing at ICL, which include PaRSEC, PLASMA, and, of course, MAGMA.

 

They all fall within his rather broad specialty area which includes numerical algorithms in linear algebra, parallel computing, the use of advanced computer architecture, programming methodology, and tools for parallel computers.

 

Jack was part of the team that developed LAPACK and ScaLAPACK. This same team is responsible for designing and implementing the collection of next generation linear algebra libraries that make up MAGMA.

Designed for heterogeneous coprocessor and GPU-based architectures, MAGMA re-implements the functions of LAPACK and the BLAS optimized for the hybrid platform. This allows computational scientists to effortlessly port any software components that rely on linear algebra.

 

One of the main payoffs for using MAGMA is that it allows you to enable applications that fully exploit the power of heterogeneous systems composed of multicore and many-core CPUs and coprocessors. MAGMA allows you to leverage today’s advanced computational capabilities to realize the fastest possible time to an accurate solution within given energy constraints.


A New Intel Parallel Computing Center

In September of this year, UTK’s Innovative Computing Laboratory (ICL) became the newest Intel Parallel Computing Center (IPCC). The objective of the ICL IPCC is the development and optimization of numerical linear algebra libraries and technologies for applications, while tackling current challenges in heterogeneous Intel Xeon Phi coprocessor-based high performance computing.

 

In collaboration with Intel’s MKL team, the IPCC at ICL will modernize the popular LAPACK and ScaLAPACK libraries to run efficiently on current and future manycore architectures, and will disseminate the developments through the open source MAGMA MIC library.

 

Beating the Bottlenecks

This is good news for the members of the HPC community who will shortly be gathering in force in New Orleans. As Dongarra will undoubtedly point out during his presentation, by combining the strengths of different architectures, MAGMA overcomes bottlenecks associated with just multicore or just GPUs, to significantly outperform corresponding packages for any of these homogeneous components taken separately.

 

MAGMA’s one-sided factorizations outperform state-of-the-art CPU libraries on high-end multi-socket, multicore nodes, for example when using up to 48 modern cores. The benefits for two-sided factorizations (the bases for eigenvalue problem and SVD solvers) are even greater: performance can exceed 10X that of systems with 48 modern CPU cores.

 

In September, ICL also announced that MAGMA MIC 1.2 is now available. This release provides implementations of MAGMA’s one-sided (LU, QR, and Cholesky) and two-sided (Hessenberg, bidiagonal, and tridiagonal reduction) dense matrix factorizations, as well as linear and eigenproblem solvers for Intel Xeon Phi coprocessors.
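To make those factorizations concrete, here is a small CPU-only sketch using NumPy/SciPy, whose routines call LAPACK under the hood. This is purely illustrative of the operations MAGMA accelerates; it does not use MAGMA’s own C API.

```python
# The one-sided factorizations named above (LU, QR, Cholesky) via SciPy,
# which dispatches to LAPACK on the CPU. Illustration only, not MAGMA itself.
import numpy as np
from scipy.linalg import lu, qr, cholesky

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 1000))
spd = A @ A.T + 1000 * np.eye(1000)      # symmetric positive definite matrix

P, L, U = lu(A)                          # LU with partial pivoting: A = P L U
Q, R = qr(A)                             # QR: A = Q R
C = cholesky(spd, lower=True)            # Cholesky: spd = C C^T

print(np.allclose(P @ L @ U, A), np.allclose(Q @ R, A), np.allclose(C @ C.T, spd))
```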

 

MAGMA has been developed in C for multicore/manycore systems enhanced with coprocessors. Within a node, MAGMA uses pthreads; MPI is used for inter-node communication. Also included are a MAGMA port in OpenCL and a pragma-based port suitable for Intel MIC-based architectures.

 

Funding for the development is being provided by DOE and NSF, as well as by industry including Intel, NVIDIA, AMD, MathWorks and Microsoft Research.

 

We can’t second-guess what Jack will have to say, but based on the overview on the MAGMA web site, certain themes are likely to surface.

 

For example, it’s evident that the design of future microprocessors and large HPC systems will be heterogeneous and hybrid in nature. They will rely on the integration of many-core CPU technology with special-purpose hardware, accelerators, and coprocessors like the Intel Xeon Phi coprocessor. And we’re not just talking about high-end machines here: everything from laptops to supercomputers and massive clusters will be built from heterogeneous components.

 

Jack and his crew at the University of Tennessee are playing a major role in making that happen.

 

So stop by the Intel booth (#1315) at 7:10 pm on Monday, November 17, and let Jack Dongarra weave you a tale of how HPC is moving into overdrive with the help of advanced aids like MAGMA.

 

You’ll be hearing HPC history in the making.


For a schedule of all Intel Collaboration Hub and Theater Presentations at SC14, visit this blog.


Blueprint: SDN’s Impact on Data Center Power/Cooling Costs

This article originally appeared on Converge Digests on Monday, October 13, 2014.

 

The growing interest in software-defined networking (SDN) is understandable. Compared to traditional static networking approaches, the inherent flexibility of SDN complements highly virtualized systems and environments that can expand or contract in an efficient, business-oriented way. That said, flexibility is not the main driver behind SDN adoption. Early adopters and industry watchers cite cost as a primary motivation.

 

 

 

SDN certainly offers great potential for simplifying network configuration and management, and raising the overall level of automation. However, SDN will also introduce profound changes to the data center. Reconfiguring networks on the fly introduces fluid conditions within the data center.

 

 

How will the more dynamic infrastructures impact critical data center resources – power and cooling?

 

In the past, 20 to 40 percent of data center resources were typically idle at any given time and yet still drawing power and dissipating heat. As energy costs have risen over the years, data centers have had to pay more attention to this waste and look for ways to keep the utility bills within budget. For example, many data centers have bumped up the thermostat to save on cooling costs.

 

 

These types of easy fixes, however, quickly fall short in the data centers associated with highly dynamic infrastructures. As network configurations change, so do the workloads on the servers, and network optimization must therefore take into consideration the data center impact.

 

 

Modern energy management solutions equip data center managers to solve this problem. They make it possible to see the big picture for energy use in the data center, even in environments that are continuously changing.  Holistic in nature, the best-in-class solutions automate the real-time gathering of power levels throughout the data center as well as server inlet temperatures for fine-grained visibility of both energy and temperature. This information is provided by today’s data center equipment, and the energy management solutions make it possible to turn this information into cost-effective management practices.

 

 

The energy management solutions can also give IT intuitive, graphical views of both real-time and historical data. The visual maps make it easy to identify and understand the thermal zones and energy usage patterns for a row or group of racks within one or multiple data center sites.

 

 

Collecting and analyzing this information makes it possible to evolve very proactive practices for data center and infrastructure management. For example, hot spots can be identified early, before they damage equipment or disrupt services. Logged data can be used to optimize rack configurations and server provisioning in response to network changes or for capacity planning.

 

 

Some of the same solutions that automate monitoring can also introduce control features. Server power capping can be introduced to ensure that any workload shifts do not result in harmful power spikes. Power thresholds make it possible to identify and adjust conditions to extend the life of the infrastructure.

 

 

To control server performance and quality of service, advanced energy management solutions also make it possible to balance power and server processor operating frequencies. The combination of power capping and frequency adjustments gives data center managers the ability to intelligently control and automate the allocation of server assets within a dynamic environment.
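As a rough illustration of how such a policy might behave, here is a minimal sketch of a threshold-based capping decision. All names, thresholds, and readings are hypothetical; real energy management tools expose their own interfaces for monitoring and capping.

```python
# Hypothetical threshold-based power-capping policy: cap the hungriest server
# when the rack exceeds its budget, release a cap when headroom returns.
RACK_POWER_BUDGET_W = 3500
UNCAP_HEADROOM_W = 400

def adjust_caps(readings_w, capped):
    """Update the set of capped servers for one monitoring interval."""
    total = sum(readings_w.values())
    if total > RACK_POWER_BUDGET_W:
        hungriest = max((s for s in readings_w if s not in capped),
                        key=readings_w.get, default=None)
        if hungriest is not None:
            capped.add(hungriest)
            print(f"cap {hungriest} (rack at {total} W)")
    elif capped and total < RACK_POWER_BUDGET_W - UNCAP_HEADROOM_W:
        print(f"release cap on {capped.pop()} (rack at {total} W)")
    return capped

readings = {"srv-01": 420, "srv-02": 390, "srv-03": 510, "srv-04": 480,
            "srv-05": 450, "srv-06": 430, "srv-07": 460, "srv-08": 470}
print(adjust_caps(readings, set()))      # sample interval: 3,610 W total
```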

 

 

Early deployments are validating the potential for SDN, but data center managers should take time to consider the indirect and direct impacts of this or any disruptive technology so that expectations can be set accordingly. SDN is just one trend that puts more pressure on IT to be able to do more with less.

 

 

Management expects to see costs go down; users expect to see 100% uptime for the services they need to do their jobs. More than ever, IT needs the right tools to oversee the resources they are being asked to deploy and configure more rapidly. They need to know the impacts of any change on the resource allocations within the data center.

 

 

IT teams planning for SDN must also consider the increasing regulations and availability restrictions relating to energy in various locations and regions. Some utility companies are already unable to meet the service levels required by some data centers, regardless of price. Over-provisioning can no longer be considered a practical safety net for new deployments.

 

 

Regular evaluations of the energy situation in the data center should be a standard practice for technology planning. Holistic energy management solutions give data center managers many affordable tools for those efforts. Today’s challenge is to accurately assess technology trends before any pilot testing begins, and leverage an energy management solution that can minimize the pain points of any new technology project such as SDN.


Bringing Conflict-Free Technology to the Enterprise



In January 2014, Intel accomplished its goal to manufacture microprocessors that are DRC conflict free for tantalum, tin, tungsten, and gold.

 

The journey toward reimagining the supply chain is long and arduous; it’s a large-scale, long-term commitment that demands a precise strategy. For us, it was an extensive five-year plan of collecting and analyzing data, building an overarching business goal, educating and empowering supply chain partners, and implementing changes guaranteed to add business value for years to come. But we committed ourselves to these efforts because of their global impact and our responsibility. As a result, the rewards have outweighed the work by leaps and bounds.

 

Cutting Ties with Conflict Minerals

 

The Democratic Republic of Congo (DRC) is the epicenter of one of the most brutal wars of our time; since 1998, 5.4 million lives have been lost to the ongoing conflict, 50 percent of them five years old or younger. The economy of the DRC relies heavily on the mining sector, while the rest of the world relies heavily on the DRC’s diamonds, cobalt ore, and copper. The stark reality is that the war in the eastern Congo has been fueled by the smuggling of coltan and cassiterite (ores of tantalum and tin, respectively). This means most of the electronic devices we interact with on a daily basis are likely powered by conflict minerals.

 

One of the main reasons most are dissuaded from pursuing an initiative of this scope is that the supply chain represents one of the most decentralized units in the business. Demanding accountability from a complex system is a sizeable endeavor. Intel represents one of the first enterprise tech companies to pursue conflict-free materials, but the movement is starting to gain traction in the greater tech community as customers demand more corporate transparency.

 

Getting the Enterprise Behind Fair Tech

 

For Bas van Abel, CEO of Fairphone, there’s already sizeable consumer demand for fair technology, but there remains a distinct need to prove that a market for it exists. Fairphone is a smartphone featuring an open design built with conflict-free minerals. The company also boasts fair wages and labor practices for its supply chain workforce. When van Abel crowd-funded the first prototype, his goal was to pre-sell 5,000 phones; within three weeks, he had sold 10,000. It’s only a matter of time before awareness gains a foothold and the general public starts demanding conflict-free minerals.

 


We chose to bring the conflict-free initiative to our supply chain because funding armed groups in the DRC was no longer an option. Our hope is that other enterprises will follow suit in analyzing their own supply chains. If you want to learn more about how we embraced innovation by examining our own corporate responsibility and redefining how we build our products, you can read the full brief here.

 

To continue the conversation, please follow us at @IntelITCenter or use #ITCenter.


SC14: Life Sciences Research Not Just for Workstations Anymore

As SC14 approaches, we have invited industry experts to share their views on high performance computing and life sciences. Below is a guest post from Ari E. Berman, Ph.D., Director of Government Services and Principal Investigator at BioTeam, Inc. Ari will be sharing his thoughts on high performance infrastructure and high speed data transfer during SC14 at the Intel booth (#1315) on Wednesday, Nov. 19, at 2 p.m. in the Intel Community Hub and at 3 p.m. in the Intel Theater.


There is a ton of hype these days about Big Data, both in what the term actually means, and what the implications are for reaching the point of discovery in all that data.

 

The biggest issue right now is the computational infrastructure needed to get to that mythical Big Data discovery place everyone talks about. Personally, I hate the term Big Data. The term “big” is very subjective and in the eye of the beholder. It might mean 3PB (petabytes) of data to one person, or 10GB (gigabytes) to someone else.

 

From my perspective, the thing that everyone is really talking about with Big Data is the ability to take the sum total of data that’s out there for any particular subject, pool it together, and perform a meta-analysis on it to more accurately create a model that can lead to some cool discovery that could change the way we understand some topic. Those meta-analyses are truly difficult and, when you’re talking about petascale data, require serious amounts of computational infrastructure that is tuned and optimized (also known as converged) for your data workflows. Without properly converged infrastructure, most people will spend all of their time just figuring out how to store and process the data, without ever reaching any conclusions.

 

Which brings us to life sciences. Until recently, life sciences and biomedical research could largely be done using Excel and simple computational algorithms. Laboratory instrumentation didn’t create that much data at a time, and it could be managed with simple, desktop-class computers and everyday computational methods. Sure, the occasional group created enough data to require mathematical modeling, advanced statistical analysis, or even some HPC, and molecular simulations have always required a lot of computational power. But in the last decade or so, the pace of advancement in laboratory equipment has left a large swath of biomedical research scientists overwhelmed by the amount of data being produced.

 

The decreased cost and increased speed of laboratory equipment, such as next-generation sequencers (NGS) and high-throughput high-resolution imaging systems, has forced researchers to become very computationally savvy very quickly. It now takes rather sophisticated HPC resources, parallel storage systems, and ultra-high speed networks to process the analytics workflows in life sciences. And, to complicate matters, these newer laboratory techniques are paving the way towards the realization of personalized medicine, which carries the same computational burden combined with the tight and highly subjective federal restrictions surrounding the privacy of personal health information (PHI).  Overcoming these challenges has been difficult, but very innovative organizations have begun to do just that.

 

I thought it might be useful to very briefly discuss the three major trends we see having a positive effect on life sciences research:

 

1. Science DMZs: There is a rather new movement toward the implementation of specialized research-only networks that prioritize fast and efficient data flow over security (while still maintaining security), also known as the Science DMZ model (http://fasterdata.es.net). These implementations are making it easier for scientists to get around tight enterprise networking restrictions without violating the security policies of their organizations, so they can move their data effectively without upsetting their compliance officers.


2. Hybrid Compute/Storage Models: There is a huge push toward cloud-based infrastructure, but organizations are realizing that too much persistent cloud infrastructure can be more costly in the long term than local compute. The answer is a small local compute infrastructure that handles the really hard problems and the persistent services, hybridized with public cloud infrastructure that is orchestrated to be brought up automatically when needed and torn down when not; all of it managed by a software layer that sits in front of the backend systems (a minimal sketch of this bring-up and tear-down logic appears after these three trends). This model looks promising as the most cost-effective and flexible method, balancing local hardware life-cycle issues and support personnel against the dynamic needs of scientists.


3. Commodity HPC/Storage: The biggest trend in life sciences research is the push towards the use of low-cost, commodity, white box infrastructures for research needs. Life sciences has not reached the sophistication level that requires true capability supercomputing (for the most part), thus, well-engineered capacity systems built from white-box vendors provide very effective computational and storage platforms for scientists to use for their research. This approach carries a higher support burden for the organization because many of the systems don’t come pre-built or supported overall, and thus require in-house expertise that can be hard to find and expensive to retain. But, the cost balance of the support vs. the lifecycle management is worth it to most organizations.
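For the hybrid model in trend 2, the sketch below shows the shape of that bring-up and tear-down logic: keep the local cluster busy, add cloud workers only when the queue outgrows it, and release them as the queue drains. The provisioning calls are placeholders, not a real cloud provider's API.

```python
# Minimal sketch of a hybrid "burst to cloud" policy: spin up cloud workers
# only when the queue outgrows local capacity, tear them down when it drains.
# The provision/teardown calls are placeholders, not a real orchestration API.

LOCAL_NODES = 16
JOBS_PER_NODE = 4

def provision_cloud_nodes(n):      # placeholder for a real orchestration call
    print(f"provisioning {n} cloud node(s)")

def teardown_cloud_nodes(n):       # placeholder for a real orchestration call
    print(f"tearing down {n} cloud node(s)")

def rebalance(queued_jobs, cloud_nodes):
    """Return the new cloud node count for the current queue depth."""
    needed = -(-queued_jobs // JOBS_PER_NODE)          # ceiling division
    target_cloud = max(0, needed - LOCAL_NODES)
    if target_cloud > cloud_nodes:
        provision_cloud_nodes(target_cloud - cloud_nodes)
    elif target_cloud < cloud_nodes:
        teardown_cloud_nodes(cloud_nodes - target_cloud)
    return target_cloud

cloud = 0
for depth in (40, 120, 300, 90, 10):                   # sample queue depths
    cloud = rebalance(depth, cloud)
```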

 

Biomedical scientific research is the latest in the string of scientific disciplines that require very creative solutions to their data generation problems. We are at the stage now where most researchers spend a lot of their time just trying to figure out what to do with their data in the first place, rather than getting answers. However, I feel that the field is at an inflection point where discovery will start pouring out as the availability of very powerful commodity systems and reference architectures come to bear on the market. The key for life sciences HPC is the balance between effectiveness and affordability due to a significant lack of funding in the space right now, which is likely to get worse before it gets better. But, scientists are resourceful and persistent; they will usually find a way to discover because they are driven to improve the quality of life for humankind and to make personalized medicine a reality in the 21st century.

 

What questions about HPC do you have?


Empowering Field Workers Through Mobility

The unique, often adverse, working conditions facing utility field workers require unique mobility solutions. Not only do workers in these roles spend the majority of their time on the road, their work often takes them to places where the weather and terrain are less than hospitable. Despite all of the challenges facing this large mobile workforce, new tablets and other mobile devices are increasing productivity and reducing downtime for workers.

 

Field workers need a device that supports them whether they’re on the road or in the office. A recent RCR Wireless guest blogger summed up the needs of utility field workers by comparing them to the “front lines” of an organization:

 


Field workers are at the front lines of customer service … and therefore need to be empowered to better serve customers. They require mobile applications that offer easier access to information that resides in corporate data centers.

 

Previously, this “easy access” to data centers was limited to service center offices and some mobile applications and devices. Now, however, advances in tablet technology enable workers to take a mobile office with them everywhere they go.


Tough Tablets for Tough Jobs


With numerous tablets and mobile PCs on the market, it’s difficult to determine which mobile solution provides the best experience for the unique working conditions of field workers. In order to move through their work, these users need a device that combines durability, connectivity, security, and speed.

 

Principled Technologies recently reviewed an Apple iPad Air, Samsung Galaxy Tab Pro 12.2, and a Motion Computing R12 to determine which tablet yields the most benefits for utility field workers. After comparing performance among the devices with regards to common scenarios field workers face on a daily basis, one tablet emerged as a clear favorite for this workforce.

 

While the iPad and Galaxy feature thin profiles and sleek frames, the Intel-powered Motion Computing R12 received a MIL-STD-810G impact resistance rating (a U.S. military standard) as well as international accreditation (IP54 rating) for dust and water resistance. The device also hits the mark with its biometric security features and hot-swappable 8-hour battery.

 

Communication between utility workers and dispatching offices is often the key to a successful work day. Among the three tablets, the Motion Computing R12 was the only device able to handle a Skype call and open and edit an Excel document simultaneously. This kind of multi-tasking ability works seamlessly on this tablet because it runs Microsoft Windows 8.1 natively on a fast Intel processor and also boasts 8 GB of RAM (compared to 1 GB in the iPad and 3 GB in the Galaxy).

 

At the end of the day, having the right device can lead to more work orders completed and better working conditions for field workers. Empowering field workers with the right tools can remove many of the technical hurdles that they face and lead to increases in productivity and reduced inefficiencies.

 

To learn more about the Motion Computing R12, click here.


Boosting Big Data Workflows for Big Results

When working with small data, it is relatively easy to manipulate, wrangle, and cope with all of the different steps in the data access, data processing, data mining, and data science workflow. All of the various steps become familiar and reproducible, often manually. These steps (and their sequences) are also relatively simple to adjust and extend. However, as data collections become increasingly massive, distributed, and diverse, while also demanding more real-time response and action, the challenges become enormous: it becomes hard to extend, modify, reproduce, document, or do anything new within your data workflow. This is a serious problem, because data-driven workflows are the life and existence of big data professionals everywhere: data scientists, data analysts, and data engineers.


Workflows for Big Data Professionals

Data professionals perform all types of data functions in their workflow processes: archive, discover, access, visualize, mine, manipulate, fuse, integrate, transform, feed models, learn models, validate models, deploy models, etc. It is a dizzying day’s work. We start our workflow development manually, identifying what needs to happen at each stage of the process, what data are needed, when they are needed, where the data need to be staged, what the inputs and outputs are, and more. If we are really good, we can improve our efficiency in performing these workflows manually, but not substantially. A better path to success is to employ a workflow platform that is scalable (to larger data), extensible (to more tasks), more efficient (shorter time-to-solution), more effective (better solutions), adaptable (to different user skill levels and to different business requirements), comprehensive (providing a wide scope of functionality), and automated (to break the time barrier of manual workflow activities). The “Big Data Integration” graphic below from http://www.apervi.com/ identifies several of the business needs, data functions, and challenge areas associated with these big data workflow activities.

big_data_inforgaphic_Edit.jpg


All-in-one Data Workflow Platform

A workflow platform that performs a few of those data functions for a specific application is nothing new – you can find solutions that deliver workflows for business intelligence reporting, or analytic processing, or real-time monitoring, or exploratory data analysis, or for predictive analytic deployments. However, when you find a unified big data orchestration platform that can do all of those things – that brings all the rivers of data into one confluence (like the confluence of the Allegheny and Monongahela Rivers that merge to form the Ohio River in the eastern United States) – then you have a powerful enterprise-level big data orchestration capability for numerous applications, users, requirements, and data functions.  The good news is that there is a company that offers such a platform: Apervi is that company, and Conflux is that confluence.

 

Apervi is a big data integration development company. From Apervi’s comprehensive collection of product documentation, you can learn about all of the features and benefits of their Conflux product. The system has several components: Designer, Monitor, Dashboard, Explorer, Scheduler, and Connector Pack. We highlight and describe each of these components below:

 

    • The Conflux Designer is an intuitive HTML5 user interface for designing, building, and deploying workflows, using simple drag-and-drop interactivity. Workflows can be shared with other users across the business.
    • The Conflux Monitor keeps track of job progress, with key statistics available in real-time, from any device, any browser, anywhere.  Drilldown capabilities empower exploratory analysis of any job, enabling rapid response and troubleshooting.
    • The Conflux Dashboard provides rich visibility into KPIs and job stats, on a fully customizable screen that includes a variety of user-configurable alert and notification widgets. The extensible dashboard framework can also integrate custom dashboard widgets.
    • The Conflux Explorer puts search, discovery, and navigation powers into the hands of the data scientist, enabling that functionality across multiple data sources simultaneously. A mapping editor allows the user to locate and extract the relevant, valuable, and interesting information nuggets within targeted data streams.
    • The Conflux Scheduler is a flexible, intuitive scheduling and execution tool, which is extensible and can be integrated with third party products.
    • The Conflux Connector Pack is perhaps the single most important piece of the workflow puzzle: it efficiently integrates and connects data streaming from many disparate, heterogeneous sources. Apervi provides several prebuilt connectors for specific industry segments, such as Telecom, Healthcare, and Electronic Data Interchange (EDI). (A vendor-neutral sketch of how such pieces might be tied together follows the diagram below.)

AperviConfluxDiagram.png
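
As a rough illustration of what bringing these functions together can look like, the sketch below shows a hypothetical, vendor-neutral workflow definition in Python. None of the field names, connector types, or tasks are taken from Apervi’s documentation; they are assumptions meant only to show how connectors, ordered steps, a schedule, and alerting might be declared in one place instead of scattered across ad hoc scripts.

# A hypothetical, vendor-neutral workflow definition (not Apervi's Conflux API).
# It declares connectors, ordered steps, a schedule, and alerting together.

workflow = {
    "name": "claims_ingest_and_score",
    "connectors": {                      # where data comes from and goes to
        "input": {"type": "hdfs", "path": "/data/claims/incoming"},
        "output": {"type": "jdbc", "table": "scored_claims"},
    },
    "steps": [                           # ordered data functions
        {"task": "validate", "on_error": "quarantine"},
        {"task": "transform", "mapping": "claims_v2"},
        {"task": "score", "model": "fraud_model_2014_q4"},
    ],
    "schedule": "0 * * * *",             # cron syntax: run hourly
    "alerts": {"on_failure": ["ops-team@example.com"]},
}

def validate_spec(spec):
    # Basic sanity checks a scheduler might run before accepting a job.
    assert spec["steps"], "workflow must define at least one step"
    assert "input" in spec["connectors"], "an input connector is required"
    return True

print(validate_spec(workflow))   # True

In a real orchestration platform, the equivalent definition would typically be built in a designer interface or a configuration format and handed to a scheduler and monitor; the value is that the whole workflow is described, validated, and tracked in one place.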


Big Benefits from a Seamless Confluence of Data Workflow Functions

For organizations that are trying to cope with big data and manage complex big data workflows, a multi-functional, user-oriented workflow platform like Apervi’s Conflux can boost results in several ways. These benefits include:

    • Reduced operational costs
    • Faster results, from data discovery to information-based decision-making
    • Accelerated development of data-based products across verticals and business functions
    • Effective integration management through monitoring and intelligent insights


For more information, Apervi provides detailed white papers, datasheets, product documentation, case studies, and infographics on their website at http://www.apervi.com/.

 

Dr. Kirk Borne is a Data Scientist and Professor of Astrophysics and Computational Science in the George Mason University School of Physics, Astronomy, and Computational Sciences. He received his B.S. degree in physics from LSU and his Ph.D. in astronomy from the California Institute of Technology. He has been at Mason since 2003, where he teaches graduate and undergraduate courses in Data Science and advises many doctoral dissertation students in Data Science research projects. He focuses on achieving big discoveries from big data and promotes the use of data-centric experiences with big data in the STEM education pipeline at all levels. He promotes the “Borne Ultimatum” — data literacy for all!

 

Connect with Kirk on LinkedIn.

Follow Kirk on Twitter at @KirkDBorne.

Read more of his blogs at http://rocketdatascience.org/

Read more >

SC14: HPC and Big Data in Healthcare and Life Sciences

What better place to talk life sciences big data than the Big Easy? As temperatures are cooling down this month, things are heating up in New Orleans, where Intel is hosting talks on life sciences and HPC next week at SC14. It’s all happening in the Intel Community Hub, Booth #1315, so swing on by and hear about these topics from industry thought leaders:

     

Think big: delve deeper into the world’s biggest bioinformatics platform. Join us for a talk on the CLC bio enterprise platform, and learn how it integrates desktop interfaces with high performance cluster resources. We’ll also discuss hardware and explore the scalability requirements needed to keep pace with the Illumina HiSeq X Ten sequencing platform and with a production cluster environment based on the Intel® Xeon® processor E5-2600 v3 family. When: Nov. 18, 3-4 p.m.

     

Special Guests:

Lasse Lorenzen, Head of Platform & Infrastructure, Qiagen Bioinformatics;

Shawn Prince, Field Application Scientist, Qiagen Bioinformatics;

Mikael Flensborg, Director Global Partner Relations, Qiagen Bioinformatics

     

Find out how HPC is pumping new life into the Living Heart Project. Simulating diseased states and personalizing medical treatments require significant computing power. Join us for the latest updates on the Living Heart Project, and learn how creating realistic multiphysics models of human hearts can lead to groundbreaking approaches to both preventing and treating cardiovascular disease. When: Nov. 19, 1-2 p.m.

     

Special Guest: Karl D’Souza, Business Development, SIMULIA Asia-Pacific

     

Get in sync with scientific research data sharing and interoperability. In 1989, the quest for global scientific collaboration helped lead to the birth of the World Wide Web. In this talk, Aspera and BioTeam will discuss where we are today with new advances in global scientific data collaboration. Join them for an open discussion exploring the newest offerings for high-speed data transfer across scientific research environments. When: Nov. 19, 2-3 p.m.

     

Special Guests:

Ari E. Berman, PhD, Director of Government Services and Principal Investigator, BioTeam;

Aaron Gardner, Senior Scientific Consultant, BioTeam;

Charles Shiflett, Software Engineer, Aspera

     

Put cancer research into warp speed with new informatics technology. Take a peek under the hood of the world’s first comprehensive, user-friendly, and customizable cancer-focused informatics solution. The team from Qiagen Bioinformatics will lead a discussion on CLC Cancer Research Workbench, a new offering for the CLC bio Cancer Genomics Research Platform. When: Nov. 19, 3-4 p.m.

     

Special Guests:

Shawn Prince, Field Application Scientist, Qiagen Bioinformatics;

Mikael Flensborg, Director Global Partner Relations, Qiagen Bioinformatics

     

You can see more Intel activities planned for SC14 here.

 

What are you looking forward to seeing at SC14 next week?

Read more >