Recent Blog Posts

Transform IT Episode 5 Recap: The Intersection of Technology and Humanity

What does it mean to be a futurist?

 

In episode 5 of the Transform IT show, I sat down with James Jorasch and Rita J. King, a powerhouse couple, futurists, and the founders of Science House. We talked about the unique and distinct journeys that led each of them to this point at which, in their view, technology and humanity are intersecting.

[Image: Behind the scenes at Transform IT]

We talked about the perspectives that cause them to be viewed as “futurists” and what we can each do to develop that viewpoint. Most importantly, we talked about how important it is to recognize that we are in control, and that we can shape everything: our culture, our future, and our destiny.

 

They challenged us to move ourselves away from those things that are known and comfortable and to be willing to immerse ourselves in the unknown. They challenged us to employ diligent practice to develop our skills, but to not stop there – to then expand our horizons into seemingly disconnected areas. And then to simply let things simmer and percolate.

 

It was a fascinating look into a different way of seeing the world around us.

 

Perhaps the greatest challenge came from Rita as she encouraged each of us to “have an adventure”. When was the last time in our busy, corporate lives that we thought of ourselves as on an adventure?

 

Probably never.

 

Yet her challenge was a way of reminding us that the only way to prepare for an uncertain and rapidly evolving future is to put ourselves in a state of adventure. We need to be open to new ideas and new perspectives. We need to be willing to challenge the status quo and be open to connections that might not seem to make any sense on the surface.

 

It can be a tall order for most IT professionals. We’re more comfortable with things that we can see and touch. But I believe that we need to embrace this kind of perspective if we are going to remain relevant as our world transforms around us.

 

And I believe that’s what it means to be a futurist.

 

A futurist is not someone who knows what the future holds. Rather, a futurist is simply someone who is in a constant state of exploration about the future and who is open to wherever that future may lead.

 

So the question for you is: how will you rise to this challenge? How will you begin your adventure and start thinking a bit more like a futurist tomorrow?

 

Share your “morning action” with us in the comments section below. And you can also join in the conversation any time on Twitter using the hashtags #ITChat and #TransformIT.

 

And, if you missed episode 5 of the Transform IT show, you can watch it here. Also, make sure to tune in on December 2 when I’ll be talking to Brian Vellmure on becoming an IT outsider.

 

If you haven’t had a chance to read my book, “The Quantum Age of IT”, you can download the first chapter for free here.

Read more >

On the Ground at SC14: The Intel HPC Developer Conference

SC14 is officially under way, but the Intel team got in on the action a bit early, and I’m not just talking about the setup for our massive booth (1315 – stop by to see the collaboration hub, theater talks, and end-user demos). Brent Gorda, GM of the High Performance Data Division, gave a presentation at HP-Cast on Friday on the Intel Lustre roadmap. A number of other Intel staffers gave presentations ranging from big data to fabrics to exascale computing, as well as a session on the future of the Intel technical computing portfolio. The Intel team also delivered an all-day workshop on OpenMP on the opening day of SC14.

 

On Sunday, Intel brought together more than 350 members of the community for an HPC Developer Conference at the Marriott Hotel to discuss key topics including high fidelity visualization, parallel programming techniques/software development tools, hardware and system architecture, and Intel Xeon Phi coprocessor programming.

 

The HPC Developer Conference kicked off with a keynote from Intel’s Bill Magro discussing the evolution of HPC – helping users gain insight and accelerate innovation – and where he thinks the industry is headed:

 

 

Beyond what we think of as traditional research supercomputing (weather modeling, genome research, etc.), there is a world of vertical enterprise segments that can also see massive benefits from HPC. Bill used the manufacturing industry as an example: SMBs with 500 or fewer employees could see huge benefits from digital manufacturing with HPC, but they need to get past cost (hardware, applications, staff training) and perceived risk (no physical testing is a scary prospect). This might be a perfect use case for HPC in the cloud – pay-as-you-go would lower the barriers to entry, and as the use case is proven and the need grows, users can move to a more traditional HPC system.

 

Another key theme for the DevCon was code modernization. To truly take advantage of coprocessors, apps need to be parallelized. Intel is working with the industry via more than 40 Intel Parallel Computing Centers around the world to increase parallelism and scalability through optimizations that leverage cores, caches, threads, and vector capabilities of microprocessors and coprocessors. The IPCCs are working in a number of areas to optimize code for Xeon Phi including aerospace, climate/weather modeling, life sciences, molecular dynamics and manufacturing. Intel also recently launched a catalog of more than 100 applications and solutions available for the Intel Xeon Phi coprocessor.
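To make the parallelism point a little more concrete, here is a minimal, generic sketch (my illustration, not code from any Intel Parallel Computing Center project) of the kind of change code modernization typically starts with: an OpenMP 4.0 pragma that spreads a simple kernel across cores and asks the compiler to vectorize it.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative kernel only: thread the loop across cores and hint the
 * compiler to vectorize it (OpenMP 4.0 "parallel for simd").
 * Build with an OpenMP-capable compiler, e.g. gcc -O2 -fopenmp.       */
void scaled_add(const double *restrict x, const double *restrict y,
                double *restrict z, double a, long n)
{
    #pragma omp parallel for simd
    for (long i = 0; i < n; i++)
        z[i] = a * x[i] + y[i];
}

int main(void)
{
    long n = 1 << 20;
    double *x = malloc(n * sizeof *x);
    double *y = malloc(n * sizeof *y);
    double *z = malloc(n * sizeof *z);
    if (!x || !y || !z) return 1;

    for (long i = 0; i < n; i++) { x[i] = (double)i; y[i] = 2.0 * i; }

    scaled_add(x, y, z, 3.0, n);
    printf("z[42] = %.1f\n", z[42]);   /* 3*42 + 2*42 = 210.0 */

    free(x); free(y); free(z);
    return 0;
}
```

The real work in projects like the IPCCs is of course far more involved (restructuring data layouts, blocking for cache, scaling across nodes), but pragmas like this are usually the first step.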

 

Over in the Programming for Xeon Phi Coprocessor track, Professor Hiroshi Nakashima from Kyoto University gave a presentation on programming for Xeon Phi on a Cray XC30 system. The university has five supercomputers (Camphor, Magnolia, Camellia, Laurel, and Cinnamon), and this talk covered programming for Camellia. The main challenges Kyoto University faced: inter-node programming is tough, intra-node programming is tougher, and intra-core programming is toughest (he described having to rewrite innermost kernels and redesign data structures for the intra-core level). He concluded that simple porting or large-scale multi-threading may not be sufficient for good performance, and that SIMD-aware kernel recoding and redesign may be necessary.

 

Professor Hiroshi Nakashima’s slide on the Camellia supercomputer
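To illustrate what “SIMD-aware kernel recoding and data-structure redesign” tends to mean in practice, here is a generic sketch of mine (not Professor Nakashima’s code) of the classic array-of-structures to structure-of-arrays rewrite that lets a compiler generate unit-stride vector loads:

```c
#include <stdio.h>
#include <stdlib.h>

/* Array-of-structures (AoS): x, y, z are interleaved in memory, so a
 * vector load of consecutive x values has to stride by three doubles. */
typedef struct { double x, y, z; } particle_aos;

void push_aos(particle_aos *p, double dt, long n)
{
    for (long i = 0; i < n; i++)
        p[i].x += dt * p[i].y;             /* strided access, poor SIMD */
}

/* Structure-of-arrays (SoA): each field is contiguous, so the compiler
 * can issue full-width, unit-stride vector loads and stores.           */
typedef struct { double *x, *y, *z; } particles_soa;

void push_soa(particles_soa *p, double dt, long n)
{
    double *restrict x = p->x;
    const double *restrict y = p->y;
    #pragma omp simd                       /* unit stride, vectorizes well */
    for (long i = 0; i < n; i++)
        x[i] += dt * y[i];
}

int main(void)
{
    long n = 1000;
    particles_soa p = { malloc(n * sizeof(double)),
                        malloc(n * sizeof(double)),
                        malloc(n * sizeof(double)) };
    if (!p.x || !p.y || !p.z) return 1;
    for (long i = 0; i < n; i++) { p.x[i] = 0.0; p.y[i] = (double)i; p.z[i] = 0.0; }

    push_soa(&p, 0.5, n);
    printf("x[10] = %.1f\n", p.x[10]);     /* 0.5 * 10 = 5.0 */

    free(p.x); free(p.y); free(p.z);
    return 0;
}
```

On wide-vector hardware like Xeon Phi, that layout change alone often matters more than any amount of additional threading.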


Which brings me to possibly the hottest topic of the Developer Conference: the next Intel Xeon Phi processor (codename Knights Landing). Herbert Cornelius and Avinash Sodani took the stage to give a few more details on the highly anticipated processor arriving next year:

  • It will be available as a self-boot processor (alleviating PCI Express bottlenecks), as a self-boot processor with integrated fabric (Intel Omni-Path Architecture), or as an add-in card
  • Binary compatible with Intel Xeon processors (runs all legacy software with no recompiling)
  • The new core is based on the Silvermont microarchitecture with many updates for HPC (offering 3x higher single-thread performance than the current generation of Intel Xeon Phi coprocessors)
  • Improved vector density (3+ teraflops (DP) peak per chip)
  • AVX-512 ISA (new 512-bit vector ISA with masks; a hedged intrinsics sketch follows after this list)
  • Scatter/gather engine (hardware support for gather/scatter)
  • New memory technology, MCDRAM + DDR (MCDRAM as large, high-bandwidth memory and DDR as high-capacity bulk memory)
  • New on-die interconnect, MESH (a high-bandwidth connection between cores and memory)
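For a flavor of what a 512-bit vector ISA with masks looks like to a developer, here is a small, purely illustrative AVX-512F intrinsics sketch (mine, not Intel sample code); it assumes a compiler and CPU with AVX-512 support (e.g. build with -mavx512f):

```c
#include <stdio.h>
#include <immintrin.h>

/* Masked 512-bit add: only lanes whose mask bit is set take a + b;
 * the remaining lanes keep the value of the "src" operand (here: va). */
int main(void)
{
    double a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    double b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    double c[8];

    __m512d va = _mm512_loadu_pd(a);
    __m512d vb = _mm512_loadu_pd(b);

    __mmask8 mask = 0x0F;                        /* lanes 0..3 only */
    __m512d vc = _mm512_mask_add_pd(va, mask, va, vb);

    _mm512_storeu_pd(c, vc);
    for (int i = 0; i < 8; i++)
        printf("c[%d] = %.0f\n", i, c[i]);       /* 11 22 33 44 5 6 7 8 */
    return 0;
}
```

In real kernels you rarely write intrinsics by hand; the point of the masks is that compilers can vectorize loops containing conditionals without peeling or branching.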

 

Next Intel Xeon Phi Processor (codename Knights Landing)

 

Another big priority for Intel is high fidelity visualization, as the phenomena being measured and modeled grow increasingly complex. Jim Jeffers led a track on the subject and gave an overview presentation covering a couple of trends (increasing data size – no surprise there – and increasing shading complexity). He then touched on Intel’s high fidelity visualization solutions, including software (Embree, the foundation for ray tracing in use by DreamWorks, Pixar, Autodesk, and others) and efficient use of compute cluster nodes. Jim wrapped up by discussing an array of technical computing rendering tools developed by Intel and partners, all working to enable higher fidelity, higher capability, and better performance to move visualization work to the next level.

 

Jim Jeffers’s Visualization Tools Roadmap


These are just a few of the more than 20 sessions and topics (Lustre! Fabrics! Intel Math Kernel Library! Intel Xeon processor E5 v3!) at the HPC Developer Conference. The team is planning to post the presentations to the website in the next week, so check back for conference PDFs. If you attended the conference, please fill out your email survey – we want to hear from you on what worked and what didn’t. And if we missed you this year, drop us an email (contact info is at the bottom of the conference homepage) and we’ll make sure you get an invite in the future.

Read more >

SC14 Podcast: How HPC Impacts Alzheimer’s Research

 

With SC14 kicking off today, it’s timely to look at how high performance computing (HPC) is impacting today’s valuable life sciences research. In the podcast above, Dr. Rudy Tanzi, the Joseph P. and Rose F. Kennedy Professor of Neurology at Harvard Medical School and Director of the Genetics and Aging Research Unit at the MassGeneral Institute for Neurodegenerative Disease, talks about his pioneering research in Alzheimer’s disease and how HPC is critical to the path forward.

 

Listen to the conversation and hear how Dr. Tanzi says HPC still has a ways to go to provide the compute power that life sciences researchers need. What do you think?

 

What questions about HPC do you have? 

 

If you’re at SC14, remember to come by the Intel booth (#1315) for life sciences presentations in the Intel Community Hub and Intel Theater. See the schedules here.

Read more >

SGI has built a revolutionary system for NVMe storage scaling at SC14!

Intel launched its Intel® Solid-State Drive Data Center Family for PCIe, based on the NVMe specification, in June 2014. But even in a world of amazing possibilities with these products, we still want more. Why? Isn’t a half million IOPS and three gigabytes per second out of a single card enough for supercomputing workloads? Not always, and not for every application. Here are a couple of reasons why we need more performance and how that’s possible: we really want to scale both the performance and the density.

 

Consistent performance is a big part of the answer. Intel SSDs help deliver consistent performance across different workloads, including mixed ones, which are the worst-case scenario for a drive. That applies to a wide range of Data Center products, whether SATA or PCIe. Performance scaling of SATA SSDs is limited by HBA or RAID controller performance, SAS topology, and the related interface latency. You can scale nearly linearly within a limited range until that threshold is reached; after that you gain nothing but increased access latency for the RAID configuration. A single drive from our Intel PCIe SSD product line (the P3700) can outperform at least six SATA SSDs (S3700) across a range of 4K random workloads while maintaining a lower latency than a single SATA SSD. (See the chart below.)

[Chart: a single Intel SSD DC P3700 vs. multiple SATA SSDs (S3700) on 4K random workloads]

1 Source: Intel. Measurements made on a Hanlan Creek (Intel S5520HC) system with two Intel® Xeon® X5560 processors @ 2.93 GHz and 12 GB (per CPU) of memory running the RHEL 6.4 OS; Intel S3700 SATA Gen3 SSDs connected to an LSI* HBA 9211; the NVMe* SSD is under development; data collected with the FIO* tool.
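The footnote above mentions that the data were collected with the FIO tool. Purely to show the mechanics behind such a measurement (and not the methodology used for the chart), here is a minimal C sketch that issues 4K random reads with O_DIRECT against a block device and reports IOPS. The device path is a placeholder, it runs a single thread at queue depth 1, and it needs appropriate permissions; real tools like FIO drive many workers and deep queues to reach the figures discussed here.

```c
#define _GNU_SOURCE                 /* for O_DIRECT on Linux */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *dev = (argc > 1) ? argv[1] : "/dev/nvme0n1";  /* placeholder path */
    const size_t bs = 4096;                                   /* 4K requests */
    const long iters = 100000;

    int fd = open(dev, O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    off_t dev_size = lseek(fd, 0, SEEK_END);
    long blocks = dev_size / (off_t)bs;
    if (blocks <= 0) { fprintf(stderr, "device too small\n"); return 1; }

    void *buf;                                   /* O_DIRECT needs aligned memory */
    if (posix_memalign(&buf, bs, bs)) { perror("posix_memalign"); return 1; }

    srand(42);
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++) {
        off_t off = (off_t)(rand() % blocks) * (off_t)bs;     /* random aligned block */
        if (pread(fd, buf, bs, off) != (ssize_t)bs) { perror("pread"); return 1; }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.0f IOPS (4K random read, QD1, single thread)\n", iters / sec);

    free(buf);
    close(fd);
    return 0;
}
```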

 

But how does performance scale with multiple drives in one system? Given the latency reduction that comes from moving to the PCIe interface and the NVMe protocol, plus the high QoS of the P3x00 product line, it’s hard to predict how far we can take this.

 

Obviously, we have a limited number of PCIe lanes per CPU, which depends on the CPU generation and architecture as well as the system, thermal, and power design. Each P3700 SSD takes four PCIe Gen3 lanes. To evaluate the scaling of NVMe SSDs, we would like to avoid using PCIe switches and multiplexers. How about a big multi-socket scale-up system based on 32 Xeon E7 CPUs as a test platform? That looks very promising for investigating NVMe scaling.

 

SGI presented an interesting all-flash concept at SC14: a 32-socket system with 64 Intel® Solid-State Drive DC P3700 800GB drives, running a single SLES 11 SP3 Linux OS.

http://www.intel.com/content/www/us/en/solid-state-drives/intel-ssd-dc-family-for-pcie.html

 

That offers a great opportunity to see what happens to performance scaling inside this massive single-image system. It turns out to be a true record: 30M IOPS on a 4K random read workload! Let’s have a look at the scaling progression:

[Chart: IOPS and GB/s vs. number of SSDs for 4K and 128K random read workloads]

The data above were measured at SGI’s labs on a concept platform based on 32 Intel Xeon E7 CPUs.

 

The chart plots IOPS and GB/s against the number of SSDs for a 4K random read workload (blue line) and a 128K random read workload (red line), from the testing done at SGI’s labs. Each SSD, in the add-in-card PCIe form factor, works independently with its own controller. We are not worried about additional software RAID overhead here; we are only interested in the raw device performance. The dotted lines represent a linear approximation, while the solid lines connect the experimentally measured points.

 

Hard to believe? Come to SGI’s booth 915 and talk to them about it.

Read more >

Better Patient Care Starts With Better Technology

Home healthcare practitioners need efficient, reliable access to patient information no matter where they go, so they need hardware solutions that meet their unique needs. Accessing critical patient information, managing patient files, multitasking seamlessly, and locating a patient’s residence are daily tasks for mobile healthcare professionals. Mobile practitioners don’t have access to the same resources they would have in a hospital, so the tools they use are that much more critical to accomplishing their workload. Fortunately, advances in mobile computing have created opportunities to bridge that gap.

 

An Evolved Tablet For Healthcare Providers


As tablets have evolved, they’ve become viable replacements for clunky laptops. Innovation in the mobile device industry has transformed these devices from media consumption platforms and calendar assistants into robust workhorses that run full-fledged operating systems. However, when it comes to meeting the needs of home healthcare providers, not all tablets are created equal.

                    

A recent Prowess Consulting comparison looked at two popular devices for tasks commonly performed by home healthcare workers. The study compared an Apple® iPad Air™ and a Microsoft® Surface™ Pro 3 to determine which device offers a better experience for home healthcare providers and, ultimately, their patients.

 

Multitasking, Done Right

 

One of the biggest advantages of the Surface™ Pro 3 is its ability to let users multitask. For example, a healthcare worker can simultaneously load and display test results, charts, and prescription history via the device’s split-screen capabilities. A user trying to perform the same tasks on the iPad would run into the device’s limitations; there are no split-screen multitasking options on the iPad Air™.

 

The Surface™ Pro 3’s powerful multitasking abilities, combined with its ability to natively run Microsoft Office, give home healthcare providers the ability to focus more time on patient care and less on administrative tasks. Better user experience, workflow efficiency, file access speed, and split-screen multitasking all point to the Microsoft® Surface™ Pro 3 as the better platform for home healthcare providers.

 

For a full rundown of the Surface™ Pro 3’s benefits to home healthcare workers, click here.

Read more >

Talk Innovation and Pathfinding with Intel at SC14

Karl Solchenbach is the Director of IPAG Europe, including Intel’s Exascale Labs in EMEA

 

This year the annual HPC and supercomputing conference and exhibition is back in New Orleans, returning to the city after a four-year absence. From Nov. 16-21, SC14 will host more than 10,000 participants, who will exchange the newest results in HPC research, meet their worldwide peers, and learn about new and innovative HPC products. For HPC vendors, SC14 is the biggest forum for presenting new HPC hardware, software, and innovations, and a unique opportunity to meet their global customers.

 

As usual, Intel will have a large booth with many activities highlighting the pace of discovery and innovation: compelling demos from end users showcasing topics like climate and environmental modeling and airflow simulations, informal Collaboration Hub discussions with Intel and industry experts, and short theater presentations on a variety of topics surrounding code modernization. A schedule of Intel activities can be found here.

 

Intel’s Innovation, Architecture and Pathfinding Group (IPAG), led by Curt Aubley (VP and CTO of the Data Center Group and GM of IPAG), will have a strong presence at SC14. This group is looking into the future of HPC and exascale computing, with a focus on low-power processors and interconnects, innovative software concepts, and various technology projects with the U.S. government and in Europe. Come and meet IPAG engineers and architects to discuss recent developments:

  • IPAG will be holding a BOF session on PGAS APIs. While the PGAS model has been around for some time, widespread adoption of PGAS by HPC developers remains light. Advances in PGAS APIs promise to significantly increase PGAS use while avoiding the effort and risk involved in adopting a new language. This BOF (Wed 5:30-7:00pm) gives a concise update on progress on PGAS communication APIs and presents recent experiences in porting applications to these interfaces (a minimal illustrative sketch of the PGAS style follows after this list).
  • One of IPAG’s collaborations in Europe, with CERN, concerns “Data Intensive HPC,” which is relevant in scenarios like CERN’s Large Hadron Collider (LHC) or the Square Kilometer Array (SKA). Niko Neufeld from CERN will present details in the Intel Theater (Wed at 5:45). In addition, we will host a Community Hub discussion at the Intel booth (Wed 10am-12pm) with Happy Sithole, one of the thought leaders of the SKA project. These are informal discussions, meant to generate interest and an exchange of ideas.
  • Another example of IPAG’s engagements in Europe is the Dynamical Exascale Entry Platform (DEEP) project, funded by the EU 7th Framework Programme (www.deep-project.eu). The goal is to develop a novel, exascale-enabling supercomputing platform. At SC14, DEEP will present its results at the joint booth of the European exascale projects (booth 1039). Also at booth 1039, the EXA2CT project (EXascale Algorithms and Advanced Computational Techniques), with Intel IPAG as a key partner, will give a status update on its modular open source proto-applications.
  • Shekhar Borkar (Intel Fellow and Director of Extreme-scale Technologies) will sit on the Future of Memory Technology for Exascale and Beyond II panel on Wednesday at 3:30 in room 383-84-85. The panel will discuss how memory technology needs to evolve to keep pace with compute technology in the coming exascale era.
  • The IPAG team is also participating in a BOF session on Thursday at 12:15 in room 294 on a future runtime standard for HPC exascale (ExaFLOPS) machines. This is the Open Community Runtime (OCR) work being developed as a new industry standard, supported by the US Dept. of Energy.
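For readers who have not met the PGAS model mentioned in the first bullet, here is a minimal, illustrative sketch in the OpenSHMEM style (one of several PGAS communication APIs, and not necessarily one of the interfaces covered in the BOF). Each processing element writes its rank directly into a neighbor’s symmetric variable with a one-sided put; there is no matching receive on the target.

```c
#include <stdio.h>
#include <shmem.h>

/* Symmetric data object: the same variable exists on every PE and is
 * remotely accessible -- the "partitioned global address space".      */
static int neighbour_rank = -1;

int main(void)
{
    shmem_init();
    int me    = shmem_my_pe();
    int npes  = shmem_n_pes();
    int right = (me + 1) % npes;

    /* One-sided put: write my rank into my right-hand neighbour's copy
     * of neighbour_rank. The target PE does not post a receive.        */
    shmem_int_put(&neighbour_rank, &me, 1, right);

    shmem_barrier_all();            /* complete all puts before reading */
    printf("PE %d: my left neighbour is PE %d\n", me, neighbour_rank);

    shmem_finalize();
    return 0;
}
```

The appeal of the model is exactly that one-sidedness: communication is expressed as reads and writes into a global address space rather than as matched send/receive pairs, which is why better APIs, rather than a new language, are seen as the path to wider adoption.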

 

Stop by booth 1315 to engage with the IPAG (and the larger Intel HPC team) on any of these topics. We hope to see you in New Orleans!

Read more >

It is a good week for trade and building economies around the globe

Intel applauds progress on WTO Trade Facilitation Agreement: The agreement announced yesterday between the United States and India represents important progress toward removing the stalemate on implementation of a robust WTO Trade Facilitation Agreement (TFA).  Such an agreement would benefit … Read more >

The post It is a good week for trade and building economies around the globe appeared first on Policy@Intel.

Read more >

Super Compute is rising, host bus adapters (HBAs) will start fading

The world of storage is tiered, and it will become more distinctly tiered in the years ahead as the ability to manage hot data evolves onto the PCIe bus and away from the sub-optimal SAS and SATA buses designed for traditional platter-based disk storage. Using the right bus for your hot tier is very important. Up until 2014, most implementations of SSD storage have been on SAS and SATA buses, which were not designed for fast non-volatile memory (NVM). What’s been needed is more standardization around the processor’s own host bus, PCIe. A generational shift toward PCIe, led by Intel, is now under way.

[Image: Intel storage vision]


Intel is evolving the world of PCIe and its extensibility across the necessary layers so that PCIe can truly become a more appropriate storage bus for going wide with more devices, blending network, storage, co-processor, and graphics devices all on this host bus. The classic need to adapt storage from SAS or SATA back up to PCIe and the processor will slowly fade as server generations evolve in the years ahead. We’ll see Intel-backed standards, platform additions, and PCIe bridges and switches, which will start the unstoppable evolution of putting storage closer to the processor with much less storage latency.

 

Tomorrow’s super and parallel computing can only be made a reality with denser, more efficient compute power. NVM storage will play its part by sitting on the CPU’s host bus, where it will be more power efficient and more parallel. The future has already started with Intel SSDs for PCIe. Come check us out at the Intel booth (#1315) at SC14 and talk to Andrey Kudryavtsev, John Ryan, or me. We’ll be there live to show you demos and samples, and to explain the system shifts and the benefits of the new storage protocol standard, NVMe.

Read more >

SC14: The Analysis Challenge of the $1,000 Genome

As SC14 approaches, we have invited industry experts to share their views on high performance computing and life sciences. Below is a guest post from Mikael Flensborg, Director of Global Partner Relations at CLC bio, a QIAGEN company. During SC14, Mikael will be sharing his thoughts on genomic and cancer research in the Intel booth (#1315). He is scheduled in the Intel Community Hub on Tuesday, Nov. 18, at 3 p.m. and Wednesday, Nov. 19, at 3 p.m., plus the Intel Theater on Tuesday at 2:30 p.m.

 

Eight months have now passed since Illumina announced the long-expected arrival of the $1,000 genome with the launch of the HiSeq X Ten sequencing instrument, a launch that has also been hailed as the start of a new era in high-throughput sequencing focused on a new wave of population-level genomic studies.

 

To keep costs down to the “magic” $1,000 level, a full HiSeq X Ten installation has to plow through some 18,000 full human genomes per year, which means completing a full run every 32 minutes. With such a high volume in mind, the next very important question arises:

 

What does it take to keep up with such a high throughput on the data analysis side?

 

According to Illumina’s “HiSeq X Ten Lab Setup and Site Prep Guide (15050093 E),” the data analysis requirements are specified as a compute cluster with 134 compute nodes (each with 16 CPU cores @ 2.0 GHz, 128 GB of memory, and 6 x 1 terabyte (TB) hard drives), based on an analysis pipeline consisting of the tools BWA and GATK.

 

At QIAGEN Bioinformatics we decided to take on the challenge of benchmarking this, based on a workflow of tools (Trim, QC for Sequencing Reads, Read Mapping to Reference, Indels and Structural Variants, Local Re-alignment, Low Frequency Variant Detection, QC for Read Mapping) on CLC Genomics Server (http://www.clcbio.com/products/clc-genomics-server/) running on a compute cluster with the Intel® Enterprise Edition for Lustre* file system, InfiniBand, Intel® Xeon® processors E5-2697 v3 @ 2.60 GHz with 14 CPU cores, 64 GB of memory, and Intel® SSD DC S3500 Series 800 GB drives.

 

We based our tests on a publicly available HiSeq X Ten dataset, and we reached the conclusion that, with these specifications, we can keep pace with the instrument using a compute cluster of only 61 compute nodes.

 

Given our much lower compute node requirements, these results can have a significant positive impact on the total cost of ownership of the compute infrastructure for a HiSeq X Ten customer, which includes hardware, cooling, space, power, and systems maintenance, to name a few of the variable costs.

 

What questions do you have?

Read more >

Capital Summit – Bringing Together the Forces of Innovation

I just spent the past week at the Intel Capital Global Summit. It was an excellent event where companies interested in innovation, venture capitalists, and startups met to network and discuss new trends. Overall, the experience served as proof that innovation is still alive and well around the world.

 

If you have seen any of my past blogs on the topic of innovation, you will know that I believe there are three pillars necessary for innovation:

 

  1. Commitment: It is important that innovation is championed through executive support and ultimately with an investment of funding and resources.
  2. Clarity: An understanding of which specific problems need to be solved and how to fail fast to eventually get to the solution is vital for innovation.
  3. Culture: The organization needs to support failure. It is trial and error, along with the lessons eventually learned from failure, that encourages innovation.

 

It was exciting to see all three demonstrated very clearly at the Intel summit.


Innovation Starts with Executive Understanding…

 

Through a series of organized meet-and-greet sessions, I had the opportunity to talk with many companies at the event. It was incredible to see the level of clarity demonstrated by the CEOs and executives of some of these companies. Development plans and go-to-market strategies were well defined and clear. Additionally, these company leaders displayed an exceptional understanding of the problems they’re working on and the details of how they’re solving them.

 

But in every one of these cases, there was a common belief that the real innovation begins once the customer gets hold of the new technology. This is the point at which true understanding and the collision of ideas can occur. The specific problems are discovered as customers bring additional information to the discussion that can help companies home in on legitimately scalable solutions.

 

…And a Company Culture That Embraces Strategic Change

 

Throughout the event, companies also met with each other to discuss how technology can be used to enhance solutions and better address some of the real problems faced by customers. It was apparent from the discussions that all of the CEOs were passionate about solving customer problems with the technologies that they are using.

 

This concept of ideas coming together to enhance and evolve a solution is very well outlined in Steven Johnson’s video on the “slow hunch.” Rare is the occasion when someone conceives a brilliant idea in the shower (think Doc Brown in “Back to the Future”). More common is the process of a great idea starting from a seed, growing through a wide range of interactions, and eventually developing into something that is key to individual or company success.

 

Interested in innovation and the world of venture capital? Consider the Intel Capital Global Summit for next year. It can prove to be a significant gateway to network with these innovative companies. See how they can help you and how you can help them.

 

See you there,


Ed

 

Follow me on Twitter at @EdLGoldman and use #ITCenter to continue the conversation.

Read more >