Recent Blog Posts

The World is Your Office: A Study in Telecommuting

Image source: bestreviews.com


If you look down at your workspace right now and analyze the way it has changed in the past few decades, you’ll likely be amazed by the contrast. Technology has given us the capacity to eliminate waste and optimize our workplaces for productivity, but it has also fundamentally changed the way we work. Fewer ties to a physical desk in a physical workspace have led to an upswing in the mobile workforce. According to “The State of Telework in the U.S.” — which is based on U.S. Census Bureau statistics — businesses saw a 61% increase in telecommuters between 2005 and 2009.

 

IT decision makers have witnessed this growth from the trenches, where they enable the business to grow through technological advancements.  But there are several key questions IT leaders will face in the coming waves of virtualization…

 

  • What type of work model should be used to manage knowledge workers?
  • When workers are increasingly distributed globally at multiple physical locations, how do effective interpersonal relationships form and grow?
  • How will technology and people considerations impact the locations where people come together?
  • How can the office environment be configured for optimum worker productivity?
  • How will organizations source the best workers and cope with differing attitudes across a five-generation workforce?

 

Telecommuters Today

 

Though there are a significant number of mobile workers today, that number is still small compared to what it will one day be. According to “The State of Telework in the U.S.,” 50 million U.S. employees work jobs that are telework compatible, but only 2.9 million, or 2.3 percent of the workforce, consider home their primary place of work. The full impact of virtualization has yet to be realized.

 

Some are dubious as to whether the workplace will continue to move in a virtualized direction. Rawn Shah, director and social business architect at Rising Edge, recently wrote on Forbes, “We are only starting to understand what the future of work looks like. In my view, the imagined idea of entirely virtual organizations is similar to how we used to think of the future as full of flying cars and colonies in space. Reality is much more invested in hybrid in-office plus remote scenarios. Physical space is still a strong element of work that we need to keep track of, and understand better to learn how we truly collaborate.”

 

Telecommuters Tomorrow

 

According to Tim Hansen in his white paper “The Future of Knowledge Work,” there are already several trends influencing the current workplace that will directly impact virtualization of the enterprise in the future:

 

  • The definition of “employee” is on the cusp of transformation
  • Dynamic, agile team structures will become the norm
  • The location of work will vary widely
  • Smart systems will emerge and collaborate with humans
  • A second wave of consumerization is coming via services

 

The questions IT leaders are asking now can be answered by isolating these already-present factors driving virtualization.

 

Our offices are changing rapidly — don’t let your employees suffer through legacy work models. Recognizing the change swirling around you will help you strategize for the coming changes on the horizon.

 

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

Read more >

Chief Human Resources Officers Will Be the Next Security Champions in the C-suite

HR and security? Don’t be surprised. Although a latecomer to the security party, HR organizations can play an important role in protecting assets and influencing good security behaviors. They are an influential force in managing the risks of internal threats, and they excel at the human aspects that are generally overlooked in the technology-heavy world of cybersecurity. At a recent presentation to the CHO community, I discussed several overlapping areas of responsibility that highlight the growing influence HR can have on the security posture of an organization.

 

The audience was lively and passionate in their desire to become more involved and apply their unique expertise to the common goal.  The biggest questions revolved around how best they could contribute to security.  Six areas were discussed: HR leadership can strengthen hiring practices, tighten responses for disgruntled employees, spearhead effective employee security education, advocate regulatory compliance and exemplify good privacy practices, be a good custodian of HR data, and rise to the challenges of hiring good cybersecurity professionals.  Wake up, security folks: the HR team might just be your next best partner and a welcome advocate in the evolving world of cybersecurity.

 



 

 

Presentation available via SlideShare.net: http://www.slideshare.net/MatthewRosenquist/pivotal-role-of-hr-in-cybersecurity-cho-event-nov-2014

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

My Blog: Information Security Strategy

Read more >

SC14: Understanding Gene Expression through Machine Learning

This guest blog is by Sanchit Misra, Research Scientist, Intel Labs, Parallel Computing Lab, who will be presenting a paper by Intel and Georgia Tech this week at SC14.

 

Did you know that the process of winemaking relies on yeast optimizing itself for survival? When we put yeast in a sugar solution, it turns on genes that produce the enzymes that convert sugar molecules to alcohol. The yeast cell makes a living from this process (by gaining energy to multiply) and humans get wine.

 

This process of turning on a gene is called expression. The genes that an organism can express are all encoded in its DNA. In multi-cellular organisms like humans, the DNA of each cell is the same, but cells in different parts of the body express different genes to perform the corresponding functions. A gene also interacts with several other genes during the execution of a biological process. These interactions, modeled mathematically using “gene networks,” are not only essential in developing a holistic understanding of an organism’s biological processes; they are invaluable in formulating hypotheses to further the understanding of numerous interesting biological pathways, thus playing a fundamental role in accelerating the pace and diminishing the costs of new biological discoveries. This is the subject of a paper presented at SC14 by Intel Labs and Georgia Tech.

 

Owing to the importance of the problem, numerous mathematical modeling techniques have been developed to learn the structure of gene networks. There appears, not surprisingly, to be a correlation between the quality of learned gene networks and the computational burden imposed by the underlying mathematical models. A gene network based on Bayesian networks is of very high quality but requires a lot of computation to construct. To understand Bayesian networks, consider the following example.

 

A patient visits a doctor for diagnosis with symptoms A, B and C. The doctor says that there is a high probability that the patient is suffering from ailments X or Y and recommends further tests to zero in on one of them. What the doctor does is an example of probabilistic inference, in which the probability that a variable has a certain value is estimated based on the values of other related variables. Inference that is based on Bayes’ laws of probability is called Bayesian inference. The relationships between variables can be stored in the form of a Bayesian network. Bayesian networks are used in a wide range of fields including science, engineering, philosophy, medicine, law, finance, etc. In the case of gene networks, the variables are genes and the corresponding Bayesian network models for each gene what other genes are related to it and what is the probability of expression of the gene given the expression values of the related genes.
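The doctor’s reasoning can be sketched with Bayes’ rule directly. Every probability below is invented purely for illustration; the real gene-network models are far larger, but the update step is the same:

```python
# Toy Bayesian inference: P(ailment | symptoms) via Bayes' rule.
# All probabilities here are invented for illustration only.
priors = {"X": 0.01, "Y": 0.005, "healthy": 0.985}

# P(observing symptoms A, B and C | ailment)
likelihood = {"X": 0.70, "Y": 0.60, "healthy": 0.001}

# Unnormalized posterior: P(ailment) * P(symptoms | ailment)
unnorm = {a: priors[a] * likelihood[a] for a in priors}

# Normalize by the total probability of the symptoms, P(symptoms)
evidence = sum(unnorm.values())
posterior = {a: p / evidence for a, p in unnorm.items()}

for ailment, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({ailment} | symptoms) = {p:.3f}")
```

Even though X and Y are rare a priori, the symptoms shift most of the probability mass onto them, which is why the doctor orders further tests to separate the two.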

 

Through a collaboration between Intel Labs’ Parallel Computing Lab and researchers at Georgia Tech and IIT Bombay, we now have the first ever genome-scale approach for construction of gene networks using Bayesian network structure learning. We have demonstrated this capability by constructing the whole-genome network of the plant Arabidopsis thaliana from over 168.5 million gene expression values by computing a mathematical function 7.3 trillion times with different inputs. For this, we collected a total of 11,760 Arabidopsis gene expression datasets (from NASC, AtGenExpress and GEO public repositories). A problem of this scale would have consumed about six months using the state-of-the-art solution. We can now solve the same problem in less than 3 minutes!

 

To achieve this, we have not only scaled the problem to a much bigger machine – 1.5 million cores of Tianhe-2 supercomputer with 28 PFLOP/s peak performance, we also applied algorithm-level innovations including avoiding redundant computation, a novel parallel work decomposition technique and dynamic task distribution. We also made implementation optimizations to extract maximum performance out of the underlying machine.
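The real runs used 1.5 million cores, but the dynamic task distribution idea can be illustrated at toy scale: tasks of uneven cost go into a shared queue, and idle workers pull the next one as they finish rather than being assigned a fixed partition up front. The scoring function below is a stand-in, not the paper’s actual mathematics:

```python
# Minimal sketch of dynamic task distribution with a shared work queue.
from concurrent.futures import ThreadPoolExecutor

def score_family(task):
    gene, candidate_parents = task
    # Stand-in for the expensive scoring function that the real
    # computation evaluates trillions of times with different inputs.
    return gene, sum((gene + p) % 100 for p in candidate_parents)

# One task per gene, with deliberately uneven task sizes.
tasks = [(g, list(range(g % 5 + 1))) for g in range(1000)]

with ThreadPoolExecutor(max_workers=4) as ex:
    # Workers draw tasks from a shared queue as they become free,
    # which balances the uneven costs automatically.
    results = dict(ex.map(score_family, tasks))

print(len(results))  # 1000 genes scored
```

The same pull-based pattern, scaled up with distributed task queues, is what keeps millions of cores busy when individual scoring tasks take very different amounts of time.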

 


 


 

Top: Root Development subnetwork. Bottom: Cold Stress subnetwork.

 

Using our software, we generated gene regulatory networks for several datasets – subsets of the Arabidopsis dataset – and validated them using known knowledge from the TAIR (The Arabidopsis Information Resource) database. As a demonstration of the validity and how genome-scale networks can be used to aid biological research, we conducted the following experiment. We picked the genes that are known to be involved in root development and cold stress and randomly picked a subset of those genes (red nodes in the above figures). We took the whole-genome network generated by our software for Arabidopsis and extracted subnetworks that contain our randomly picked subset of genes and all the other genes that are connected to them. The extracted subnetworks contain a rich presence of other genes known to be in the respective pathways (green nodes) and closely associated pathways (blue nodes), serving as a validation test. The nodes shown in yellow are genes with no known function. Their presence in the root development subnetwork indicates they might function in the same pathway. The biologists at Georgia Tech are performing experiments to see if the genes corresponding to yellow nodes are indeed involved in root development. Similar experiments are being conducted for several other biological processes.

 

Arabidopsis is a model plant for which NSF launched a 10-year initiative in 2000 to find the functions of all of its genes, yet the functions of 40 percent of its genes are still unknown. This method can help accelerate the discovery of the functions of the remaining genes. Moreover, it can easily be scaled to other species, including human beings. Understanding how genes function and interact with each other in a broad variety of organisms can pave the way for new medicines and treatments. We can also compare gene networks across organisms to enhance our understanding of the similarities and differences between them, ultimately aiding in a deeper understanding of evolution.

 

What questions do you have?

Read more >

On the Ground at SC14: Opening Plenary Session and Exhibition Opening Gala

I felt a little like the lady from the old Mervyn’s commercials chanting, “OPEN, OPEN, OPEN” today while waiting for the Exhibition Gala at SC14. The exhibitors’ showcase is one of the most exciting aspects for Intel – we have a pretty large presence on the floor so we can fully engage and collaborate with the HPC community. But before we delve too deep into the booth activities, I want to step back and talk a little about the opening plenary session from SGI.

 

Dr. Eng Lim Goh, senior vice president and CTO at SGI, took the stage to talk about the most fundamental of topics: Why HPC Matters. While most of the world thinks of supercomputing as the geekiest of technology (my bus driver asked if I worked on the healthcare.gov site or did some hacking), we as an industry know that much of what is possible today in the world is enabled by HPC in industries as diverse as financial services, advanced/personalized medicine, and manufacturing.

 

Dr. Goh broke his presentation into a few parts: basic needs, reducing hardships, commerce, entertainment and profound questions. He then ran through about 25 projects utilizing supercomputing, everything from sequencing and analyzing the wheat genome (7x the size of the human genome!) to checking postage accuracy for the USPS (half a billion pieces of mail sorted every day) to designing/modeling a new swimsuit for Speedo (the one that shattered all those world records in the Beijing Olympics). Dr. Goh was joined on stage by Dr. Piyush Mehrotra, from NASA’s Advanced Supercomputing Division, who was there to discuss some of the groundbreaking research that NASA has done in climate modeling and the search for exoplanets (about 4,000 possible planets found so far by the Kepler Mission).

 

Increasing wheat yield by analyzing the genome

 

Earthquake simulations can help give advanced warning

 

The session closed with a call to the industry to make a difference and to remember that it’s great to wow a small group of people to secure funding for supercomputing, but it is also important to, in the simplest terms, “delight the many” when describing why HPC matters.

 

So why does HPC matter in the oil and gas industry? After Dr. Goh’s presentation, I finally headed into the showcase and to the Intel booth to talk to the folks from DownUnder GeoSolutions. The key to success in the oil and gas industry is minimizing exploration costs while maximizing oil recovery. DownUnder GeoSolutions has invested in modernizing its software, optimizing it to run heterogeneously on Intel Xeon processors and Intel Xeon Phi coprocessors. As a result, its applications are helping process larger models and explore more options in less time. DUG is the marquee demo this year in the Intel booth, showing their software, DUG Insight, running on the full Intel technical computing portfolio, including workstations, Intel Xeon and Xeon Phi processors, Lustre, Intel Solid State Drives and Intel True Scale Fabric.

 

 

Above and below: DownUnder GeoSolutions demo

 

 

Of course, checking out the DUG demo isn’t the only activity in the Intel booth. There were also a couple of great kickoff theater talks: Jack Dongarra discussed the MAGMA project, which aims to develop a dense linear algebra library and improve performance for coprocessors, and Pierre Lagier from Fujitsu spoke on The 4 Dimensions of HPC Computing. He presented a use case for running the elsA CFD software package on Intel Xeon Phi coprocessors and the performance gains they were able to see with some tuning and optimization.

 

Jack Dongarra on the MAGMA project

 

Pierre Lagier on elsA CFD

 

And speaking of optimization, the big draw of the night in the Intel booth was the opening round of the Parallel Universe Computing Challenge, which saw defending champs the Gaussian Elimination Squad from Germany taking on the Invincible Buckeyes from Ohio. After a round of 15 HPC trivia questions (with more points for faster answers), GES was in the lead. During the coding challenge, each team had 10 minutes to take a piece of code from Intel’s James Reinders and speed it up on Xeon, Xeon Phi, or both, with 40 Xeon and 244 Xeon Phi threads available on a dual-socket machine. With a monster speedup of 243.008x on Xeon Phi (James admitted he’d only gotten to 189x), the Gaussian Elimination Squad took home the victory by a final score of 5903 to 3510. A well-played match by both teams!

 

Crowd watching the PUCC

 

L to R: Gaussian Elimination Squad, James Reinders and Mike Bernhardt

 

The PUCC continues on Tuesday, along with the Community Hub discussions, theater talks, fellow traveler tours and technical sessions. Stop by the booth (1315) and tell us why you think HPC matters!

 

 

Read more >

Transform IT Episode 5 Recap: The Intersection of Technology and Humanity

What does it mean to be a futurist?

 

In episode 5 of the Transform IT show I sat down with James Jorasch and Rita J. King, powerhouse couple, futurists and founders of Science House. We talked about their unique and distinct journeys that led them both to this point at which, in their view, technology and humanity are intersecting.


We talked about the perspectives that cause them to be viewed as “futurists” and what we can each do to develop that viewpoint. Most importantly, we talked about how important it is to recognize that we are in control, and that we can shape everything: our culture, our future and our destiny.

 

They challenged us to move ourselves away from those things that are known and comfortable and to be willing to immerse ourselves in the unknown. They challenged us to employ diligent practice to develop our skills, but to not stop there – to then expand our horizons into seemingly disconnected areas. And then to simply let things simmer and percolate.

 

It was a fascinating look into a different way of seeing the world around us.

 

Perhaps the greatest challenge came from Rita as she encouraged each of us to “have an adventure”. When was the last time in our busy, corporate lives that we thought of ourselves as on an adventure?

 

Probably never.

 

Yet her challenge was a way of reminding us that the only way to prepare for an uncertain and rapidly evolving future is to put ourselves in a state of adventure. We need to be open to new ideas and new perspectives. We need to be willing to challenge the status quo and be open to connections that might not seem to make any sense on the surface.

 

It can be a tall order for most IT professionals. We’re more comfortable with things that we can see and touch. But I believe that we need to embrace this kind of perspective if we are going to remain relevant as our world transforms around us.

 

And I believe that’s what it means to be a futurist.

 

A futurist is not someone who knows what the future holds. Rather, a futurist is simply someone who is in a constant state of exploration about the future and who is open to wherever that future may lead.

 

So the question for you is how will you rise to this challenge? How will you begin your adventure and begin thinking a bit more like a futurist tomorrow?

 

Share your “morning action” with us in the comments section below. And you can also join in the conversation any time on Twitter using the hashtags #ITChat and #TransformIT.

 

And, if you missed episode 5 of the Transform IT show, you can watch it here. Also, make sure to tune in on December 2 when I’ll be talking to Brian Vellmure on becoming an IT outsider.

 

If you haven’t had a chance to read my book, “The Quantum Age of IT”, you can download the first chapter for free here.

Read more >

On the Ground at SC14: The Intel HPC Developer Conference

SC14 is officially under way; however, the Intel team got in on the action a bit early, and I’m not just talking about the setup for our massive booth (1315 – stop by to see the collaboration hub, theater talks and end-user demos). Brent Gorda, GM of the High Performance Data Division, gave a presentation at HP-Cast on Friday on the Intel Lustre roadmap. A number of other Intel staffers gave presentations ranging from big data to fabrics to exascale computing, as well as a session on the future of the Intel technical computing portfolio. The Intel team also delivered an all-day workshop on OpenMP on the opening day of SC14.

 

On Sunday, Intel brought together more than 350 members of the community for an HPC Developer Conference at the Marriott Hotel to discuss key topics including high fidelity visualization, parallel programming techniques/software development tools, hardware and system architecture, and Intel Xeon Phi coprocessor programming.

 

The HPC Developer Conference kicked off with a keynote from Intel’s Bill Magro discussing the evolution of HPC – helping users gain insight and accelerate innovation – and where he thinks the industry is headed:

 

 

Beyond what we think of as traditional research supercomputing (weather modeling, genome research, etc.) there is a world of vertical enterprise segments that can also see massive benefits from HPC. Bill used the manufacturing industry as an example – SMBs of 500 or fewer employees could see huge benefits from digital manufacturing with HPC but need to get beyond cost (hardware/apps/staff training) and perceived risk (no physical testing is a scary prospect). This might be a perfect use case for HPC in the cloud: pay as you go would lower the barriers to entry, and as the use case is proved and need grows, users can move to a more traditional HPC system.

 

Another key theme for the DevCon was code modernization. To truly take advantage of coprocessors, apps need to be parallelized. Intel is working with the industry via more than 40 Intel Parallel Computing Centers around the world to increase parallelism and scalability through optimizations that leverage cores, caches, threads, and vector capabilities of microprocessors and coprocessors. The IPCCs are working in a number of areas to optimize code for Xeon Phi including aerospace, climate/weather modeling, life sciences, molecular dynamics and manufacturing. Intel also recently launched a catalog of more than 100 applications and solutions available for the Intel Xeon Phi coprocessor.
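As a toy illustration of what “modernized” code looks like, here is a scalar loop versus a vectorized whole-array form. It is written in Python/NumPy for brevity; the IPCC efforts themselves target native C, C++ and Fortran kernels, but the principle of expressing work so the hardware’s vector units and memory bandwidth can be used is the same:

```python
import numpy as np

def saxpy_loop(a, x, y):
    # Legacy style: one element per iteration, nothing for the
    # vector hardware to chew on.
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_vectorized(a, x, y):
    # Modernized style: one whole-array operation that maps onto
    # vectorized, cache-friendly kernels under the hood.
    return a * x + y

x = np.arange(10, dtype=np.float32)
y = np.ones_like(x)
assert np.allclose(saxpy_loop(2.0, x, y), saxpy_vectorized(2.0, x, y))
```

Both functions compute the same result; the point of code modernization is that the second form scales with the width of the vector units instead of being stuck at one element per step.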

 

Over in the Programming for Xeon Phi Coprocessor track, Professor Hiroshi Nakashima from Kyoto University gave a presentation on programming for Xeon Phi on a Cray XC30 system. The university has five supercomputers (Camphor, Magnolia, Camellia, Laurel, and Cinnamon), and this talk covered programming for Camellia. The main challenges Kyoto University faced came at three levels: inter-node programming is tough, intra-node is tougher, and intra-core is toughest (he described having to rewrite innermost kernels and redesign data structures for intra-core programming). He concluded that simple porting or large-scale multi-threading may not be sufficient for good performance, and that SIMD-aware kernel recoding and redesign may be necessary.
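The “redesign data structures” point is often the classic array-of-structures to structure-of-arrays change. Here is a small Python/NumPy sketch of the idea (Nakashima’s kernels were native code, but the layout principle is the same):

```python
import numpy as np

n = 8
# Array of structures (AoS): x, y, z interleaved per particle.
# Loading all the x values requires strided access, which hurts
# SIMD efficiency.
aos = np.arange(3 * n, dtype=np.float64).reshape(n, 3)

# Structure of arrays (SoA): each coordinate contiguous, so a vector
# unit can load consecutive x values in a single wide load.
soa_x = aos[:, 0].copy()
soa_y = aos[:, 1].copy()
soa_z = aos[:, 2].copy()

# Same computation either way; the SoA layout is what compilers
# and vector hardware handle well.
r2_aos = (aos ** 2).sum(axis=1)
r2_soa = soa_x**2 + soa_y**2 + soa_z**2
assert np.allclose(r2_aos, r2_soa)
```

The results are identical; the payoff of the SoA layout shows up in how efficiently the innermost kernel can be vectorized.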

 

Professor Hiroshi Nakashima’s slide on the Camellia supercomputer


Which brings me to possibly the hottest topic of the Developer Conference: the next Intel Xeon Phi processor (codename Knights Landing). Herbert Cornelius and Avinash Sodani took the stage to give a few more details on the highly anticipated processor arriving next year:

  • It will be available as a self-boot processor alleviating PCI Express bottlenecks, a self-boot processor + integrated fabric (Intel Omni-Path Architecture), or as an add-in card
  • Binary compatible with Intel Xeon processors (runs all legacy software, no recompiling)
  • The new core is Silvermont microarchitecture-based with many updates for HPC (offering 3x higher single-thread performance than current-generation Intel Xeon Phi coprocessors)
  • Offers improved vector density (3+ teraflops (DP) peak per chip)
  • AVX 512 ISA (new 512-bit vector ISA with Masks)
  • Scatter/Gather engine (enabling hardware support for gather/scatter)
  • New memory technology MCDRAM + DDR (large high bandwidth memory – MCDRAM and huge bulk memory – DDR)
  • New on-die interconnect – MESH (high BW connection between cores and memory)

 

Next Intel Xeon Phi Processor (codename Knights Landing)

 

Another big priority for Intel is high fidelity visualization, as the phenomena being measured and modeled grow increasingly complex. Jim Jeffers led a track on the subject and gave an overview presentation covering a couple of trends: increasing data size (no surprise there) and increasing shading complexity. He then touched on Intel’s high fidelity visualization solutions, including software (Embree, the foundation for ray tracing in use by DreamWorks, Pixar, Autodesk, etc.) and efficient use of compute cluster nodes. Jim wrapped up by discussing an array of technical computing rendering tools developed by Intel and partners, all working to enable higher fidelity, higher capability, and better performance to move visualization work to the next level.

 

Jim Jeffers’s Visualization Tools Roadmap


These are just a few of the more than 20 sessions and topics (Lustre! Fabrics! Intel Math Kernel Library! Intel Xeon processor E5 v3!) at the HPC Developer Conference. The team is planning to post presentations to the Website in the next week, so check back for conference PDFs. If you attended the conference, please fill out your email survey – we want to hear from you on what worked and what didn’t. And if we missed you this year, drop us an email (contact info is at the bottom of the conference homepage) and we’ll make sure you get an invite in the future.

Read more >

SC14 Podcast: How HPC Impacts Alzheimer’s Research

 

With SC14 kicking off today, it’s timely to look at how high performance computing (HPC) is impacting today’s valuable life sciences research. In the above podcast, Dr. Rudy Tanzi, the Joseph P. and Rose F. Kennedy Professor of Neurology at Harvard Medical School and the Director, Genetics and Aging Research Unit at the MassGeneral Institute for Neurodegenerative Disease, talks about his pioneering research in Alzheimer’s disease and how HPC is critical to the path forward.

 

Listen to the conversation and hear how Dr. Tanzi says HPC still has a ways to go to provide the compute power that life sciences researchers need. What do you think?

 

What questions about HPC do you have? 

 

If you’re at SC14, remember to come by the Intel booth (#1315) for life sciences presentations in the Intel Community Hub and Intel Theater. See the schedules here.

Read more >

SGI has built a revolutionary system for NVMe storage scaling at SC14!

Intel launched its Intel® Solid-State Drive Data Center Family for PCIe, based on the NVMe specification, in June 2014. But even in a world of amazing possibilities with these products, we still want more. Why? Isn’t half a million IOPS and three gigabytes per second out of a single card enough for supercomputing workloads? Not always, and not for every application. Here are a couple of reasons why we need more performance, and how that’s possible. We really want to scale both the performance and the density.

 

Consistent performance is the answer. Intel SSDs help deliver consistent performance across different workloads, including mixed ones, which are the worst-case scenario for a drive. That’s true across the Data Center product range, whether SATA or PCIe. Performance scaling of SATA SSDs is limited by HBA or RAID controller performance, SAS topology and the related interface latency. You can scale it linearly within a limited range until the threshold is reached; after that you gain nothing but increased access latency for the RAID configuration. A single Intel PCIe SSD from our P3700 product line can outperform at least six SATA SSDs (S3700) across a range of 4K random workloads while maintaining lower latency than a single SATA SSD. (See the diagram below.)
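The contrast can be made concrete with a toy model. The per-drive and controller numbers below are illustrative assumptions, not measurements; the point is the shape of the two curves:

```python
# Toy model of SSD performance scaling. SATA drives behind a shared
# HBA/RAID controller scale linearly only until the controller
# saturates; each NVMe PCIe drive has its own path to the CPU, so no
# shared-controller ceiling applies. All numbers are illustrative.
SATA_DRIVE_IOPS = 75_000      # assumed per-drive 4K random IOPS
HBA_CEILING_IOPS = 400_000    # assumed controller saturation point
NVME_DRIVE_IOPS = 460_000     # assumed per-drive NVMe 4K random IOPS

def sata_iops(n):
    # Linear until the shared controller becomes the bottleneck.
    return min(n * SATA_DRIVE_IOPS, HBA_CEILING_IOPS)

def nvme_iops(n):
    # No shared controller: scaling stays linear in this model.
    return n * NVME_DRIVE_IOPS

for n in (1, 4, 8, 16):
    print(f"{n:>2} drives: SATA {sata_iops(n):>9,} IOPS, "
          f"NVMe {nvme_iops(n):>10,} IOPS")
```

Past the controller ceiling, adding SATA drives in this model buys nothing, which is exactly the flattening (and added RAID latency) described above.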


1 Source: Intel. Measurements made on a Hanlan Creek (Intel S5520HC) system with two Intel® Xeon® X5560 @ 2.93GHz and 12GB (per CPU) memory running the RHEL 6.4 OS. Intel S3700 SATA Gen3 SSDs connected to an LSI* HBA 9211; the NVMe* SSD is under development. Data collected with the FIO* tool.

 

But then how does the performance scale with multiple drives within one system? Given the latency reduction from the transition to the PCIe interface, the NVMe protocol and the high QoS of the P3x00 product line, it’s hard to predict how far we can take this.

 

Obviously, we have a limited number of PCIe lanes per CPU, which depends on the CPU generation and architecture as well as the system, thermal and power architecture. Each P3700 SSD takes a PCIe Gen3 x4 connection. To evaluate the scaling of NVMe SSDs, we would like to avoid using PCIe switches and multiplexers. How about a big multi-socket scale-up system based on 32 Xeon E7 CPUs as a test platform? That looks very promising for investigating NVMe scaling.

 

SGI presented an interesting all-flash concept at SC14: a 32-socket system with 64 Intel® Solid-State Drive DC P3700 800GB SSDs, running a single SLES 11 SP3 Linux OS.

(More on the drives: http://www.intel.com/content/www/us/en/solid-state-drives/intel-ssd-dc-family-for-pcie.html)

 

That offers a great opportunity to see what happens with performance scaling inside this massive single-image system. It turns out to be a true record: 30M IOPS on a 4K random read workload! Let’s have a look at the scaling progression:


The data above was measured at SGI Labs on a concept platform based on 32 Xeon E7 CPUs.

 

This chart shows IOPS and GB/s against the number of SSDs on a 4K random read workload (blue line) and a 128K random read workload (red line), from the testing done at SGI’s labs. Each SSD in the add-in-card PCIe form factor works independently, with its own controller, so there is no additional software RAID overhead to worry about; we are only interested in the raw device performance. Dotted lines represent the linear approximation, while the solid lines connect the experimental data points.
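A quick back-of-envelope check shows how close the record comes to perfectly linear scaling. The per-drive figure below is an assumption drawn from the P3700’s public 4K random-read spec, not from the SGI measurement itself:

```python
# Sanity check on the 30M IOPS headline: if each of the 64 P3700
# drives delivers roughly 460K 4K random-read IOPS (assumed per-drive
# figure, not measured here), perfectly linear scaling predicts:
drives = 64
per_drive_iops = 460_000  # assumption from public drive specs

predicted = drives * per_drive_iops
print(f"{predicted / 1e6:.1f}M IOPS predicted by linear scaling")
```

Linear scaling predicts roughly 29M IOPS, so the measured ~30M figure means the 32-socket system is extracting essentially the full raw performance of all 64 drives at once.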

 

Hard to believe? Come to SGI’s booth 915 and talk to them about it.

Read more >