Recent Blog Posts

Upgrade to an NVMe-Capable Linux Kernel

Here in the Intel NVM and SSD Group (NSG) we build and test Linux systems a lot, and we’ve been working to mature the NVMe driver stack on all kinds of operating systems. The Linux kernel is the innovation platform today, and it has come a long way with NVMe stability. We always had a high-level kernel build document but never a blog post (bad Intel, we are changing those ways), and we wanted to refresh it now that NVMe support in Linux is well along. Kernel 3.10 (summer 2013) is when integration really happened, and the important data center Linux OS vendors fully support the driver. If you are on a 2.6 kernel and want to move up to a newer one, here are the steps to build a kernel for your testing platform and try out one of Intel’s Data Center SSDs for PCIe and NVMe. This assumes you want the latest and greatest for testing and are not interested in an older or vendor-supported kernel. By the way, on those “6.5 distributions” you won’t be able to get a vendor-supported 3.x kernel; that’s one reason I wrote this blog. But a newer kernel will run and let you test with something current. You may have your own reasons, I am sure. For production, you will probably want to work together with your OS vendor.


I run a 3.16.3 kernel on some of the popular 6.5 distros; you can too.


1.    NVM Express background

NVM Express (NVMe) is an optimized interface for PCI Express SSDs: the NVM Express specification defines an optimized register interface, command set, and feature set for PCI Express (PCIe)-based solid-state drives (SSDs). Please refer to the NVM Express specification for background on NVMe.

The NVM Express Linux driver is developed through the typical open-source process used by the Linux kernel community, with patches discussed on a public development mailing list.

The Linux NVMe driver was integrated in kernel 3.10 and is included in all later kernels.


2.    Development tools required (possible pre-requisites)

To clone, compile, and build the new kernel and driver, the following packages are needed:

  1. ncurses
  2. build tools
  3. git (optional; you could use wget instead to fetch the Linux source package)

You must be root to install these packages.

Ubuntu based

apt-get install git-core build-essential libncurses5-dev  

RHEL based

yum install git-core ncurses ncurses-devel
yum groupinstall "Development Tools"

SLES based        

zypper install ncurses-devel git-core
zypper install --type pattern Basis-Devel


3.    Build new Linux kernel with NVMe driver

Pick a starting distribution. From the driver’s perspective it doesn’t matter which distribution you use, since you are going to put a new kernel on top of it, so use whatever you are most comfortable with and/or whatever has the required tools.

Get the kernel and driver:

  1. Get the kernel source: clone it with git, or download a “snapshot” tarball from the top commit (here’s an example) and extract it:


            tar -xvf linux-3.16.3.tar.xz


  2. Build and install

Run menuconfig (which uses ncurses):

make menuconfig

Confirm that the NVMe driver under Block devices is set to <M>:

Device Drivers -> Block devices -> NVM Express block device

Saving your configuration creates a .config file in the same directory.
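If you’d rather script the check than eyeball menuconfig, here is a minimal sketch; the `check_nvme_config` helper name is my own invention, and it assumes you are in the kernel source tree:

```shell
# Sketch: verify the NVMe block driver is enabled in a kernel .config.
check_nvme_config() {
  # $1: path to a kernel .config; succeeds if NVMe is =m (module) or =y (built-in)
  grep -E '^CONFIG_BLK_DEV_NVME=(m|y)' "$1" > /dev/null
}

if check_nvme_config .config 2>/dev/null; then
  echo "NVMe driver enabled in .config"
else
  echo "NVMe driver NOT enabled - re-run make menuconfig"
fi
```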

Then, run the following make commands as root (set the -j flag to about half your core count to improve build time):

make -j10

make modules_install -j10

make install -j10


Depending on the distribution you use, you may have to run update-initramfs and update-grub, but this is typically unnecessary. Once the install is successful, reboot the system to load the new kernel and driver. The new kernel usually becomes the default boot entry (the top line of menu.lst). After booting, verify with “uname -a” that the running kernel is what you expect, and use “dmesg | grep -i error” to find and resolve any kernel loading issues.
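The post-reboot checks can be collected into a small script. This is a sketch under the assumption that you built 3.16.3 as in this post; adjust `expected` to whatever version you built:

```shell
# Sketch: post-reboot sanity checks for a freshly installed kernel.
expected="3.16.3"   # assumption: the version string of the kernel you built

running_kernel() {
  uname -r
}

if [ "$(running_kernel)" = "$expected" ]; then
  echo "Running the expected kernel: $expected"
else
  echo "Running $(running_kernel); expected $expected - check your boot menu"
fi

# Is the NVMe module loaded, and did anything go wrong during boot?
lsmod | grep -i nvme || echo "nvme module not listed (it may be built-in)"
dmesg | grep -i error || echo "no errors in the kernel log"
```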


4.  NVMe Driver basic tests and tools

There are some basic open-source NVMe test programs you can use for checking NVMe devices:

Get the source code:

git clone git://

Build the test programs:

Add or modify the Makefile with the proper library and header paths, then compile the programs.



For example, check the NVMe device controller “identify” and namespace data:

sudo ./nvme_id_ctrl /dev/nvme0n1

sudo ./nvme_id_ns /dev/nvme0n1
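If you just want to see which NVMe namespaces the kernel has enumerated before running the test programs, you can read /sys/block directly. The helper names below are my own; they simply parse the standard nvmeXnY device naming:

```shell
# Sketch: list NVMe block devices and split an nvmeXnY name into its parts.
list_nvme_devices() {
  ls /sys/block 2>/dev/null | grep -E '^nvme[0-9]+n[0-9]+$'
}

nvme_controller() {
  # nvme0n1 -> 0 (controller index)
  echo "$1" | sed -E 's/^nvme([0-9]+)n[0-9]+$/\1/'
}

nvme_namespace() {
  # nvme0n1 -> 1 (namespace index)
  echo "$1" | sed -E 's/^nvme[0-9]+n([0-9]+)$/\1/'
}

list_nvme_devices || echo "no NVMe block devices found"
```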


The Intel SSD Data Center Tool 2.0 also supports NVMe.


Here are more commands you’ll find useful.

Zero out and condition a drive sequentially for performance testing:

dd if=/dev/zero of=/dev/nvme0n1 bs=2048k count=400000 oflag=direct

Quick-test a drive: is it reading at over 2 GB per second?

hdparm -tT --direct /dev/nvme0n1
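To sanity-check hdparm’s number, you can time a direct sequential read with dd yourself and do the arithmetic. The helper below is a rough sketch (whole-second timing, hypothetical function name), not a benchmark:

```shell
# Sketch: compute integer MB/s from bytes transferred and elapsed seconds.
throughput_mb_s() {
  # $1: bytes transferred, $2: elapsed seconds (must be > 0)
  echo $(( $1 / $2 / 1048576 ))
}

# Example run against a drive (commented out; requires /dev/nvme0n1 and root):
# start=$(date +%s)
# dd if=/dev/nvme0n1 of=/dev/null bs=2048k count=2000 iflag=direct
# end=$(date +%s)
# bytes=$(( 2000 * 2048 * 1024 ))
# echo "$(throughput_mb_s "$bytes" $(( end - start ))) MB/s"
```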


Again, enjoy these gigabyte-per-second class SSDs with low-microsecond latency!

Read more >

Quick Q & A: Safer Highways, Smarter Cities, and Fewer Traffic Jams

It’s fascinating to think about how innovations in transportation, including the trend toward Internet of Things implementations, can enhance the quality of life for people across the globe. Indeed, technology can help us address significant challenges around really important aspects … Read more >

The post Quick Q & A: Safer Highways, Smarter Cities, and Fewer Traffic Jams appeared first on IoT@Intel.

Read more >

Health IT Does Not Transform Healthcare; Healthcare Cannot Transform Without Health IT

Below is a guest post from Steven E. Waldren, MD MS.


I was listening to the Intel Health videocast[1] of Eric Dishman, Dr. Bill Crounse, Dr. Andy Litt, and Dr. Graham Hughes. There was an introductory line that rang true, “EHR does not transform healthcare.” This statement prompted me to write this post.


The healthcare industry and policy makers have frequently seen health information technology (health IT) as a relatively easy fix to the quality and cost issues plaguing the U.S. health system. If we adopt health IT and make it interoperable, we will drastically improve quality and lower cost. Research provides evidence that health IT can do both.


I believe, however, that interpretation of this research misses a very important dependent variable: the sociotechnical system within which the health IT is deployed. For the uninitiated, Wikipedia provides a good description of a sociotechnical system.[2] In essence, it is the system of people, workflow, information, and technology in a complex work environment. Healthcare is definitely a complex adaptive environment[3]. To put a finer point on this, if you deploy health IT in an environment in which the people, workflow, and information are aligned to improve quality and lower cost, then you are likely to see those results. On the other hand, if you implement the technology in an environment in which the people, workflow, and information are not aligned, you will likely not see improvement in either area.


Another reason it is important to look at health IT as a sociotechnical system is to couple the provider needs and capabilities to the health IT functions needed. I think, as an industry, we have not done this well. We too quickly jump into the technology, be it patient portal, registry, or e-prescribing, instead of focusing on the capability the IT is designed to enable, for example, patient collaboration, population management, or medication management, respectively.


Generally, the current crop of health IT has been focused on automating the business of healthcare, not on automating care delivery. The focus has been on generating and submitting billing, and on generating documentation to justify billing. Supporting chronic disease management, prevention, or wellness promotion takes a back seat. As the healthcare industry transitions to value-based payment, the focus has begun to change. As a healthcare system, we should focus on the capabilities that providers and hospitals need to support effective and efficient care delivery. From those capabilities, we can define the roles, workflows, data, and technology needed to support practices and hospitals in achieving them. By coupling our efforts to those capabilities, rather than merely adopting a standard, acquiring a piece of technology, or sending a message, we gain a metric to determine whether we are successful.


If we do not focus on the people, workflow, data, and technology, but instead only focus on adopting health IT, we will struggle to achieve the “Triple Aim™,” to see any return on investment, or to improve the satisfaction of providers and patients. At this time, a real opportunity exists to further our understanding of the optimization of sociotechnical systems in healthcare and to create resources to deploy those learnings into the healthcare system. The opportunity requires us to expand our focus to the people, workflow, information, AND technology.


What questions do you have about healthcare IT?


Steven E. Waldren, MD MS, is the director, Alliance for eHealth Innovation at the American Academy of Family Physicians





Read more >

Will the Invincible Buckeyes Team from OSU and OSC Prove to be Invincible?

Mike Bernhardt is the Community Evangelist for Intel’s Technical Computing Group


Karen Tomko, Scientific Applications Group Manager at the Ohio Supercomputer Center (OSC), has assembled a team of fellow Buckeyes to attempt the Intel Parallel Universe Computing Challenge (PUCC) at SC14 in November.


We asked Karen a few questions about her team, called the Invincible Buckeyes (IB), and their proposed participation in the PUCC.


The 2014 Invincible Buckeyes (IB) team includes (from l to r) Khaled Hamidouche, a post-doctoral researcher at The Ohio State University (OSU); Raghunath Raja, Ph.D student (CS) at OSU; team captain Karen Tomko; and Akshay Venkatesh, Ph.D student (CS) at OSU. Not pictured is Hari Subramoni, a senior research associate at OSU


Q: What was the most exciting thing about last year’s PUCC?

A: Taking a piece of code from sequential to running in parallel on the Xeon Phi in 15 minutes, in a very close performance battle against the Illinois team was a lot of fun.


Q: How will your team prepare for this year’s challenge?

A: We’ll do our homework for the trivia, brush up on the parallel constructs, look at some Fortran codes, and make sure we have at least one vi user on the team.


Q: What would you suggest to other teams who are considering participation?

A: First I’d say, if you are considering it, then sign up. It’s a fun break from the many obligations and talks at SC. When you’re in a match, don’t overthink it; the time goes very quickly. Also, watch out for the ‘Invincible Buckeyes’!


Q: SC14 is using the theme “HPC Matters” for the conference. Can you explain why “HPC Matters” to you?

A: HPC systems allow scientists and engineers to tackle grand challenge problems in their respective domains and make significant contributions to their fields. HPC has enabled innumerable discoveries in the fields of astrophysics, earthquake analysis, weather prediction, nanoscience modeling, multi-scale and multi-physics modeling, biological computations, and computational fluid dynamics, to name a few. Being able to contribute directly or indirectly to these discoveries through the research we do matters a lot to our team.

Read more >

IoT and Big Data Analytics Pilot Bring Big Cost Savings to Intel Manufacturing

As billions of new and legacy devices become connected in the Internet of Things (IoT), manufacturers need solutions that make sense of disparate data sources and deliver a holistic picture of factory health to solve key challenges and generate new … Read more >

The post IoT and Big Data Analytics Pilot Bring Big Cost Savings to Intel Manufacturing appeared first on IoT@Intel.

Read more >

IT Accelerating Business Innovation Through Product Design

For the Product Development IT team within Intel IT that I am a part of, these have been our recent mandates. We’ve been tasked with accelerating the development of Intel’s key System on Chip (SoC) platforms. We’ve been asked to be a key enabler of Intel’s growing software and services business. And we’ve been recognized as a model for employee engagement and cross-functional collaboration.


Much of this is new.


We’ve always provided the technology resources that facilitate the creation of world-class products and services. But the measures of success have changed. Availability and uptime are no longer enough. Today, it’s all about acceleration and transformation.


Accelerating at the Speed of Business


In many ways, we have become a gas pedal for Intel product development. We are helping our engineers design and deliver products to market faster than ever before. We are bringing globally distributed teams closer together with better communication and collaboration capabilities. And we are introducing new techniques and tools that are transforming the very nature of product design.


Dan McKeon, Vice President of Intel IT and General Manager of Silicon, Software and Services Group at Intel, recently wrote about the ways we are accelerating and transforming product design in the Intel IT Business Review.


The IT Product Development team, under Dan’s leadership, has enthusiastically embraced this new role. It allows us to be both a high-value partner and a consultant for the design teams we support at Intel. We now have a much better understanding of their goals, their pain points, and their critical paths to success—down to each job and workload. And we’ve aligned our efforts and priorities accordingly.


The results have been clear. We’ve successfully shaved weeks and months off of high-priority design cycles. And we continue to align with development teams to further accelerate and transform their design and delivery processes. Our goal in 2014 is to accelerate the Intel SoC design group’s development schedule by 12 weeks or more. We are sharing our best practices as we go, so please keep in touch.


To get the latest from Dan’s team on IT product development for faster time to market, download the Intel IT Business Review mobile app.

Follow the conversation on Twitter: hashtag #IntelIT

Read more >