Recent Blog Posts

NVM Express: Windows driver support decoded



NVMe Drivers and SSD support in Windows

Microsoft enabled native support for NVM Express (NVMe) in Windows 8.1 and Windows Server 2012 R2 by way of inbox drivers, and subsequent versions of each OS family are expected to include native support as well. Additionally, native NVMe support was added to Windows 7 and Windows Server 2008 R2 via product updates.

 

Intel also provides an NVMe driver for Microsoft operating systems. It is validated internally against each version of our NVMe hardware products and through Microsoft’s WHCK, and it releases alongside each product. The list of supported operating systems is the same as those above (in both 32-bit and 64-bit versions), plus Windows 8 and Windows Server 2012. The Intel NVMe driver supports only Intel SSDs and is required for power users or server administrators who plan to use the Intel® Solid-State Drive Data Center Tool to perform administrative commands on an NVMe SSD (e.g. firmware updates). Because the Intel driver is intended to provide the best overall experience in terms of performance and supportability, it is strongly recommended.
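If you are not sure which NVMe driver a given Windows system is actually loading (Microsoft’s inbox stornvme versus a vendor-supplied driver such as Intel’s), a quick look at the installed storage drivers will tell you. The snippet below is a minimal sketch that shells out to Windows’ built-in driverquery utility from Python and filters for NVMe-related entries; the exact names it matches on are an assumption and vary by vendor and OS version.

```python
import csv
import io
import subprocess

def list_nvme_drivers():
    """List installed Windows kernel drivers whose fields mention NVMe.
    Uses the built-in 'driverquery' tool with verbose CSV output."""
    out = subprocess.run(
        ["driverquery", "/v", "/fo", "csv"],
        capture_output=True, text=True, check=True
    ).stdout
    rows = csv.DictReader(io.StringIO(out))
    hits = []
    for row in rows:
        text = " ".join(str(v) for v in row.values()).lower()
        if "nvme" in text or "stornvme" in text:
            hits.append((row.get("Module Name"), row.get("Display Name")))
    return hits

if __name__ == "__main__":
    for module, display in list_nvme_drivers():
        # 'stornvme' indicates the Microsoft inbox driver; a vendor driver
        # (e.g. Intel's) shows up under its own module name.
        print(module, "-", display)
```

On a system relying on the inbox driver you would expect stornvme in the output, while a vendor driver appears under its own module name.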

 

 

Download Links by Operating System

 

NVMe Drivers for Windows

Operating System       | Intel Driver Download | Microsoft Driver Download
Windows 7              | intel.com             | microsoft.com
Windows Server 2008 R2 | intel.com             | microsoft.com
Windows 8              | intel.com             | Supported by upgrade to Windows 8.1
Windows Server 2012    | intel.com             | Supported by upgrade to Windows Server 2012 R2
Windows 8.1            | intel.com             | N/A (inbox driver)
Windows Server 2012 R2 | intel.com             | N/A (inbox driver)

 

 

Other Links of Interest

 

Link                                      | Details
Intel® Solid-State Drive Data Center Tool | A drive management tool (Intel SSD DCT) for the Intel SSD Data Center Family of products.
Intel® SSD Data Center Family Overview    | Provides access to more information on Intel’s NVMe PCIe SSDs.
nvmexpress.org                            | More information on what NVMe is, why you should consider using it, and news/upcoming events.

 

 

Other blogs with NVM Express driver information, by operating system:

NVM Express: Linux driver support decoded

The Skinny on NVM Express and ESXi


Read more >

Empathizing with Teachers and Learners Leads to the Read With Me App

Teaching is tough work. In one design thinking project that I mentioned in a previous blog post, empathy for teachers and students led to development of the Read With Me app (available now on Chrome and select Android devices) co-developed … Read more >

The post Empathizing with Teachers and Learners Leads to the Read With Me App appeared first on Intel Software and Services.

Read more >

Why Choose the Mini PC? Part 2

Retail and finance industries turn to Mini PCs for high performance, compact computing power

 


Whether it’s tucked away on a bookshelf, hidden behind a fitting room mirror or mounted on a digital display, Intel technology-based solutions featuring the Mini PC are helping to power industries as varied as the retail and the financial sectors. Thanks to their energy efficiency, compact design and high performance computing power, these tiny form factors bring full-sized PC power to the smallest of spaces. Here are some real-world examples of the endless possibilities with Mini PCs:

 

Mini PCs as Part of An Overall Technology Solution for Retail

 

One of my favorite Mini PC success stories is that of Galleria Rizzoli in Milan, Italy. Galleria Rizzoli saw the impact of digital book sales firsthand and decided to respond with a complete digital overhaul of its operations.

 

With the help of Intel technology, Galleria Rizzoli launched a pilot program that gave their store a complete technology makeover. Mini PCs powered new in-store digital signage and seven new in-store customer kiosks. Mini PCs replaced bulky desktop towers, freeing up valuable store space. Thanks to the technology makeover, sales increased 40 percent.

 

Galleria Rizzoli is a great example of how Mini PCs can enhance the user experience to help drive sales.

 

Overall, it’s a winning solution for Intel, for Rizzoli, and for consumers who might be looking to quickly find the perfect kids’ book for a boy who likes to play with trucks. Read the full story of how Mini PCs modernized the bookstore.

 

Embedded Mini PCs Enable Next-Gen Vending Machines

 

Whether you’re grabbing a quick snack at the office or simply refueling at the gas station, vending machines today are operating on a complex system of motherboards, dispensing tanks, and printing and credit card machines. Many new OEMs are currently working on consolidating all these disparate parts into one Mini PC solution.

 

Mini PCs in the Fitting Room

 

Instead of treating the fitting room like a revolving door, imagine being able to tap a screen to request a different size or color. Some retailers are exploring the idea of using the Mini PC to power touch-screen consoles in fitting rooms that give customers instant inventory access while also recommending related products for purchase.

 


National Grocery Chains Power POS with Mini PCs

 

The days of the bulky cash register have given way to more compact Mini PC-powered POS systems in grocery stores as well. Not only do Mini PCs leave a smaller footprint in tight cashier stalls, they also provide the high performance computing power necessary to ring up multiple items in quick succession.

 

Hospitality Industry Welcomes Mini PCs


Look inside many hotel business centers and you’ll likely see a row of monitors with Mini PCs tucked neatly behind them. The Mini PC offers a compact solution that won’t slow guests down. And some hotels are exploring the use of Mini PCs in guest rooms attached to the TVs along with concierge-type software to enhance the in-room guest experience.

 

Banks Turn to Mini PCs for Increased Efficiency


A growing number of banks are reaching for Mini PCs, not only for their compact size, but for their energy efficiency and speed. For many clients, a visit to the local bank reveals tellers relying on Mini PCs where desktop towers once stood. Mini PCs free up valuable desk space, offer compact security, and integrate with legacy systems.

 

Day Traders Turn to Mini PCs for Quick Calculations

 

For day traders, Mini PCs featuring solid-state-drives (SSDs) are the desktop PCs of choice. While traditional hard disk drives in PCs and laptops are fairly inexpensive, they are also slow. SSDs offer greater capacity, are considered more reliable, and enable faster access to data, which is critical to an industry where seconds matter.

 

Where have you seen the Mini PC in use? Join the conversation using #IntelDesktop or view our other posts in the Desktop World Series and rediscover the desktop.

 

To read part 1, click here: Why Choose the Mini PC? Part 1

Read more >

Future of IoT: 5 Questions with Technology Futurist Angela Orebaugh

We sat down with technology futurist Angela Orebaugh recently to chat about emerging Internet of Things (IoT) trends. In 2011, Angela was named Booz Allen Hamilton’s first Cybersecurity Fellow, a position reserved for the firm’s most notable experts in their … Read more >

The post Future of IoT: 5 Questions with Technology Futurist Angela Orebaugh appeared first on IoT@Intel.

Read more >

Population Health Management Best Practices for Today and Tomorrow’s Healthcare System

By Justin Barnes and Mason Beard

 

The transition to value-based care is not an easy one. Organizations will face numerous challenges on their journey towards population health management.

 

We believe there are five key elements and best practices to consider when transitioning from volume to value-based care:  managing multiple quality programs; supporting both employed and affiliated physicians and effectively managing your network and referrals; managing organizational risk and utilization patterns; implementing care management programs; and ensuring success with value-based reimbursement.

 

When considering the best way to proactively and concurrently manage multiple quality programs, such as pay for performance, accountable care and/or patient-centered medical home initiatives, you must rally your organization around a wide variety of outcomes-based programs. This requires a solution that supports quality program automation. Your platform must aggregate data from disparate sources, analyze that data through the lens of a program’s specific measures, and effectively enable the actions required to make improvements. Although this is a highly technical and complicated process, when done well it enables care teams to use real-time dashboards to monitor progress and identify focus areas for improving outcomes.
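To make “analyzing aggregated data through the lens of a program’s specific measures” a little more concrete, here is a minimal, hypothetical sketch: it pools patient records from two imaginary source systems and computes one made-up measure (the share of diabetic patients with a recent HbA1c result). The field names, records, and lookback window are illustrative assumptions, not any particular program’s specification.

```python
from datetime import date

# Hypothetical records aggregated from two source systems (e.g. EHR + claims).
ehr_records = [
    {"patient_id": 1, "diabetic": True,  "last_hba1c": date(2015, 3, 2)},
    {"patient_id": 2, "diabetic": True,  "last_hba1c": None},
]
claims_records = [
    {"patient_id": 3, "diabetic": False, "last_hba1c": None},
    {"patient_id": 4, "diabetic": True,  "last_hba1c": date(2014, 1, 15)},
]

def hba1c_measure(records, as_of, lookback_days=365):
    """Share of diabetic patients with an HbA1c result inside the lookback window."""
    diabetics = [r for r in records if r["diabetic"]]
    if not diabetics:
        return 0.0
    tested = [
        r for r in diabetics
        if r["last_hba1c"] and (as_of - r["last_hba1c"]).days <= lookback_days
    ]
    return len(tested) / len(diabetics)

# Aggregate, then evaluate the measure for a dashboard.
population = ehr_records + claims_records
print(f"HbA1c measure: {hba1c_measure(population, as_of=date(2015, 6, 1)):.0%}")
```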

 

In order to provide support to both employed and affiliated physicians, and effectively manage your network and referrals, an organization must demonstrate its value to healthcare providers. Organizations that do this successfully are best positioned to engage and align with their healthcare providers. This means providing community-wide solutions for value-based care delivery. This must include technology and innovation, transformation services and support, care coordination processes, referral management, and savvy representation with employers and payers based on experience and accurate insight into population health management as well as risk.

 

To effectively manage organizational risk and utilization patterns, it is imperative to optimize episodic and longitudinal risk, which requires applying vetted algorithms to your patient populations using a high-quality data set. To understand differences in risk and utilization patterns, you need to aggregate and normalize data from various clinical and administrative sources, and then ensure that the data quality is as high as possible. You must own your data and processes to be successful. And importantly, do not rely entirely on data received from payers.

 

It is also important to consider the implementation of care management programs to improve individual patient outcomes. More and more organizations are creating care management initiatives for improving outcomes during transitions of care and for complicated, chronically ill patients. These initiatives can be very effective.  It is important to leverage technology, innovation and processes across the continuum of care, while encompassing both primary and specialty care providers and care teams in the workflows. Accurate insight into your risk helps define your areas of focus. A scheduled, trended outcomes report can effectively identify what’s working and where areas of improvement remain.

 

Finally, your organization can ensure success with value-based reimbursement when the transition is navigated correctly. The shift to value-based reimbursement is a critical and complicated transformation—oftentimes a reinvention—of an organization. Ultimately, it boils down to leadership, experience, technology and commitment. The key to success is working with team members, consultants and vendor partners who understand the myriad details and programs, and who thrive in a culture of communication, collaboration, execution and accountability.

 

Whether it’s PCMH or PCMH-N, PQRS or GPRO, CIN or ACO, PFP or DSRIP, TCM or CCM, HEDIS or NQF, ACGs or HCCs, care management or provider engagement, governance or network tiering, or payer or employer contracting, you can find partners with the right experience to match your organization’s unique needs. Because much is at stake, it is necessary to ensure that you partner with the very best to help navigate your transition to value-based care.

 

Justin Barnes is a corporate, board and policy advisor who regularly appears in journals, magazines and broadcast media outlets relating to national leadership of healthcare and health IT. Barnes is also host of the weekly syndicated radio show, “This Just In.”

 

Mason Beard is Co-Founder and Chief Product Officer for Wellcentive. Wellcentive delivers population health solutions that enable healthcare organizations to focus on high quality care, while maximizing revenue and transforming to support value-based models.

Read more >

Make Your Data Centre Think for Itself

Wouldn’t it be nice if your data centre could think for itself and save you some headaches? In my last post, I outlined the principle of the orchestration layer in the software-defined infrastructure (SDI), and how it’s like the brain controlling your data centre organism. Today, I’m digging into this idea in a bit more detail, looking at the neurons that pass information from the hands and feet to the brain, as it were. In data centre terms, this means the telemetry that connects your resources to the orchestration layer.

 

Even the most carefully designed orchestration layer will only be effective if it can get constant, up-to-date, contextual information about the resources it is controlling: How are they performing? How much power are they using? What are their utilisation levels, and are there any bottlenecks due to latency issues? And so on. Telemetry provides this real-time visibility by tracking resources’ physical attributes and sending the intelligence back to the orchestration software.

 

Let me give you an example of this in practice. I call it the ‘noisy neighbour’ scenario. Imagine we have four virtual machines (VMs) running on one server, but one of them is hogging a lot of the resource and this is impacting the performance of the other three. Intel’s cache monitoring telemetry on the server can report this right back to the orchestration layer, which will then migrate the noisy VM to a new server, leaving the others in peace. This is real-time situational feedback informing how the whole organism works. In other words, it’s the Watch, Decide, Act, Learn cycle that I described in my previous blog post – doesn’t it all fit together nicely?
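As a thought experiment, that Watch, Decide, Act cycle can be sketched in a few lines. Everything below is a placeholder – the telemetry callback, the migration hook, and the 50 percent cache-share threshold – standing in for whatever cache-monitoring feed and orchestration API a real SDI stack exposes; it illustrates the control flow, not Intel’s implementation.

```python
# Hypothetical Watch-Decide-Act pass for the "noisy neighbour" scenario.
# The telemetry feed, the orchestrator hook, and the threshold are all
# illustrative placeholders, not a real Intel or SDI API.

CACHE_SHARE_LIMIT = 0.50  # a VM using more than half the LLC counts as "noisy"

def rebalance_host(host, read_cache_occupancy, migrate_vm):
    """One Watch-Decide-Act pass over a single server."""
    occupancy = read_cache_occupancy(host)      # Watch: collect telemetry
    for vm, share in occupancy.items():         # Decide: find the cache hog
        if share > CACHE_SHARE_LIMIT:
            migrate_vm(vm, source=host)         # Act: orchestrator moves it

# Toy usage with fake telemetry: vm2 is hogging the cache on server-01.
fake_telemetry = lambda host: {"vm1": 0.15, "vm2": 0.70, "vm3": 0.10}
rebalance_host(
    "server-01",
    fake_telemetry,
    migrate_vm=lambda vm, source: print(f"migrating {vm} off {source}"),
)
```

In a real deployment the Learn step would feed migration outcomes back into the thresholds over time.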

 

Lessons from History…

 

Of course, successful telemetry relies on having the right hardware to transmit it. Just think about another data centre game changer of the recent past – virtualisation. Back in the early 2000s, demand for this technology was growing fast, but the software-only solutions available put tremendous overhead demand on the hardware behind them – not an efficient way to go about it. So, we at Intel helped build in more efficiencies with solutions like Intel® Virtualization Technology, adding more memory, better addressability and huge performance gains. Today, we’re applying that same logic to remove SDI bottlenecks. Another example is Intel® Intelligent Power Node Manager, a hardware engine that works with management software to monitor and control power usage at the server, rack and row level, allowing you to set the usage policies for each.

However, we’re not just adding telemetry capabilities at the chip level and boosting hardware performance, but also investing in high-bandwidth networking and storage technologies.

 

….Applied to Today’s Data Centres

 

With technologies already in the market to enable telemetry within the SDI, there are a number of real-life use cases we can look to for examples of how it can help drive time, cost and labour out of the data centre. Here are some examples of how end-user organizations are using Intelligent Power Node Manager to do this:

 

 

[Figure: explanation of the orchestration layer in the data centre, with examples of Intelligent Power Node Manager in use]

Another potential use case for the technology is to reduce reliance on intelligent power strips. You could also throttle back server performance to extend the life of your uninterruptible power supply (UPS) in the event of a power outage, helping lower the risk of service downtime – something no business can afford.
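To illustrate the UPS scenario, here is a minimal sketch of the decision logic under assumed numbers: when utility power is lost, work out what draw the rack must stay under to ride out a target number of minutes on the UPS, and apply a cap if the current draw exceeds it. The set_rack_power_cap() hook and all of the figures are hypothetical; this is not the Node Manager API.

```python
# Hypothetical ride-through logic: throttle a rack so the UPS lasts long
# enough for an orderly shutdown or generator start. Numbers are illustrative.

def required_cap_watts(ups_capacity_wh, target_minutes):
    """Power draw the rack must stay under to ride out target_minutes on UPS."""
    return ups_capacity_wh * 60.0 / target_minutes

def on_utility_power_lost(current_draw_w, ups_capacity_wh, target_minutes,
                          set_rack_power_cap):
    cap = required_cap_watts(ups_capacity_wh, target_minutes)
    if current_draw_w > cap:
        # set_rack_power_cap() is a placeholder for whatever management
        # interface actually applies the policy (e.g. via Node Manager).
        set_rack_power_cap(cap)

# Example: a 10 kWh UPS and a 20-minute ride-through target allow ~30 kW,
# so a rack currently drawing 36 kW would be capped.
on_utility_power_lost(36_000, 10_000, 20, set_rack_power_cap=print)
```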

 

So, once you’ve got your data centre functioning like a highly evolved neural network, what’s next? Well, as data centre technologies continue to develop, the extent to which you can build agility into your infrastructure is growing all the time. In my next blog, I’m going to look into the future a bit and explore how silicon photonics can help you create composable architectures that will enable you to build and reconfigure resources on the fly.

 

To pass the time until then, I’d love to hear from any of you that have already started using telemetry to inform your orchestration layer. What impact has it had for you, and can you share any tips for those just starting out?

 

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance

 

You can find my earlier blogs on data centres here:

Is Your Data Centre Ready for the IoT Age?

Have You Got Your Blueprints Ready?

Are You Smarter than a Data Centre?

 

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

 

*Other names and brands may be claimed as the property of others.

Read more >

Server Refresh Can Reduce Total Cost of Ownership

More bang for your buck. Essentially that is the driving force behind my team in Intel IT. Our IT department is on a tight budget, just like most enterprise IT departments. Therefore, return on investment and total cost of ownership are important considerations for deciding when to upgrade the servers that run our silicon design workloads. As a principal engineer in infrastructure engineering, I direct the comparison of the various models of each new generation of Intel® CPU to those of previous generations of processors. (We may re-evaluate the TCO of particular models between generations, if price points significantly change.) We evaluate all the Intel® Xeon® processor families – Intel Xeon processor E3 family, Intel Xeon processor E5 family, and Intel Xeon processor E7 family – each of which has a different niche in Intel’s silicon design efforts.

 

We use industry benchmarks and actual electronic design automation (EDA) workloads in our evaluations, which go beyond performance to address TCO – we include throughput, form factor (density), energy efficiency, cost, software licensing costs, and other factors. In many cases over the years, one model has turned out better in terms of price per watt, but its performance was slower or its software licensing fees were triple those of a different model.
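As a simplified illustration of how those factors trade off, the sketch below compares two hypothetical server models on cost per unit of throughput over a four-year life, folding purchase price, energy, and per-node software licensing into one number. Every figure is invented; the point is the shape of the calculation, not our actual TCO model.

```python
# Toy total-cost-of-ownership comparison. Every figure is a placeholder.

def tco_per_throughput(purchase_usd, watts, license_usd_per_year,
                       relative_throughput, years=4, usd_per_kwh=0.10):
    energy = watts / 1000.0 * 24 * 365 * years * usd_per_kwh
    tco = purchase_usd + energy + license_usd_per_year * years
    return tco / relative_throughput  # lower is better

old_gen = tco_per_throughput(purchase_usd=20_000, watts=600,
                             license_usd_per_year=8_000,
                             relative_throughput=1.0)
new_gen = tco_per_throughput(purchase_usd=28_000, watts=650,
                             license_usd_per_year=8_000,
                             relative_throughput=1.4)

print(f"old: ${old_gen:,.0f} per unit of throughput")
print(f"new: ${new_gen:,.0f} per unit of throughput")
```

Even with a higher purchase price, the newer model can win once its extra throughput is spread across fixed licensing and energy costs.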

 

In silicon design, back-end design jobs are time critical and require servers with considerable processing power, large memory capacity, and memory bandwidth. For these types of jobs, the bottleneck has historically been memory, not CPU cycles; with more memory, we can run more jobs in parallel. The Intel Xeon processor E7-8800 v3 product family offers new features that can increase EDA throughput, including up to 20% more cores than the previous generation and DDR4 memory support for higher memory bandwidth. A server based on the Intel Xeon processor E7-8800 v3 can take either DDR3 (thereby protecting existing investment) or DDR4 DIMMs – and supports memory capacity up to 6 TB per 4-socket server (with 64 GB DIMMs) to deliver fast turnaround time for large silicon design jobs.
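As a quick sanity check on that capacity figure: 6 TB in a four-socket server populated with 64 GB DIMMs works out to 96 DIMMs, or 24 per socket. The small calculation below simply restates the numbers quoted above.

```python
# Memory capacity arithmetic for the figures quoted above.
tb_per_server = 6
gb_per_dimm = 64
sockets = 4

total_dimms = tb_per_server * 1024 // gb_per_dimm   # 6 TB expressed in 64 GB DIMMs
print(total_dimms, "DIMMs per server,", total_dimms // sockets, "per socket")
# -> 96 DIMMs per server, 24 per socket
```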

 

We recently completed an evaluation of the Intel Xeon processor E7-8800 v3 product family, as documented in our recent brief. According to our test results, the Intel Xeon processor E7 v3-based server delivers excellent gains in performance and supports larger models, faster iterations, and greater throughput than was possible with the previous generation of the processor. These improvements can accelerate long-running silicon design jobs and shorten the time required to bring new silicon design to market. These improvements can also reduce data center footprint and help control operational and software licensing costs by achieving greater throughput using fewer systems than were necessary with previous generations of processors.

 

Our tests used a large multi-threaded EDA application operating on current Intel® silicon design data sets. The results show that an Intel Xeon processor E7-8890 v3-based server completed a complex silicon design workload 1.18x faster than the previous-generation Intel Xeon processor E7-4890 v2-based server and 17.04x faster than a server based on the Intel® Xeon® processor 7100 series (Intel Xeon processor 7140M).

 

The Intel Xeon processor E7-8800 v3 product family also supports the Intel® Advanced Vector Extensions 2 (Intel® AVX2) instruction set. Benefits of Intel AVX2 include doubling the number of floating-point operations (FLOPs) per clock cycle, 256-bit integer instructions, floating-point fused multiply-add (FMA) instructions, and gather operations. While our silicon design jobs do not currently use AVX2 – mostly because design cycles can take over a year to complete, and during that time we cannot modify the plan-of-record (POR) EDA tools and infrastructure for those servers – we anticipate that Intel AVX2 can provide a performance boost for many technical applications.
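To see why the fused multiply-add and 256-bit width matter, a rough peak-FLOPS estimate helps: peak ≈ cores × clock × vector lanes × FLOPs per lane per cycle. The sketch below works that formula for a hypothetical 18-core, 2.5 GHz part, assuming two 256-bit FMA units per core for double precision; the core count, frequency, and per-core FMA throughput are illustrative assumptions rather than a specification of any particular processor.

```python
# Rough peak double-precision FLOPS estimate; all inputs are assumptions.

def peak_gflops(cores, ghz, vector_doubles, fma_units, flops_per_fma=2):
    # FLOPs per core per cycle = vector lanes * FMA units * 2 (multiply + add)
    per_core_per_cycle = vector_doubles * fma_units * flops_per_fma
    return cores * ghz * per_core_per_cycle

# Hypothetical 18-core, 2.5 GHz part:
scalar   = peak_gflops(18, 2.5, vector_doubles=1, fma_units=1, flops_per_fma=1)
avx2_fma = peak_gflops(18, 2.5, vector_doubles=4, fma_units=2)  # 256-bit AVX2

print(f"scalar, no FMA : {scalar:6.0f} GFLOPS")
print(f"AVX2 with FMA  : {avx2_fma:6.0f} GFLOPS")
```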

 

I’d like to hear from other IT professionals – are you considering refreshing? If you’ve already refreshed, can you share your observed benefits and concrete ROI? What best-known methods have you developed and what are some remaining pain points? If you have any questions, I’d be happy to answer them and pass on our own best practices in deploying these servers. Please share your thoughts and insights with me – and your other IT colleagues – by leaving a comment below. Join our conversation here in the IT Peer Network.

Read more >

Chromium Command-Line Options for Crosswalk Builds with the Intel XDK

If you are an HTML5 web app developer, something you probably take for granted with the Chrome browser on your desktop is the handy set of Chromium Command-Line Switches that can be used to enable experimental features in the browser or control useful debugging and … Read more >
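If you have not experimented with these switches on the desktop, here is a small example of launching Chrome with two widely documented ones (--remote-debugging-port and --show-fps-counter) from Python; the browser path is a placeholder, and the Crosswalk/Intel XDK specifics are covered in the full post.

```python
import subprocess

# Path is a placeholder; adjust for your system and Chrome install.
CHROME = r"C:\Program Files\Google\Chrome\Application\chrome.exe"

# Two well-known Chromium switches: open the DevTools remote-debugging port
# and draw the frames-per-second heads-up display.
flags = ["--remote-debugging-port=9222", "--show-fps-counter"]

subprocess.Popen([CHROME, *flags, "https://example.com"])
```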

The post Chromium Command-Line Options for Crosswalk Builds with the Intel XDK appeared first on Intel Software and Services.

Read more >

Looking to the Future: Smart Healthcare with Big Data – HLRS


The NHS is under unbelievable pressure to do more with less; according to the latest research from The King’s Fund, the NHS budget has increased by an average of just 0.8 per cent per year in real terms since 2010.

 

Clearly, there is a need for smart ways to improve healthcare around the world. One team of researchers at The University of Stuttgart is using cutting-edge technology and big data to simulate the longest and strongest bone in the human body — the femur — to improve implants.

 


Medical Marvels

 

For three years, this research team has been running simulations of the realistic forces that the thigh bone undergoes on a daily basis, for different body types and at a variety of activity levels, to inform doctors about what is needed to create much better implants for patients with severe fractures or hip deterioration. Successful implants can make a significant impact on the wearer’s quality of life, so the lighter and more durable they are, the better.

 

Femoral fractures are common: around 70,000 hip fractures take place in the UK each year, at an estimated cost of up to £2 billion. So identifying better implant materials that allow longer wear and better mobility would address a real need.

 

However, achieving simulation results for a fractured bone requires a huge amount of data. Bone is not a compact structure; it is like a calcified sponge. Such a non-homogeneous, non-uniform material behaves in different ways under different stresses for different people. This means that the team must collect hundreds of thousands of infinitesimally small scans from genuine bone samples to learn how different femurs are structured. The incredible detail and high resolution provided by high-performance machines powered by the Intel® Xeon® processor E5-2680 v3 enable them to run full femur simulations with this exact material data.

 

Such a level of intricacy cannot be handled on a normal cluster. In the University of Stuttgart research team’s experience, one tiny part of the femoral head — a cube of only 0.6 mm² — generates approximately 90,000 samples, and each of these samples requires at least six finite-element simulations to get the field of anisotropic material data needed to cover the full femoral head. Carrying out this large number of simulations definitely requires the supercomputer. To do this in a commercial way you would need thousands of patients, but with one supercomputer this team can predict average bone remodelling and develop reliable material models for accurate implant simulations. This is real innovation.
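The arithmetic behind that claim shows the scale quickly: 90,000 samples at six finite-element runs each is over half a million simulations for a single sub-volume. The sketch below restates those numbers; the per-run time is an invented placeholder, included only to show how the totals grow.

```python
# Back-of-the-envelope scale of the femoral-head study. The sample and
# simulation counts come from the text; the runtime per run is a placeholder.

samples_per_subvolume = 90_000
fe_runs_per_sample = 6
minutes_per_run = 5          # hypothetical

total_runs = samples_per_subvolume * fe_runs_per_sample
core_hours = total_runs * minutes_per_run / 60

print(f"{total_runs:,} finite-element runs for one sub-volume")
print(f"~{core_hours:,.0f} core-hours at {minutes_per_run} min per run")
```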

 

High-Performance Computing for Healthcare

 

The High Performance Computing Center Stuttgart (HLRS) is the institution that makes all this possible. One of just three large ‘tier 0’ supercomputing sites in Germany, it recently upgraded to the Intel Xeon processor E5-2680 v3, which according to internal tests delivers four times the performance of its previous supercomputer.ii This is great for the University of Stuttgart, as its computing centre now has four times the storage space.ii Research like this requires intensive data processing and accurate analysis, so significant computing capacity is crucial.

 


This new system enables breakthroughs that would otherwise be impossible. For more on HLRS’s cutting-edge supercomputing offering, click here.

 

I’d love to get your thoughts on the healthcare innovation enabled by the Intel HPC technologies being used by HLRS and its researchers, so please leave a comment below — I’m happy to answer questions you may have too.

 

To continue this conversation on Twitter, please follow me at @Jane_Cooltech.

 

Join the debate in the Intel Health and Life Sciences Community, and look for Thomas Kellerer’s HLRS blog next week!

 

For more on how Intel technology is used in the healthcare space, check out these blogs.

 

i ‘Storage and Indexing of Fine Grain, Large Scale Data Sets’ by Ralf Schneider, in Michael M. Resch et al., Sustained Simulation Performance 2013, Springer International Publishing, 2013, pp. 89–104. ISBN: 978-3-319-01438-8. DOI: 10.1007/978-3-319-01439-5_7

 

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance

 

*Other names and brands may be claimed as the property of others.

Photo Credit: copyright Boris Lehner for HLRS

Read more >