Recent Blog Posts

Devising the Theory of Economic Incentives for Cybersecurity

Technologists pursuing interesting and elegant solutions in cybersecurity frequently lack the economic knowledge to anticipate how other technologies, existing infrastructure, and technology evolution will influence the success of the technologies they are creating. Viable solutions may not … Read more >

The post Devising the Theory of Economic Incentives for Cybersecurity appeared first on Policy@Intel.


Podcast: Mike Bates Discusses Distributed Energy Resources and Changing Business Models for Utilities

In the above podcast, Mike Bates, Director of Energy at Intel, discusses how energy and utilities’ business models are changing and how distributed energy resources will be tied into the energy markets. Mike also shares some insights into the exciting trends … Read more >

The post Podcast: Mike Bates Discusses Distributed Energy Resources and Changing Business Models for Utilities appeared first on Grid Insights by Intel.


Intel Commends the Advent of Federal Autonomous Vehicle Guidelines

Marjorie Dickman, Global Director & Managing Counsel of Internet of Things Policy at Intel Corporation As a global technology leader in advancing the future of fully autonomous driving, Intel appreciates the U.S. Department of Transportation’s announcement today of guidelines for … Read more >

The post Intel Commends the Advent of Federal Autonomous Vehicle Guidelines appeared first on Policy@Intel.


Driving Cloud Gaming Into the Mainstream

A quick glance through 2016's cloud-gaming news and articles is enough to see how fundamentally divided game pundits are over the streaming delivery model.

On one side are the bulls. They espouse the sheer beauty of the concept: the world is trending toward digitalization, globalization, simplicity, speed, cross-screen and multi-device gaming, services over products, and subscription models replacing traditional sales. Consumers have already switched to Spotify*, Netflix*, and the like, so adding a streaming games subscription seems only natural; the shift is inevitable.

On the other side are the bears, influenced mainly by the turbulent history of some past ventures, by a user experience whose reliability is closely tied to a steady Internet connection and, above all, by the cost of the hardware involved. Cost efficiency is the Mother of All Battles, as we at PlayGiga are well aware.

Cloud gaming demands custom servers equipped with multi-GPU cloud graphics cards, fed, in turn, by high-end CPUs. On top of that, high-speed 10 Gb network interfaces are required to support the necessary number of concurrent users per server. The complexity of this solution is probably why a new generation of servers appears only every two or three years, and why these servers cannot easily be repurposed. Unfortunately, customization also reduces monetization opportunities during the roughly 95% of the time that servers sit underutilized, not being accessed by online gamers.

And here’s where Intel® HD Graphics technology, featured in the Intel® Xeon® processor E3 family, comes to the rescue. At PlayGiga we are testing general-purpose, high-density servers with integrated mainstream GPUs, like the Intel® Iris™ Pro graphics P580 available in the brand-new Intel® Xeon® processor E3-1500 v5 product family. We believe this is the right approach to making cloud gaming scalable while delivering cutting-edge performance per watt and more efficient rendering in the cloud. Plus, through server optimization, we are able to vastly reduce the amount of server space and infrastructure required in our datacenter.

The concept is simple enough: mainstream cloud gaming access is achieved through mainstream hardware. The inflection point will come once we start seeing the next generation of cloud-optimized video games, which will drive efficiency to an unheard-of level. With scalability and cost-efficiency guaranteed… nothing’s gonna stop us now!

PlayGiga is a technology and editorial platform founded in 2013 with the aim of leading the cloud gaming arena by developing a unique worldwide service based on proprietary hybrid sandboxed virtualization technology. If you’re interested in learning more about PlayGiga’s cloud gaming solution, please contact us:

Luisfer Fernandez – PlayGiga CTO

Ivan Mayo – R&D Manager, PlayGiga


Trending on IoT: Our Most Popular Developer Stories | September

Detecting Earthquakes

Code Sample: Earthquake Detector in JavaScript* for an Intel® Joule™ Development Platform

This code sample shows you how to keep track of seismic data using a development platform and an assortment of extensible sensors.

Intel® Joule™

Introducing the Intel® Joule™ Module

The Intel® Joule™ module offers a robust software stack. It includes a pre-installed, Linux*-based OS that is tailored for IoT, based on the Yocto* Project, with support for sensor libraries in the Intel® IoT Developer Kit.

IoT Enabled Concussion Detection System

IoT-Enabled Concussion Detection System

See how 15-year-old Riddick developed an IoT system within a football helmet that alerts a coach if a player is injured.

Build a Mobile App with Blynk*

Use Blynk* to Build a Mobile App for Any IoT Product in Minutes

Blynk* is an IoT platform that provides developers with a drag-and-drop interface that works on iOS* and Android*, and with multiple programming languages.

Warehouse scanner

Warehouse Scanner and Artificial Learning Database Demo

Use an Intel® Joule™ module with an artificial learning database on a Node.js* platform to quickly identify and classify different items for warehouse inventory purposes.

Toyhub Robotics Demo

Toyhub* Robotics Demo Using Cylon.JS* Gobot and an Intel® IoT Gateway

Quickly develop an IoT and robotics project using Toyhub*.

Intel® Quark™ Microcontroller

Accelerometer Tutorial Using Intel® Quark™ Microcontroller D2000

Utilize multiple samples from Intel® System Studio for Microcontrollers to get up to speed with basic functionality and become familiar with the Intel® Quark™ Microcontroller Software Interface (Intel® QMSI).

Using Libraries

Using Libraries for your IoT Project

Learn how to use two essential libraries—Libmraa* and UPM—for your IoT projects.


Using Universal Asynchronous Receiver/Transmitter (UART) Interfaces

Program UART interfaces using the open-source Intel® Quark™ Microcontroller Software Interface (Intel® QMSI) board support package that comes with the Intel® Quark™ microcontroller D2000.

Add a Button

Adding a Button and LED to your Intel® Quark™ Microcontroller Developer Kit D2000

Follow this tutorial to learn the basics of the Intel® System Studio for Microcontrollers by using the LED and user switch present on an Intel® Quark™ microcontroller D2000.

Our team of software developers, experts, partners, and enthusiasts has continuously shared innovative and exciting ideas in the Intel® IoT Developer Zone. This month we’ve compiled a list of our most popular IoT stories to guide you through the latest projects in the IoT space. Miss last month? Go to August IoT to catch up.



The post Open source software is leading us into a technical utopia appeared first on Intel Software and Services.

Open Source Software is Leading Us Into a Technical Utopia



It’s unlike anything else in any industry.

Open source software can turn individual developers into influencers overnight. Free enterprises from decades-long licensing dependencies. Turn startups into publicly traded companies in no time.

But it’s not new.

The practice of openly sharing code, allowing others to advance it with incremental contributions, and monetizing the deployment and/or support of it has actually been around for a while. Its recent popularity has simply pulled back the curtain to reveal its emergence in all variants of software stacks, democratizing the innovation around operating systems, applications, mobile platforms, services, languages, frameworks and tools.

There’s a utopian vibe to open source activity—a collective momentum that works for the betterment of all. Individuals and contributors can use or steer a project to fulfill their needs, meaning what you put in benefits both you and everyone else.

And it doesn’t matter who you’re working with.

I’ve talked at meetups where developers and data scientists sit side by side, advancing the same project despite the fact that they work for competing companies. Taking part in an open source project really means everyone is in it together.

It’s a philosophy that is capturing the mindshare of a growing number of companies, including Intel’s long-term partner Microsoft, which recently released .NET Core 1.0, an open source software development platform that runs on both Linux and Mac OS X.

This was welcome news to the dozens of data scientists and developers I engaged with at the recent Spark Summit West event (I was there demo’ing the Trusted Analytics Platform (TAP) in a booth that shared its border with the Microsoft booth). The general consensus: “Great news!”

Obviously that’s just one example. If you look hard enough at many successful technology companies, whether built on software or hardware, you’ll likely find some pedigree of open source activity.

At another recent event, OSCON, I had the opportunity to be interviewed by O’Reilly’s Mike Hendrickson, who asked about Intel’s involvement in open source, including our significant contributions to many analytics-related projects and our incubation of TAP. You can watch the full video below:

Company involvement in open source can be a point of great pride for those that take part. Large companies can tweak elements of open source to allow users to maximize their use of code and the project or library’s performance on underlying hardware. It’s a way to gently reach into what everyone wants to build on—and provide a foundation for even greater success.

I’m looking deeper into various examples of open source benefits and look forward to posting my findings. For a more in-depth look at Intel’s other open source projects, please visit


Meet the Future of Driving: High Performance In-Vehicle Compute with Headroom to Grow

It’s truly an exciting time for in-vehicle computing. What started with parking assist and rearview cameras has evolved into lane departure warning and dynamic cruise control, requiring more intelligence and compute in the vehicle. Each step toward the vision of … Read more >

The post Meet the Future of Driving: High Performance In-Vehicle Compute with Headroom to Grow appeared first on IoT@Intel.


America’s Path Progresses to a National “Internet of Things” Strategy

By Marjorie Dickman, Global Director & Managing Counsel, IoT Policy at Intel Corporation In a nearly unanimous vote of 367-4, the House of Representatives approved a resolution this week calling for the United States to develop a national strategy to advance … Read more >

The post America’s Path Progresses to a National “Internet of Things” Strategy appeared first on Policy@Intel.


Deprecating the PCOMMIT Instruction

Executive Summary

The PCOMMIT instruction has been deprecated.  Although it was documented earlier, Intel has dropped it from consideration for future products.  This blog post explains the details behind that decision.


Enabling Persistent Memory Programming                               

In preparation for the emerging persistent memory technologies, like Intel DIMMs based on 3D XPoint™ technology, Intel has defined several new instructions to enable the persistent memory programming model.  First, there are two new optimized cache flushing instructions, CLWB and CLFLUSHOPT.  These instructions are described in the Intel Architecture Instruction Set Extensions Programming Reference and are slated to appear on various platforms, including those supporting the Intel DIMM.  They provide a high performance method to flush stores from the CPU cache to the persistence domain, a term used to describe that portion of a platform’s data path where stores are power-fail safe.

Originally, the set of new instructions included one called PCOMMIT, intended for use on platforms where flushing from the CPU cache was not sufficient to reach the persistence domain.  On those platforms, an additional step using PCOMMIT was required to ensure that stores had passed from memory controller write pending queues to the DIMM, which is the persistence domain on those platforms.

The picture below illustrates the data path taken by a store (MOV) to persistent memory.

[Figure: data path of a MOV to persistent memory, showing the CPU caches, the memory controller Write Pending Queue (WPQ), and the persistence domain]

As shown above, when an application executes a MOV instruction, the store typically ends up in the CPU caches.  Instructions like CLWB can be used to flush the store from the CPU cache.  At that point, the store may spend some amount of time in the write pending queue (WPQ) in the memory controller.  As shown above, the larger dashed box represents the power-fail safe persistence domain on a platform that is designed to flush the WPQ automatically on power-fail or shutdown.  One such platform-level feature to perform this flushing is called Asynchronous DRAM Refresh, or ADR.

When the persistent memory programming model was first designed, there was a concern that ADR was a rarely-available platform feature so the PCOMMIT instruction was added to ensure there was a way to achieve persistence on machines without ADR (platforms where the persistence domain is the smaller dashed box in the picture above).  However, it turns out that platforms planning to support the Intel DIMM are also planning to support ADR, so the need for PCOMMIT is now gone.  The result is a simpler, single programming model where the application need not contain logic for detecting whether PCOMMIT is required.  For this reason, PCOMMIT is being deprecated before ever shipping on an Intel CPU, removing any need to support the instruction in older software since no software could have contained it (the opcode has always produced an invalid opcode exception and will continue to do so).

As shown in the picture above, a platform may still have a way to flush the WPQ (shown as WPQ Flush above).  Unlike the PCOMMIT instruction, this is a kernel-only facility used to flush commands written to DIMM command registers, or used by the kernel in the rare case where it wants to ensure something is immediately flushed to the DIMM.  The application is typically unaware the WPQ Flush mechanism exists.


The Simpler Programming Model

The picture below shows a sample instruction sequence for storing values (10 and 20) to persistent memory locations.

[Figure: side-by-side instruction sequences for storing to persistent memory — left: without ADR, using PCOMMIT; right: with ADR]
The sequence on the left was required on platforms that did not have the ADR feature to flush the WPQ on power-fail/shutdown.  Since ADR is now a requirement for persistent memory support, the simpler sequence on the right can be used for all platforms.
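Reconstructed from the description above (the original figure is not reproduced here, and the flush instruction could equally be CLFLUSHOPT), the two sequences look roughly like this, where X and Y are locations in persistent memory:

```asm
; Old sequence: platforms without ADR (PCOMMIT required)
MOV    [X], 10     ; store 10 to persistent memory location X
MOV    [Y], 20     ; store 20 to persistent memory location Y
CLWB   [X]         ; flush X from the CPU caches
CLWB   [Y]         ; flush Y from the CPU caches
SFENCE             ; order the flushes before PCOMMIT
PCOMMIT            ; push stores from the WPQ to the DIMM
SFENCE             ; wait for PCOMMIT to complete

; New sequence: ADR platforms (now all persistent memory platforms)
MOV    [X], 10
MOV    [Y], 20
CLWB   [X]
CLWB   [Y]
SFENCE             ; once flushed to the WPQ, the stores are persistent
```

The difference is exactly the PCOMMIT/SFENCE pair: with ADR guaranteeing that the WPQ is flushed on power failure, reaching the WPQ is sufficient for persistence.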


Operating System and Toolchain Changes   

To prepare for persistent memory programming, some operating systems, compilers, assemblers, and libraries were modified to use the PCOMMIT instruction.  Since the instruction was not guaranteed to exist on a given platform, any software using PCOMMIT would only do so if the appropriate CPUID flag indicated PCOMMIT was supported (the exact flag is CPUID.(EAX=07H, ECX=0H):EBX, bit 22).  Since PCOMMIT is deprecated, that CPUID flag is now reserved to always be zero, rendering any code that uses PCOMMIT dead code that will never execute.

The harmless dead code can be removed over time, but as of this writing, all known operating systems supporting persistent memory, as well as the Non-Volatile Memory Libraries (NVML), have already been updated to remove all uses of PCOMMIT.



The programming model for persistent memory on Intel CPUs has been simplified by deprecating the PCOMMIT instruction before its first implementation.  Most software, including the Non-Volatile Memory Libraries (NVML), has already been updated to reflect this change.


Glossary of Terms

Power-fail Protected Domain (Persistence Domain)

When storing to pmem, this is the point along the path taken by the store where the store is considered persistent.


ADR (Asynchronous DRAM Refresh)

A platform-level feature where the power supply signals other system components that power-fail is imminent, causing the Write Pending Queues in the memory subsystem to be flushed.


PCOMMIT

An instruction allowing an application to flush-on-demand the memory subsystem Write Pending Queues.  With ADR required, this instruction is no longer necessary and is being deprecated.


WPQ (sometimes called TPQ)

Write Pending Queues in the memory subsystem.


Cache Flushing Instructions (CLFLUSH, CLFLUSHOPT, CLWB)

Instructions that flush lines from the CPU caches.  CLWB and CLFLUSHOPT are recent additions for better pmem performance.


CPUID

The mechanism allowing software to detect what features are supported by a CPU.
