Recent Blog Posts

Build a Path to Deeper Insights

HPC Speaker Series at Stanford Shows How Modern Code Helps Developers Build Software Designed to Take Advantage of Today’s Powerful Hardware

From the discovery of the Higgs boson particle at the Large Hadron Collider, to the development of hypersonic vehicles, to the mapping of the human genome, high performance computing (HPC) is changing the world.

And recent advancements in hardware—including multi- and many-core processors, high-bandwidth inter-processor communication fabrics, lightning-fast memory, huge caches, and broad I/O capabilities—mean that today’s processors have the power to run increasingly demanding workloads, like big data analytics, visualization, machine learning, and more.

These are the kinds of workloads that deliver the deep insights that fuel innovation and expand human capabilities.

 

The first step: Get up to speed on modern code

By incorporating parallelism at multiple levels—including vectorization, multithreading, and multi-node optimization—developers can take full advantage of the modern hardware capabilities that power these kinds of strategic breakthroughs. And by embracing a modern code approach, developers can also future-proof their code and deliver software that is scalable, portable, and built to last.

To help developers build a modern code approach that takes advantage of today’s powerful hardware, Intel offers tools, libraries, videos, webinars, and recorded and live trainings (many hands-on) as part of the Intel Modern Code Developer Community.

 

Stanford High Performance Computing Center Speaker Series

Our live trainings include practical, hands-on “lunch and learns” at the Stanford High Performance Computing Center (HPC Center), which provides high performance computing resources and services to enable computationally intensive research within the Stanford School of Engineering.

The HPC Center Lunch and Learn seminars are an opportunity for students and professional developers alike to meet face to face with HPC industry experts and learn about code modernization tools and best practices.

The most recent sessions covered:

Ways to increase Python* performance

“Intel® Distribution for Python: A Scalability Story in Production Environments,” presented by Sergey Maidanov, head of the Intel Distribution for Python* team, covered ways to develop and optimize technical computing programs in the Python language to achieve near-native code performance and to avoid the need to rewrite code.

The session covered a number of Intel high performance libraries and profilers, and described how Intel is extending support for multi-core and vectorization (Single Instruction, Multiple Data) parallelism for the Intel® Distribution for Python.

Case studies covered in the session showed speedups of 100x and more from highly optimized libraries such as NumPy/SciPy, the Intel® Data Analytics Acceleration Library (Intel® DAAL), and Scikit-learn*, and illustrated how those gains scale across multiple cores and multiple nodes. Sergey also covered how Intel® VTune™ Amplifier enables low-overhead profiling of Python and native code to identify performance hotspots.
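To make that concrete, here is a minimal sketch (my own illustration, not code from the session) comparing a pure-Python loop with a single NumPy call. In the Intel Distribution for Python, NumPy dispatches such calls to the MKL-backed, multithreaded BLAS; the exact timings will vary by machine.

import time
import numpy as np

n = 2_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Pure-Python dot product: an interpreted loop, one element at a time.
t0 = time.perf_counter()
acc = 0.0
for x, y in zip(a, b):
    acc += x * y
t_loop = time.perf_counter() - t0

# NumPy dot product: a single call into vectorized, multithreaded BLAS.
t0 = time.perf_counter()
acc_np = np.dot(a, b)
t_numpy = time.perf_counter() - t0

print(f"loop:  {t_loop:.3f} s")
print(f"numpy: {t_numpy:.3f} s (~{t_loop / t_numpy:.0f}x faster)")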

Performance tuning for HPC workloads

Intel senior HPC engineer Thanh Phung and Intel VTune HPC lead Dmitry Prohorov discussed Intel® VTune™ Amplifier XE and gave a demo of its use in studying HPC workload performance.

The session, “Deep-Dive Performance Characterization and Tuning for HPC Workloads Using Intel VTune Amplifier XE Tool,” described the iterative process needed to optimize workload performance and used code samples to show how VTune supports parallel performance tuning, helping developers increase CPU utilization, memory efficiency, and floating-point unit (FPU) utilization.
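The session’s own samples aren’t reproduced here, but as a loose illustration of the kind of memory-efficiency issue this sort of profiling typically surfaces, consider strided versus contiguous traversal of a row-major array. This is a hypothetical Python/NumPy sketch; the same principle applies to C and Fortran HPC codes.

import numpy as np

a = np.random.rand(4000, 4000)  # C-ordered: each row is contiguous in memory

def column_major_sum(m):
    # Strided access: every element read jumps a whole row ahead in memory,
    # so caches and hardware prefetchers are used poorly.
    total = 0.0
    for j in range(m.shape[1]):
        total += m[:, j].sum()
    return total

def row_major_sum(m):
    # Contiguous access: each row is streamed sequentially from memory,
    # which is the pattern a memory-access analysis steers you toward.
    total = 0.0
    for i in range(m.shape[0]):
        total += m[i, :].sum()
    return total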

Putting vectorization to use

At the session “Guided Code Vectorization with Intel® Advisor XE,” Ryo Asai, a researcher at Colfax International, discussed the use of the Intel® Advisor optimization tool. He illustrated it with an example workload that computes the electric potential, produced by a group of charged particles, at a set of points in 3-D space; the workload achieved a 16x performance boost after optimization and vectorization.

In this example, Intel Advisor XE detected a vector dependence, a type conversion, and an inefficient memory access pattern. Ryo showed attendees how to interpret the data presented by Intel Advisor, and how to optimize the application to resolve the issues.
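The session’s native code isn’t shown here, but a rough NumPy analogue of the workload described (the potential at each observation point is the sum of each charge divided by its distance to that point; physical constants are dropped for brevity) gives a feel for what removing the scalar inner loop looks like.

import numpy as np

rng = np.random.default_rng(0)
charges = rng.uniform(-1.0, 1.0, size=512)          # particle charges q_i
charge_pos = rng.uniform(-1.0, 1.0, size=(512, 3))  # particle positions r_i
points = rng.uniform(-1.0, 1.0, size=(4096, 3))     # observation points r_j

def potential_loop(points, charge_pos, charges):
    # Scalar reference version: explicit loops over points and charges.
    phi = np.zeros(len(points))
    for j, p in enumerate(points):
        for q, r in zip(charges, charge_pos):
            phi[j] += q / np.linalg.norm(p - r)
    return phi

def potential_vectorized(points, charge_pos, charges):
    # Vectorized version: one broadcasted distance computation, no inner loop.
    diff = points[:, None, :] - charge_pos[None, :, :]   # (n_points, n_charges, 3)
    dist = np.sqrt((diff * diff).sum(axis=-1))
    return (charges / dist).sum(axis=1)

assert np.allclose(potential_loop(points[:16], charge_pos, charges),
                   potential_vectorized(points[:16], charge_pos, charges))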

Faster machine learning applications with Intel® Performance Libraries

In “Building Faster Machine Learning Applications with Intel Performance Libraries,” Shaojuan Zhu, an Intel technical consulting engineer, and Sarah Knepper, an Intel software engineer, gave an overview of two performance libraries, the Intel® Math Kernel Library (Intel® MKL) and Intel Data Analytics Acceleration Library (Intel DAAL), which offer optimized building blocks for data analytics and machine learning algorithms. They also introduced the Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN), which offers deep-learning framework optimization with DNN primitives.

Focusing on lower-level primitive functions, Intel MKL is a collection of routines for linear algebra, fast Fourier transform (FFT), vector math, and statistics that can be used to speed up math processing in almost every kind of technical computing application.

Intel DAAL focuses on data applications and provides higher-level, ready-made solutions for supervised and unsupervised learning. This library of fundamental algorithms, optimized for Intel architecture, covers all machine learning stages, from data management and processing to modeling, and does so for offline, streaming, and distributed analytics usages.
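As a loose Python illustration of those two levels (both libraries also expose native APIs, and scikit-learn stands in here for the kind of ready-made algorithm Intel DAAL provides): a NumPy matrix multiply and FFT exercise the low-level primitives, which dispatch to MKL when NumPy is built against it, while a canned clustering algorithm represents the higher-level building blocks.

import numpy as np
from sklearn.cluster import KMeans

# Low-level primitives: dense matrix multiply (BLAS) and an FFT. With the
# Intel Distribution for Python, these NumPy calls are backed by Intel MKL.
a = np.random.rand(1024, 1024)
b = np.random.rand(1024, 1024)
c = a @ b
spectrum = np.fft.rfft(a[0])

# Higher-level, ready-made algorithm: unsupervised clustering, the kind of
# "canned" building block that Intel DAAL offers across machine learning stages.
X = np.random.rand(10_000, 8)               # 10k samples, 8 features
model = KMeans(n_clusters=5, n_init=10).fit(X)

print(c.shape, spectrum.shape, model.cluster_centers_.shape)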

The session also covered the Intel Distribution for Python, as well as an upcoming Intel-optimized Caffe* framework.

Next up: Simulation for IoT and wearable devices

Our next training session, “Simulation for IoT—A Multi-Physics Approach for a Wearable IoT Device,” takes place October 15. The discussion will center on how, in the race to develop the next blockbuster wearable device, ANSYS* Electronics tools can help ensure that Internet of Things (IoT) products deliver exceptional performance and user experiences. We’ll demonstrate ANSYS’s simulation offering and explore how it can also improve the manufacturing process and durability of smartwatch products.

Learn more and register for this hands-on, face-to-face seminar here.

And discover more about code modernization at the Intel® Software Developer Zone.

To learn more about the HPC Center, visit hpcc.stanford.edu.

Read more >

Building Trust Between Human and Machine on the Road to Automated Vehicles

It’s a tremendously exciting time to work in smart and connected transportation. Automated vehicles are redefining our relationship with automobiles, and the promise of fully automated driving—zero accidents, less pollution, greater mobility, more productivity—is thrilling. Yet there is a growing … Read more >

The post Building Trust Between Human and Machine on the Road to Automated Vehicles appeared first on IoT@Intel.

Read more >

Connectivity for a European Gigabit Society – starting today?

The European Commission proposed on September 14 a “connectivity package” containing, amongst others, a Code overhauling the current EU telecoms rules and an Action Plan for 5G, aiming to meet Europeans’ growing connectivity needs and boost Europe’s competitiveness. Intel … Read more >

The post Connectivity for a European Gigabit Society – starting today? appeared first on Policy@Intel.

Read more >

“Closing the Talent Gap in Federal IT” — A Model Congressional Hearing!

By Thomas Gann, Director, Global Public Sector, Government and Policy Group, Intel Corporation It’s fair to say that the terms “Inspirational” and “Congressional hearing” rarely go together.  Too often hearings focus on the scandal of the day or on arcane … Read more >

The post “Closing the Talent Gap in Federal IT” — A Model Congressional Hearing! appeared first on Policy@Intel.

Read more >

Mitch moving on…but not too far

For the last seven years (nearly my whole professional career) I’ve had the privilege of working at Intel to serve the needs of game developers. Arguably I have the best job in the whole company. I get to understand our technologies and upcoming roadmap, listen to what game devs need to be more successful, and then distill that into something we can deliver to move the needle for game devs and Intel.

Since 2011, I’ve had the opportunity to evolve and grow the Intel® Level Up Game Dev Contest. We’ve partnered with Valve to bring distribution on Steam to winners, with Epic and Unity to broaden awareness, and most recently with Razer to get hardware for the winners. Throughout the years I’ve built great relationships with the judges, partners, and some amazing game developers that Intel wouldn’t have had any other way of meeting.

One of the biggest things that a company like Intel can offer indies is access to hardware – that’s come both through hardware seeding and through event rentals/loans via Intel Demo Depot. In 2013 we partnered with Indie Mega Booth to seed about 15 developers with the “PUB-RD” Ultrabook prototypes. That provided a lot of learning on how a big company like Intel can work better with smaller indie devs, learning we have put to use since, including in our seeding of BRIX systems at Steam Dev Days the following January.

Through all these “things I’ve done” the most important thing to me is the friendships I’ve made along the way.

So, why would I give this all up and what am I doing next?

While it’s been absolutely fantastic holding the “dream job” at Intel for so long, I’ve realized that in order to really serve the needs of game devs, I need a much deeper understanding of our products. What goes into our processor roadmap and the systems they’re designed for, how we partner with OEMs to bring those systems to market and how we think about customers who will buy and use them are the things that I intend to learn in my new role.

As I said in the title – I’m not going too far away from game dev. I’m moving to the product team responsible for Performance Notebooks. The folks who use these systems are content creators, digital artists, and of course gamers. So I’ll be taking what I’ve learned from all of you game devs and applying it to system-level innovations that deliver better experiences for end users.

In the near term, I’m extremely excited about this holiday season’s lineup of VR-ready notebooks. At the beginning of the year, we partnered with HTC and showed the MSI GT72 running a Vive at GDC – at that time there were only a couple of systems capable of running VR, they were touted as “laptops with desktop parts in them,” and they were BIG. Shipping now are VR-capable laptops starting as low as $1,499 and weighing as little as around 4 lbs. For game devs currently lugging around a desktop tower, monitor, keyboard, and mouse, not to mention their HMD, for demos, shows, and game jams, the ability to just bring a laptop and HMD is going to be revolutionary.

So while I’m changing roles, I hope to keep the exciting stuff coming to help game developers. I’ll close with a few pictures from the last 7 years.

Figure 1 – My first GDC, 2010

 

Figure 2 – 2014 Steam Dev Days – there were A LOT of Gigabyte BRIX Pro systems seeded in one evening

 

Figure 3 – Level Up Winners Kiosk – featuring winners from Level Up 2013 during GDC 2014 to help launch the 2014 season.

 

Figure 4 – Beat Buddy joins us at the Intel Booth, GDC 2015 and then he sees his father, Wolf Lang on TV!

 

Figure 5 – We did it! Cindi and I after booth breakdown, SIGGRAPH 2015

 

Figure 6 – 2016 PAX West – with Level Up Winner Tim Keenan who shipped Duskers right after submitting this spring

Read more >

Intel® and IoT Solutions World Congress Hackathon

Explore. Innovate. Create a New World.

Fira Barcelona, Gran Via Venue, Hall 2
Barcelona, Spain
October 23-24, 2016

Register Now! 

Beyond its revolutionary impact on business and industry, IoT has enormous potential to address critical problems in modern societies. Participate in this unique IoT Hackathon, where you will develop creative and innovative industrial solutions with real social application.

The Intel® and IoT Solutions World Congress Hackathon will challenge you to create industrial solutions in Health Care, Environment, Transportation and Education.

Event Details
•    Date: October 23-24, 2016
•    Location: Fira Barcelona, Gran Via Venue, Hall 2, Barcelona, Spain
•    Participants: 250
•    Prizes: $10,000 in total prizes

Participants will have access to technologies at the edge, at the gateway, and in the cloud with the Intel® IoT Platform. They will learn how to prototype and deploy industrial IoT solutions powered by the latest Intel developer tools, get hands-on training and experience with Intel® IoT Developer Kits, and engage with Intel and Microsoft experts to get answers to their technical questions.

For more information go to Intel® and IoT Solutions World Congress Hackathon.

Space is limited and available on a first-come, first-served basis, so sign up today!

 

Read more >

Demonstrating innovation and safety: drones take flight in the European Parliament

  By Kirsty Macdonald With its spiralling glass and steel atrium and myriad overlaying bridges, the European Parliament might not be the most obvious place to fly a drone but that is exactly what we did at a conference on … Read more >

The post Demonstrating innovation and safety: drones take flight in the European Parliament appeared first on Policy@Intel.

Read more >

Pipeline and the Efficient Chef (Part 1)

Advanced computer concepts for the (not so) common Chef

So far, we’ve talked about the components of a computer system, e.g. the core, memory hierarchy, and program. Let’s now get into the guts (sic) of the core itself, also called the micro-architecture or, as I write it, the uArchitecture. Recall that the core is equivalent to a Chef working in his kitchen. (See Not So Common Chef: The Home Kitchen.) So what we’re doing here is looking more closely at how the Chef uses his kitchen (i.e. the components of a computer system) to go through the steps of a recipe.

The most important part of the uArchitecture is the pipeline. OK, I agree that there are many other components to the CPU, all of which are fundamental to the operation of the processor. Those components, though vital, play supporting roles to the pipeline. The pipeline is perhaps the heart and soul of the computer, as it does the actual execution of instructions, and so, of programs. The pipeline is fundamental to computer processing and has existed arguably since the 1950s; some would say it dates from the second commercially available computer, the UNIVAC I.

A CPU has to do some basic things to execute a program. It has to fetch each instruction from memory (IF – Instruction Fetch), read the instruction (ID – Instruction Decode), get the data the instruction needs (MEM – Memory access), perform the operation specified by the instruction (EX – Execute), and store the result (WB – Write Back). See Figure PHYSICAL for a rough functional layout of a pipeline. For the uninitiated, these acronyms and terms can be obscure. As I will show below, these steps map to how a chef performs one step in a recipe: he reads the step, gathers the ingredients, performs the recipe step, and then puts aside what the step produced in preparation for the next step.

Figure PHYSICAL. Rough functional layout of the pipeline circuitry

 

Before we get more into the CPU pipeline, let’s look at our chef as he once again delves into the art of creation. Instead of looking at everything the chef does, let’s narrow in on just one step of a recipe. We are not looking at the creative processes of our epicure, just the mechanics.

Gazing down from on high into the kitchen, we look at the Chef when he is in the middle of preparing one of his amazing appetizers. Pulling the cookbook closer, he identifies the next step in the recipe, say, step 3. Finding the step, he reads it carefully, noting in particular, (1) what he needs to use from the previous steps, (2) any new ingredients, (3) what he needs to do with those ingredients, and (4) what he needs to do with the result.

 

It turns out that almost everything done by the Chef has an analogy in how the processor performs one instruction. See Table EQUIVALENCE. This is because the Chef, like the uArchitecture pipeline, is the one who actually follows the recipe, doing each step in order.

Pipeline Stage | Done by the CPU/microcode | Done by the Chef
Instruction Fetch (IF) | Get the program’s instruction from memory | Find and read the next step in the recipe
Instruction Decode (ID) | The computer’s pipeline circuitry decodes the instruction into a series of electrical signals that will execute the instruction | Understand* the recipe step
Memory access (MEM) | Get the data from memory that the instruction needs to operate on | Gather the ingredients needed for the step
Execute (EX) | Perform the instruction on the data (e.g. ADD) | Do what the recipe step says
Write Back (WB) | Save the result of the operation in the memory hierarchy; if the result isn’t needed soon, write it back to actual memory | Put the result to the side so that you can perform the next step in the recipe; if the result isn’t needed immediately, put it back (write back) into the Pantry or refrigerator until it’s needed

TABLE EQUIVALENCE. Pipeline stages and the cooking equivalent
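To make the mapping concrete, here is a toy sketch (mine, not from the post) that runs a couple of pretend instructions through those five stages one at a time; overlapping the stages across instructions is the subject of the next post.

MEMORY = {"x": 3, "y": 4, "z": 0}

def run(program):
    registers = {}
    for instr in program:
        op, dst, src1, src2 = instr                # IF: fetch the instruction
        execute = {"ADD": lambda a, b: a + b,      # ID: decode the opcode into
                   "MUL": lambda a, b: a * b}[op]  #     the action to perform
        a = registers.get(src1, MEMORY.get(src1))  # MEM: gather the operands
        b = registers.get(src2, MEMORY.get(src2))
        result = execute(a, b)                     # EX: perform the operation
        registers[dst] = result                    # WB: keep the result handy...
        MEMORY[dst] = result                       #     ...and write it back
    return MEMORY

print(run([("ADD", "t", "x", "y"),    # t = x + y
           ("MUL", "z", "t", "x")]))  # z = t * x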

 

NEXT: Pipeline parallelism

*Yes, indeed. Reading is different from understanding. As an engineer, I can read a section from one of the great existentialist Chefs, but understanding it is a totally different cup of tea.

Read more >

STEM-tastic Results in our Oakland Education Initiative

Posted for Barbara Whye, the Executive Director of Strategy and External Alliances in Intel’s Global Diversity and Inclusion Office – you can follow her on Twitter @steministbarb. The Intel-Oakland Unified School District initiative has produced fantastic results in year one … Read more >

The post STEM-tastic Results in our Oakland Education Initiative appeared first on CSR@Intel.

Read more >

Intel® IoT Developer Kit: Your Gateway to Commercial Cloud Solutions with Microsoft Azure* IoT

Microsoft Azure* collaborates with Intel® IoT technologies to provide developers with a full set of development tools – from the edge to the cloud.

For many data-driven businesses, it’s already challenging to keep the software, firmware, and configuration of IoT devices up to date. That issue is often compounded when equipment and sensors become geographically dispersed. As developers ask IoT to do more than ever before, the need for fully integrated solutions continues to increase.

With this need in mind, Intel and Microsoft are collaborating to provide a complete set of development tools for IoT, from the edge to the cloud. With the Intel® IoT Developer Kit and Microsoft Azure* IoT Cloud Services you get a better out-of-the-box experience, easier access to data, and a faster path from prototype to product. Connecting the wide array of commercial devices becomes simpler, opening up information that was previously unavailable. Data can now be processed with advanced analytics and put to use in any number of ways.

Open to All Developers

The design of IoT should be on your terms. Using the Intel® IoT Developer Kit in conjunction with Microsoft Azure* IoT, you are in control of your design. Whether you specialize in transportation or retail, Microsoft Azure* and the Intel® IoT Developer Kit help you rapidly prototype your project by letting you use a variety of programming languages, sensor libraries, and devices in the same ecosystem.

Rapid Prototyping to Product

A difficult hurdle for developers to overcome is the integration of devices in a commercial environment. Using an Intel® NUC Gateway and Microsoft Azure* IoT, you can hook up more equipment and drive data to the cloud more easily than ever before. This integration lets you view, manage, and deliver near real-time data, so you can quickly move from prototype to product and send data to your customers. By using Microsoft Azure* services through an Intel® Gateway, you can include stream analytics, machine learning, and even notification hubs to trigger specific outcomes.

Best Developer Experience

Without fully integrated IoT tools, managing devices and data requires individual connections from the development platform to each device. With an Intel® Gateway and Microsoft Azure* IoT, you can centrally manage many devices, resulting in a simplified deployment methodology that saves time and money.

There is no longer a need to develop on a single platform or language, maintain device connections separately or worry about cross-platform compatibility. Using the resources of Microsoft Azure* IoT and the Intel® IoT Developer Program smooths out multi-language issues, making the developer experience simple and easy.

Code Samples to Get Started

Intel and Microsoft* have collaborated to make getting started easier. Across JavaScript*, Java*, and C++, there are 28 code samples available through GitHub and the Intel® Software Developer Zone for IoT Microsoft Azure* page.

Not only can you experiment with the starter project code, you can also reuse its building blocks in your own innovations. With access to reliable code, you have one less worry and can focus on the big picture.
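As a rough sketch of sending device telemetry to an Azure IoT Hub in Python (the official samples above are in JavaScript, Java, and C++; this uses the azure-iot-device package, a newer SDK than those samples, and assumes you substitute a real device connection string and sensor read):

import json
import random
import time
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder: copy the device connection string from your Azure IoT Hub.
CONNECTION_STRING = "HostName=<your-hub>;DeviceId=<your-device>;SharedAccessKey=<key>"

def read_temperature_sensor():
    # Stand-in for a real sensor read on the gateway or edge device.
    return 20.0 + random.random() * 5.0

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()
try:
    for _ in range(10):
        payload = json.dumps({"temperature_c": read_temperature_sensor()})
        client.send_message(Message(payload))  # telemetry flows to the IoT Hub
        time.sleep(5)
finally:
    client.shutdown()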

Fast-Track your Project

Versatile and performance-optimized, Intel® IoT Developer Kits, which include Intel® Edison and Intel® IoT Gateway technologies, are supported by a variety of programming environments, tools, and security options.

Take the difficulty out of managing IoT by using Microsoft Azure* IoT Cloud Services with Intel® IoT Technologies, allowing you to innovate and rapidly move from prototype to product. You can find a demonstration of a rapid path-to-product IoT solution for retail using cloud data analytics on the Intel® Software Developer Zone for IoT along with numerous other code samples and tutorials. 

Visit Intel’s Microsoft Azure* IoT resource page to download software, code samples and get your project started. 

Read more >

Devising the Theory of Economic Incentives for Cybersecurity

Technologists pursuing interesting and elegant solutions in cybersecurity frequently lack the knowledge of economics to anticipate the influences of other technologies, existing infrastructure, and technology evolution on the potential success of the technologies they are creating. Viable solutions may not … Read more >

The post Devising the Theory of Economic Incentives for Cybersecurity appeared first on Policy@Intel.

Read more >