Harnessing the Power of a Heterogeneous Computing Future

The relentless growth of data and the expanding diversity of workloads needed to solve previously unsolvable problems demand architectural diversity. Intel is committed to delivering a heterogeneous mix of scalar, vector, matrix, and spatial architectures deployed in CPU, GPU, specialized accelerator, and FPGA sockets. This gives our customers the ability to use the most appropriate type of compute where it’s needed. Combined with scalable interconnect and a single software abstraction, Intel’s multiple architectures deliver leadership across the compute spectrum to power the data-centric world.

To say that the world has changed since we talked about the revolution of compute democratization at the 2019 Intel HPC Developer Conference would be an understatement. In the face of this upheaval, the need for exascale computing and connectivity for all has only grown. As we connect and work remotely, innovate to fight COVID-19, and unite to end social injustice, advances in computing promise to pave the path forward through today’s problems.

The range of current computing applications is incredibly varied and rapidly growing, driven by the widespread use of data analytics, edge computing, and artificial intelligence. As data-centric workloads continue to become larger and more diverse, architectures must evolve to process them efficiently. To enable diverse applications to fully exploit the capabilities of exascale computing systems, Intel sees chips of the future as a heterogeneous mix of Scalar, Vector, Matrix, and Spatial (SVMS) architectures deployed in CPU, GPU, specialized accelerator, and FPGA (XPU) sockets. We changed the world with general-purpose compute, and we are continuing our computing journey with a broader set of building blocks to harness the power of heterogeneous computing and capitalize on the limitless potential of the world’s data.

What is SVMS?


Compute architectures can broadly be categorized into Scalar, Vector, Matrix, and Spatial (SVMS).

  • Scalar architecture typically refers to the kind of workload that runs best on a CPU, where a single stream of instructions executes at a rate driven largely by CPU clock cycles. From system boot and productivity applications to advanced workloads like cryptography and AI, scalar-based CPUs deliver consistent, predictable performance across a wide range of usages.

  • Vector architecture is optimal for workloads that can be decomposed into vectors of instructions or vectors of data elements. GPUs and VPUs deliver vector-based parallel processing to accelerate graphics rendering for gaming, rich media, analytics, and deep learning training and inference. By scaling vector architectures across the client, the data center, and the edge, we can take parallel-processing performance from gigaflops to teraflops, petaflops, and exaflops. (A short code sketch contrasting the scalar and vector styles follows this list.)

  • Matrix architecture derives its name from a common operation in AI workloads: matrix multiplication. While other architectures are capable of executing matrix-multiply code, ASICs have traditionally achieved the highest performance on the operations needed for AI inferencing and training, including matrix multiplication.

  • Spatial architecture is most often associated with FPGAs. Here, data flows through the chip, and the computation performed on each data element depends on its physical location in the device and on the specific data-transformation algorithm programmed into the FPGA.
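To make the scalar and vector styles above concrete, here is a minimal, hypothetical sketch that writes the same y = a*x + y update first as a scalar loop and then with Intel® AVX-512 intrinsics. It assumes an AVX-512-capable CPU, a compiler that provides the immintrin.h header (built with a flag such as -mavx512f), and an array length that is a multiple of 16; the function names are illustrative only and do not come from this article.

```cpp
// Hypothetical illustration: the same y = a*x + y update in scalar and
// vector (SIMD) form. Assumes AVX-512 hardware and n a multiple of 16.
#include <immintrin.h>
#include <cstddef>

// Scalar style: one instruction stream, one element per loop iteration.
void saxpy_scalar(float a, const float* x, float* y, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// Vector style: each AVX-512 instruction operates on 16 packed floats.
void saxpy_avx512(float a, const float* x, float* y, std::size_t n) {
    const __m512 va = _mm512_set1_ps(a);        // broadcast a into all 16 lanes
    for (std::size_t i = 0; i < n; i += 16) {
        __m512 vx = _mm512_loadu_ps(x + i);     // load 16 elements of x
        __m512 vy = _mm512_loadu_ps(y + i);     // load 16 elements of y
        vy = _mm512_fmadd_ps(va, vx, vy);       // fused multiply-add: a*x + y
        _mm512_storeu_ps(y + i, vy);            // store 16 results at once
    }
}
```

The two functions compute the same result; the difference is how much work each instruction carries, which is exactly the distinction between the scalar and vector categories.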


While SVMS architectures are instantiated in different product types such as CPUs and GPUs, there is often overlap and synergy in what each of these architectures can accomplish. For instance, our new Xe GPU architecture can perform both vector and matrix operations, and many of our modern CPUs have been optimized to perform scalar operations through our general-purpose compute microarchitectures as well as vector and matrix operations through instruction-set extensions such as Intel® Advanced Vector Extensions 512 (AVX-512) and Intel® Deep Learning Boost (VNNI instructions). We believe that the breadth of options provided by Intel enables the widest architectural diversity in the industry, allowing custom-fit solutions for the largest number of workloads.
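As a hedged illustration of the CPU matrix capability mentioned above, the hypothetical sketch below uses the AVX512-VNNI intrinsic _mm512_dpbusd_epi32, the instruction behind Intel® Deep Learning Boost, to accumulate an int8 dot product of the kind that dominates quantized AI inference. It assumes a CPU with AVX512-VNNI support and a length that is a multiple of 64; the function name is illustrative only.

```cpp
// Hypothetical sketch: an int8 dot product accumulated with AVX512-VNNI
// (Intel Deep Learning Boost). Each _mm512_dpbusd_epi32 multiplies 64 pairs
// of 8-bit values and adds the products into 16 32-bit accumulators in one
// instruction, the multiply-accumulate pattern at the heart of matrix math.
// Assumes AVX512-VNNI hardware and n a multiple of 64.
#include <immintrin.h>
#include <cstddef>
#include <cstdint>

std::int32_t dot_u8s8_vnni(const std::uint8_t* a, const std::int8_t* b, std::size_t n) {
    __m512i acc = _mm512_setzero_si512();
    for (std::size_t i = 0; i < n; i += 64) {
        __m512i va = _mm512_loadu_si512(a + i);  // 64 unsigned 8-bit values
        __m512i vb = _mm512_loadu_si512(b + i);  // 64 signed 8-bit values
        acc = _mm512_dpbusd_epi32(acc, va, vb);  // multiply pairs, accumulate into int32 lanes
    }
    return _mm512_reduce_add_epi32(acc);         // horizontal sum of the 16 partial sums
}
```

A full matrix multiply tiles this inner product over rows and columns; dedicated matrix engines and ASICs take the same idea further in hardware.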

While diversity of hardware creates the potential for choice and unlocks new value through heterogeneous computing, significant barriers to adoption exist that drive up the cost of developing, deploying, and maintaining software across the spectrum of hardware choices. Proprietary and closed software, common in the early stages of technology adoption, eventually creates enterprise risk of single vendor investments. Additionally, applications optimized for a single architectural choice may offer sub-optimal performance or value and require a heavy-lift porting investment for use on innovative hardware. A consistent platform that enables open innovation and choice is needed to maximize the value of SVMS.

oneAPI to deliver performance and productivity across SVMS architectures


A key focus of the journey into the inevitable heterogeneous computing future, with the promise of exascale for everyone, is software. All of us at Intel are working tirelessly to enable this future through full-stack solutions that are powered by highly performant, open, and productive software.

oneAPI is a cross-industry, open, standards-based unified programming model that delivers performance across multiple architectures. It promises developers a common XPU programming experience while leaving no transistor behind. oneAPI aims to enhance developer productivity by allowing code reuse across both architectures and vendors, while eliminating the need to work with separate code bases, multiple programming languages, and different tools and workflows.
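As a rough sketch of what that common programming experience can look like, the hypothetical DPC++/SYCL snippet below writes one data-parallel kernel and lets the runtime place it on whichever device is available, whether a CPU, a GPU, or another accelerator. It assumes a SYCL 2020 implementation such as the Intel oneAPI DPC++/C++ Compiler; the array size and names are illustrative only.

```cpp
// Hypothetical DPC++/SYCL sketch: one kernel source, any available device.
// Assumes a SYCL 2020 compiler (e.g., the Intel oneAPI DPC++/C++ Compiler).
#include <sycl/sycl.hpp>
#include <cstddef>
#include <cstdio>

int main() {
    constexpr std::size_t n = 1024;

    // The default selector picks a GPU, CPU, or other accelerator at run time;
    // the kernel below is written once and is not tied to any of them.
    sycl::queue q{sycl::default_selector_v};
    std::printf("Running on: %s\n",
                q.get_device().get_info<sycl::info::device::name>().c_str());

    // Unified shared memory visible to both host and device.
    float* data = sycl::malloc_shared<float>(n, q);
    for (std::size_t i = 0; i < n; ++i) data[i] = static_cast<float>(i);

    // The same data-parallel kernel can be dispatched to a CPU, a GPU, or
    // (with the appropriate toolchain) an FPGA sitting behind the queue.
    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        data[i] = data[i] * 2.0f;
    }).wait();

    std::printf("data[10] = %f\n", data[10]);
    sycl::free(data, q);
    return 0;
}
```

Recompiling the same source for a different device, rather than rewriting it, is the productivity gain the initiative aims for.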

The oneAPI initiative encourages community and industry collaboration on the open oneAPI specification and on compatible oneAPI implementations across the ecosystem. A growing number of companies and research organizations, including OEMs, ISVs, CSPs, AI innovators, and universities, endorse the oneAPI concept and are participating in the initiative. Two beta implementations of the specification are already available: the Intel oneAPI product for Intel hardware and Codeplay’s implementation for NVIDIA GPUs.

First announced at Intel Architecture Day in December 2018, oneAPI shows Intel’s consistent commitment to software and builds on our rich heritage of delivering class-leading software products to developers in HPC, AI, and other domains. It extends our robust software stack across architectures, builds on the decades of software engineering work that went into our various tools and libraries, and continues our “developer-first” approach of listening to the millions of developers in our ecosystem.

Looking Ahead


We have not let our commitment to workload-optimized architectural diversity end with SVMS. We are planning for the architectures of the future, such as quantum and neuromorphic, with research and development in next-generation computing. Our goal is to continue to offer the broadest set of architectures to address the widest set of workload compute problems in the world. We also intend to build on the early momentum of oneAPI and advance the initiative further over the coming years, in close collaboration with industry partners and thought leaders. We see oneAPI as a cross-architecture industry standard that will have multi-vendor adoption.

We welcome you to join us in the quest for exascale computing and connectivity for all!

Written by Jeff McVeigh and Srinivas Chennupaty

Jeffrey (Jeff) S. McVeigh
Vice President, Intel Architecture, Graphics and Software
General Manager, Data Center XPU Products & Solutions

Srinivas Chennupaty
Vice President & CTO, Intel Architecture, Graphics and Software
General Manager, XPU Architecture Technology Roadmap

Notices and Disclaimers
All product plans and roadmaps are subject to change without notice.
Intel technologies may require enabled hardware, software or service activation.
No product or component can be absolutely secure.
Your costs and results may vary.