On April 2, Intel announced the new 10-nm family of Intel® Agilex™ FPGAs. (See “Intel Driving Data-Centric World with New 10nm Intel Agilex FPGA Family” for more information.) Among the many innovations in the new Intel Agilex FPGA device family is the inclusion of a high-bandwidth, low-latency Compute Express Link (CXL) coherent processor interface as a hard IP block. Intel spent four years developing the CXL specification, and in March it joined eight other founding members – Alibaba Group, Cisco, Dell EMC, Facebook, Google, Hewlett Packard Enterprise (HPE), Huawei, and Microsoft – in announcing a consortium to develop CXL as an open interconnect technology for accelerating computationally intensive workloads using software-driven CPUs and purpose-built hardware accelerators. More information about the CXL consortium is available here.
CXL is an open, standard interconnect that provides high-performance connectivity between one or more host processors and other subsystems or devices including accelerators, memory buffers, and smart I/O devices. CXL is based on the PCI Express (PCIe) 5.0 physical layer infrastructure and is designed to address the explosion in high-performance computational workloads through heterogeneous processing and memory systems. Applications in artificial intelligence and machine learning (AI/ML), communications and networking systems, and high-performance computing (HPC) all benefit from the performance gains enabled by CXL’s coherency and memory semantics.
The CXL interconnect protocols run on top of the PCIe 5.0 PHY, using x16, x8, and x4 link widths. CXL 1.0 debuted at a transfer rate of 32 GT/s per lane, which for a x16 link translates into roughly 64 GB/s of bandwidth in each direction. The CXL standard supports both standard PCIe devices and CXL devices on the same link.
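The per-direction bandwidth figures follow directly from the lane rate and link width. A back-of-the-envelope calculation (assuming the 128b/130b line encoding used by PCIe from 3.0 onward; the function name and structure below are illustrative, not part of any CXL API):

```python
# Back-of-the-envelope bandwidth estimate for a CXL link on the
# PCIe 5.0 PHY, assuming 128b/130b line encoding.
def link_bandwidth_gbytes(transfer_rate_gt_s=32, lanes=16):
    raw_gbits = transfer_rate_gt_s * lanes   # raw line rate, Gb/s
    payload_gbits = raw_gbits * 128 / 130    # subtract encoding overhead
    return payload_gbits / 8                 # convert Gb/s to GB/s

for width in (16, 8, 4):
    print(f"x{width}: {link_bandwidth_gbytes(lanes=width):.1f} GB/s")
```

For a x16 link this works out to about 63 GB/s of payload bandwidth per direction, which is where the "64 GB/s" headline number comes from; x8 and x4 links scale down proportionally.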
Leveraging the PCIe 5.0 infrastructure makes it easier for devices and platforms to adopt the CXL standard without having to design and validate a new high-speed PHY, to characterize a new channel, or to develop new channel-extension devices such as retimers.
The CXL standard includes three protocols:
- The CXL.io protocol is based on existing PCIe protocols and uses standard PCIe functions including device discovery, configuration, initialization, I/O virtualization, and direct memory access (DMA).
- The CXL.cache protocol defines a simple request/response mechanism that allows a connected device to cache data obtained from the host CPU’s memory. The host processor manages the coherency of data cached at the device level using cache-snoop messaging.
- The CXL.memory protocol allows host processors to directly access memory attached to other CXL devices in a cache-coherent manner, using simple load/store transactions.
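The host-managed coherency model behind CXL.cache can be illustrated with a toy sketch. All class and method names below are invented for this illustration; the real protocol operates in hardware with snoop request/response messages, not software objects. The key idea is that the host tracks which devices hold a copy of each memory line and invalidates those copies before modifying the line:

```python
# Toy illustration of host-managed device-cache coherency in the
# spirit of CXL.cache. All names here are hypothetical.

class DeviceCache:
    """An accelerator's local cache of host-memory lines."""
    def __init__(self):
        self.lines = {}                   # addr -> cached value

    def snoop_invalidate(self, addr):
        self.lines.pop(addr, None)        # drop stale copy on host snoop

class Host:
    """Host CPU: owns memory and tracks which devices cache each line."""
    def __init__(self):
        self.memory = {}                  # addr -> value
        self.sharers = {}                 # addr -> set of caching devices

    def device_read(self, dev, addr):
        # CXL.cache-style read: device caches the line, host records it.
        val = self.memory.get(addr, 0)
        dev.lines[addr] = val
        self.sharers.setdefault(addr, set()).add(dev)
        return val

    def write(self, addr, val):
        # Host write: snoop-invalidate every cached device copy first.
        for dev in self.sharers.pop(addr, set()):
            dev.snoop_invalidate(addr)
        self.memory[addr] = val

host, accel = Host(), DeviceCache()
host.write(0x1000, 42)
assert host.device_read(accel, 0x1000) == 42  # device now caches the line
host.write(0x1000, 99)                        # snoop invalidates device copy
assert 0x1000 not in accel.lines
```

This is what "the host processor manages the coherency of data cached at the device level" means in practice: the device never sees stale data after a host write, because the snoop removed its copy first.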
While CXL.io implementations can reuse much of the PCIe software infrastructure, including existing device drivers and system software, those drivers and that software will require enhancements to fully exploit the CXL.cache and CXL.memory capabilities.
The CXL Specification 1.0 document is available for download here.
More information about the new Intel Agilex FPGA family is available in the White Paper titled “Intel® Agilex™ FPGAs Deliver a Game Changing Combination for the Data Centric World.”