Expand Server Memory Capacity and Improve Server Performance with Intel® Optane™ DC Persistent Memory and Intel® Stratix® 10 FPGAs

Data center workload capacity challenges can be handled by deploying additional Intel® Xeon® processors, by parallelizing workloads across multiple CPU cores, and by using hardware accelerators such as GPGPUs, custom ASICs, and FPGAs. Intel FPGA workload accelerators complement Intel Xeon processors in the data center to help meet varied performance needs across diverse workloads. FPGAs are often the accelerator of choice due to their reprogrammable nature, their relatively low power consumption, and their cost effectiveness.

Until now, FPGA accelerators deployed in data centers have mostly conformed to a nearly universal two-tiered external memory architecture. This traditional architecture consists of main memory (Tier 1) and storage (Tier 2), usually implemented with DRAM for Tier 1 and hard disk drives (HDDs) and/or solid-state drives (SSDs) for Tier 2.

Currently, Tier 1 memory generally employs low-latency, high-bandwidth double data-rate (DDR) SDRAMs, but DRAMs increasingly face density and cost challenges as server memory demands grow. Packing more memory into a server that has only a limited number of DIMM slots requires denser DRAMs. However, denser DRAMs present another challenge: they cost more per bit, and DRAM cost already represents a significant portion of a server’s overall cost.

Consequently, it’s not economically or physically practical to achieve large Tier 1 memory capacities using DRAMs, which means that most of the data in a data center usually resides in Tier 2 storage until needed. If the server’s CPU requests data that is not in its Tier 1 memory, that data must be fetched from Tier 2 storage, which incurs a significant data-access delay because the latency performance gap between Tier 1 main memory and Tier 2 storage is very large. Tier 1 main memory latency is measured in tens of nanoseconds and Tier 2 storage latency is measured in tens of microseconds or milliseconds. That’s at least a 1,000X latency difference.

For typical server configurations, Tier 1 main-memory densities are measured in gigabytes (GB) and Tier 2 storage densities are measured in terabytes (TB), because Tier 2 storage is much less expensive per bit than Tier 1 memory. That’s another 1,000X difference between Tier 1 memory and Tier 2 storage. Typical characteristics of Tier 2 storage include relatively low bandwidth, long access latency, low cost per bit, and high capacity relative to Tier 1 memory. Tier 2 storage differs from Tier 1 memory in another significant way: Tier 2 storage is persistent while Tier 1 memory based on DRAM is usually volatile. Tier 1 memory loses all of its stored data when the system power switches off, either due to a planned shutdown or due to a power failure. Tier 2 storage generally doesn’t forget.
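To put those two gaps side by side, here’s a quick back-of-the-envelope calculation. The specific figures below are illustrative assumptions chosen only to match the orders of magnitude cited above (tens of nanoseconds versus tens of microseconds to milliseconds, and gigabytes versus terabytes); they are not measurements of any particular system:

```python
# Back-of-the-envelope comparison of the Tier 1 / Tier 2 gaps described above.
# The figures are illustrative assumptions matching the orders of magnitude
# cited in the text, not measurements of a specific server.

TIER1_LATENCY_S = 50e-9        # DRAM main memory: tens of nanoseconds
TIER2_LATENCY_S = 100e-6       # NVMe SSD storage: ~100 microseconds (HDDs reach milliseconds)

TIER1_CAPACITY_BYTES = 256e9   # a few hundred GB of DRAM per server
TIER2_CAPACITY_BYTES = 256e12  # hundreds of TB of attached storage

latency_gap = TIER2_LATENCY_S / TIER1_LATENCY_S
capacity_gap = TIER2_CAPACITY_BYTES / TIER1_CAPACITY_BYTES

print(f"Latency gap (Tier 2 vs. Tier 1):  ~{latency_gap:,.0f}X")   # ~2,000X
print(f"Capacity gap (Tier 2 vs. Tier 1): ~{capacity_gap:,.0f}X")  # ~1,000X
```

Even with generous assumptions for the SSD, the latency gap stays well above three orders of magnitude, which is exactly the gap an intermediate memory tier is meant to bridge.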

Intel has introduced a new memory category to address the bandwidth, latency, and capacity gaps between Tier 1 memory and Tier 2 storage. That new type of memory is Intel® Optane™ DC persistent memory, which creates another, intermediate tier in the memory hierarchy. You can use intermediate-tier memory based on Intel Optane DC persistent memory modules to resolve a number of computing performance challenges.

Servers equipped with Intel Optane DC persistent memory modules can have much greater capacity to hold in-memory databases without incurring the prohibitive price tags of servers that rely exclusively on all-DRAM Tier 1 memory to store those databases. Many mission-critical databases and other enterprise applications keep large amounts of data in working memory, and these databases can also work well in non-volatile, fast-access, intermediate-tier memory based on Intel Optane memory technology. Memory-bound workloads benefit from Intel Optane DC persistent memory with its large capacity, high endurance, and greater bandwidth compared to NAND SSDs.
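As a concrete illustration of the programming model for a byte-addressable persistent tier, here is a minimal sketch that memory-maps a file assumed to reside on persistent-memory-backed storage and writes a small record directly into the mapped region. The /mnt/pmem0 mount point, file name, and record layout are hypothetical; production code would typically use a dedicated persistent memory library to handle cache-line flushing and crash consistency:

```python
# Minimal sketch: writing a record into byte-addressable persistent memory
# exposed to the application as a memory-mapped file. The mount point
# /mnt/pmem0 and the 4 KiB region size are illustrative assumptions.
import mmap
import os
import struct

PMEM_FILE = "/mnt/pmem0/example.dat"   # hypothetical file on a persistent-memory-backed filesystem
REGION_SIZE = 4096

fd = os.open(PMEM_FILE, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, REGION_SIZE)

with mmap.mmap(fd, REGION_SIZE) as region:
    # Store a key/value style record directly in the mapped region.
    key, value = 42, 3.14159
    struct.pack_into("<Qd", region, 0, key, value)
    region.flush()                      # ask the OS to make the update durable

os.close(fd)
```

The point of the sketch is the access pattern: the application loads and stores data in place instead of issuing block I/O to Tier 2 storage, which is what makes the intermediate tier attractive for in-memory databases.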

Last week, Intel announced shipments of the new Intel Stratix 10 DX FPGAs. These new devices combine a soft-IP memory controller capable of controlling Intel Optane DC persistent memory modules with Intel® UPI and PCI-SIG compatible PCIe Gen4 x16 interfaces that can be used as high-speed connections to selected Intel® Xeon® Scalable processors and other CPUs. Consequently, Intel Stratix 10 DX FPGAs make an excellent foundation for developing memory-expansion subsystems for such CPUs.
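For a rough sense of what “high-speed” means on the host side, here is a small calculation of the raw per-direction bandwidth of a PCIe Gen4 x16 link, based on the published 16 GT/s per-lane signaling rate and 128b/130b encoding. Achievable throughput in a real design will be lower once packet and protocol overheads are included:

```python
# Rough per-direction bandwidth of a PCIe Gen4 x16 link, using the published
# 16 GT/s per-lane signaling rate and 128b/130b encoding. Real throughput is
# lower once packet and protocol overheads are accounted for.
LANES = 16
TRANSFERS_PER_S = 16e9          # PCIe Gen4: 16 GT/s per lane
ENCODING_EFFICIENCY = 128 / 130

bytes_per_s = LANES * TRANSFERS_PER_S * ENCODING_EFFICIENCY / 8
print(f"PCIe Gen4 x16 raw bandwidth: ~{bytes_per_s / 1e9:.1f} GB/s per direction")  # ~31.5 GB/s
```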


For more information about using Intel® Stratix® 10 DX FPGAs to control Intel Optane DC persistent memory, please see the Intel White Paper titled “Expand Server Memory Capacity and Improve Server Performance with Intel® Stratix® 10 FPGAs and Intel Optane™ DC Persistent Memory.”


Legal Notices and Disclaimers:

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.

Results have been estimated or simulated using internal Intel analysis, architecture simulation and modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance. For more complete information, visit www.intel.com/benchmarks. Intel does not control or audit third-party benchmark data or the websites referenced in this document. You should visit the referenced website and confirm whether referenced data are accurate.

Intel, the Intel logo, Intel Xeon, Intel Optane, and Intel Stratix are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

Other names and brands may be claimed as the property of others.


Steven Leibson

About Steven Leibson

Be sure to add the Intel Logic and Power Group to your LinkedIn groups.

Steve Leibson is a Senior Content Manager at Intel. He started his career as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He’s served as Editor in Chief of EDN Magazine and Microprocessor Report and was the founding editor of Wind River’s Embedded Developers Journal. He has extensive design and marketing experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.