Insatiable Bandwidth Requirements Drive Cloud and 5G Data Centers Towards 400G


Simply stated, everything in the cloud desperately needs more bandwidth. Enterprise data centers need more bandwidth; hyperscalers, cloud builders, and HPC centers need more bandwidth; and the 5G rollout further exacerbates the cellular carriers’ need for more networking bandwidth to meet growing WAN capacity requirements. For all of these heavy users of networking equipment, adding more 100G Ethernet (100GE) ports will not meet the bandwidth challenge. Additional ports require more rack space for servers and switches and more floor space for server racks, so simply adding ports is not economical. Migrating from 100G to 400G Ethernet (400GE) ports is a much less expensive way to pump more bandwidth into data centers.

According to Cisco’s Visual Networking Index (VNI) for 2017-2022, annual IP traffic will more than triple over the five years covered by the report, as shown in Figure 1. The report predicts that global IP traffic will reach 4.8 ZB (zettabytes) per year by 2022. That’s 396 EB (exabytes) per month. (An exabyte is 10¹⁸ bytes.) In 2017, the annual run rate for global IP traffic was “only” 1.5 ZB per year, or 122 EB per month. The same Cisco VNI predicts that busy-hour Internet traffic, the busiest 60-minute period in the day, will increase by 4.8x over the same period.
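A quick calculation confirms these unit conversions; the minor differences from the report’s 396 EB and 122 EB monthly figures simply reflect rounding in the quoted annual totals. Here is the arithmetic as a few lines of Python:

```python
# Sanity-check the Cisco VNI traffic figures quoted above.
ZB = 10**21          # bytes in a zettabyte
EB = 10**18          # bytes in an exabyte

annual_2017 = 1.5 * ZB      # global IP traffic in 2017
annual_2022 = 4.8 * ZB      # predicted global IP traffic in 2022

print(f"Growth, 2017 to 2022: {annual_2022 / annual_2017:.1f}x")       # 3.2x
print(f"2017 monthly rate: {annual_2017 / 12 / EB:.0f} EB per month")  # 125
print(f"2022 monthly rate: {annual_2022 / 12 / EB:.0f} EB per month")  # 400
```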


Figure 1: Cisco’s Visual Networking Index predicts an increase in IP traffic of more than 3x from 2017 to 2022.


Although all categories of IP traffic are growing, video is driving this growth. IP video traffic includes the exploding use of peer-to-peer video services such as Apple’s FaceTime, WeChat Video Calls, Facebook Live, and Microsoft Skype; the rapidly growing number of VoD (video on demand) services, including Netflix, Amazon Video, YouTube TV, Hulu, and the just-announced Disney+; and myriad managed IP video broadcasting services delivered through MSOs (multiple system operators, including cable and satellite-broadcast providers). Cisco’s VNI predicts that IP video traffic will increase 3x over this same period and will represent 82 percent of all IP traffic by 2022, as shown in Figure 2.


Figure 2: According to Cisco’s Visual Networking Index, 82 percent of all IP traffic will carry video (the blue and green segments of each bar) by 2022.


Much of the IP video flowing through these networks will be consumed on mobile devices. The Cisco VNI predicts that 71 percent of all IP traffic will be mobile traffic by 2022. A significant portion of this mobile traffic will travel over the cellular carriers’ WANs as well as the networks that link data centers and the networks inside them. The storage servers in these data centers supply much of the information, including video, that flows through the global Internet.

Quadrupling or quintupling the number of data centers to handle the increased network traffic represents an unattractive and expensive proposition. In many cases, physically expanding existing data centers is either impossible or equally unattractive. Analysts including the Dell’Oro Group predict that public and private cloud providers and the cellular network carriers will solve their common bandwidth challenges within data centers by migrating to 400GE networks and switches.

As bandwidth requirements have grown, data-center networking architectures have evolved from a tree structure to an architecture based on leaf and spine switches tied together by high-speed optical links, as shown in Figure 3. Today, experts estimate that five times more data moves inside data centers than across the entire Internet. The simplest and most economical way to meet growing internal data-center bandwidth requirements is to migrate the optical links between leaf and spine switches to 400G.
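A simple model shows why those leaf-to-spine links dominate the bandwidth math. The Python sketch below uses hypothetical port counts (not any particular switch) to compare a leaf switch’s oversubscription ratio, the ratio of server-facing to spine-facing bandwidth, with 100GE versus 400GE uplinks:

```python
def oversubscription(server_ports, server_gbps, uplinks, uplink_gbps):
    """Ratio of server-facing (downlink) to spine-facing (uplink) bandwidth."""
    return (server_ports * server_gbps) / (uplinks * uplink_gbps)

# Hypothetical leaf switch with 48 x 25GE server-facing ports:
print(oversubscription(48, 25, 6, 100))  # 6 x 100GE uplinks -> 2.0 (oversubscribed)
print(oversubscription(48, 25, 6, 400))  # 6 x 400GE uplinks -> 0.5 (non-blocking)
```

Swapping the same six uplink ports from 100GE to 400GE takes this hypothetical leaf from 2:1 oversubscribed to fully non-blocking without adding a single switch.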


Figure 3: Current data-center architecture employs a leaf-and-spine network topology. (Image source: Intel)


Intel demonstrated 400G optical modules for these data-center applications at OFC 2018 and has provided samples of these modules to selected customers. In addition, Intel has demonstrated interoperability between the 58G PAM4 SerDes transceivers built into Intel® Stratix® 10 TX FPGAs and 400G plug-in optical modules from Intel and other vendors. A 400G optical module’s voracious bandwidth requirements can be handled by just eight 58G PAM4 SerDes transceivers. Intel Stratix 10 TX FPGAs are the first FPGAs in production with SerDes transceivers capable of bidirectional operation at 57.8 Gbps using PAM4 modulation.

The largest member of the Intel Stratix 10 TX FPGA family has sixty high-speed SerDes transceivers per device, each capable of operating at 57.8 Gbps using PAM4 modulation. All of these transceiver channels incorporate a dedicated Physical Medium Attachment (PMA) and a hardened Physical Coding Sublayer (PCS). The PMA provides the primary interface to high-speed physical channels, and the PCS handles encoding/decoding, word alignment, and other preprocessing functions before transferring data to the FPGA core fabric.

One 400GE port requires eight 50 Gbps-class SerDes transceivers, and the largest member of the Intel Stratix 10 TX FPGA family can implement as many as five 400GE ports. Consequently, Intel Stratix 10 TX FPGAs make excellent implementation vehicles for new 400GE equipment designs. (Note: These same high-speed SerDes transceivers are dual-mode transceivers and can instead be configured to operate at 28.9 Gbps using NRZ modulation, which doubles the number of available high-speed transceivers in an Intel Stratix 10 TX FPGA.)
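The lane budgeting is easy to verify; in the sketch below, the port and transceiver counts come from the text above, while the 100GE example for NRZ mode is an illustrative assumption:

```python
# Lane budgeting for a device with 60 PAM4 transceivers (57.8 Gbps each),
# which operate as 120 NRZ transceivers (28.9 Gbps each) in dual mode.
LANES_PER_400GE_PORT = 8     # IEEE 802.3bs: 8 x 53.125 Gbps PAM4 lanes

pam4_lanes = 60
ports_400ge = 5              # maximum 400GE ports cited above
lanes_used = ports_400ge * LANES_PER_400GE_PORT
print(f"{lanes_used} of {pam4_lanes} PAM4 lanes used; {pam4_lanes - lanes_used} to spare")

# In NRZ mode, the same transceivers can serve lower-rate ports instead;
# a 100GE port, for example, uses four 25.78 Gbps NRZ lanes.
nrz_lanes = pam4_lanes * 2
print(f"{nrz_lanes} NRZ lanes support up to {nrz_lanes // 4} x 100GE ports")
```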

The monolithic FPGA core in all Intel Stratix 10 devices is capable of operating at 1 GHz fMAX thanks to its HyperFlex® core architecture and Intel’s 14nm tri-gate process technology. The largest Intel Stratix 10 TX FPGA core contains 2.753 million logic elements and 5,760 variable-precision DSP blocks with hard floating-point and fixed-point computational capability, as well as multiple embedded SRAM memory blocks of various sizes.

Intel Stratix 10 FPGAs employ heterogeneous 3D System-in-Package (SiP) technology that integrates multiple die in a single package using Intel’s Embedded Multi-die Interconnect Bridge (EMIB) technology, which employs small silicon bridges to connect the die, as shown in Figure 4. For Intel Stratix 10 FPGAs, one large die in the package contains the monolithic FPGA core. Other, smaller die, called tiles, provide a variety of interfacing options for the various members of the Intel Stratix 10 device family.

Figure 4: Intel Stratix 10 FPGAs and SoCs employ Intel’s EMIB interconnect technology to bond a monolithic FPGA die to several connectivity tiles that provide various I/O features and capabilities. (Image source: Intel)


Intel Stratix 10 TX FPGAs and SoCs employ as many as five “E-tiles” to implement the devices’ many 58 Gbps PAM4 SerDes transceivers. In PAM4 mode, each transceiver channel on an E-tile supports data rates up to 57.8 Gbps and targets short- and long-reach electrical specifications for new and emerging standards, including the OIF CEI-56G LR, MR, and VSR specifications. Advanced equalization circuits incorporated into these high-speed SerDes transceivers achieve the bit error rates (BER) required by most high-speed serial protocols, and the transceivers can support legacy and high-loss backplanes at high data rates.
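As a refresher on the modulation itself: NRZ signals one bit per symbol using two amplitude levels, while PAM4 packs two bits per symbol into four levels, doubling the data rate at the same symbol (baud) rate. A minimal, illustrative encoder in Python (Gray-coded, as PAM4 serial standards typically specify so that adjacent levels differ by a single bit):

```python
# Gray-coded PAM4: each pair of bits selects one of four amplitude levels,
# so a 28.9 GBd symbol stream carries 57.8 Gbps, twice the NRZ payload.
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Encode an even-length bit list into PAM4 symbol levels."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(pam4_encode([1, 0, 0, 1, 1, 1, 0, 0]))  # [3, -1, 1, -3]: 8 bits in 4 symbols
```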

A 400GE design requires high-speed Reed-Solomon forward error correction (the RS(544,514) FEC specified by IEEE 802.3bs) and a full 400GE protocol stack. An Intel Stratix 10 TX FPGA meets these requirements by implementing the FEC and the lowest levels of the protocol stack in fixed hardware located in the E-tile, while the higher portions of the 400GE protocol stack are implemented in programmable logic within the FPGA fabric.
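The arithmetic behind that hardened FEC is worth a quick check. For 400GBASE-R, IEEE 802.3bs specifies 256B/257B transcoding followed by RS(544,514) encoding, and the two overheads together stretch the 400 Gbps MAC rate to exactly 425 Gbps on the wire, or 53.125 Gbps on each of eight PAM4 lanes:

```python
# Line-rate arithmetic for 400GBASE-R (IEEE 802.3bs).
mac_rate_gbps = 400.0     # 400GE MAC/PCS payload rate
transcode = 257 / 256     # 256B/257B transcoding overhead
rs_fec = 544 / 514        # RS(544,514): 30 parity symbols per 514-symbol message

line_rate = mac_rate_gbps * transcode * rs_fec
print(f"Aggregate line rate: {line_rate:.3f} Gbps")          # 425.000
print(f"Per-lane rate (8 lanes): {line_rate / 8:.3f} Gbps")  # 53.125
```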

Implementing a 400GE solution using an Intel Stratix 10 TX FPGA requires more than fast SerDes transceivers. The FPGA’s internal logic fabric must handle the extreme data rates of multiple high-speed data streams passing through the SerDes transceivers. For 400GE design solutions, the FPGA fabric must be able to operate at a minimum clock rate of 366 MHz. Intel Stratix 10 TX FPGAs with their performance-doubling HyperFlex core architecture and 1 GHz fMAX easily achieve this minimum clock rate.
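That clock-rate floor follows from simple division: the fabric clock must run at least as fast as the throughput divided by the datapath width. The bus widths below are illustrative assumptions; the exact minimum for a given 400GE core, such as the 366 MHz figure above, depends on its bus protocol and efficiency:

```python
# Minimum fabric clock for a given throughput and datapath width.
def min_clock_mhz(rate_gbps, bus_width_bits):
    return rate_gbps * 1e9 / bus_width_bits / 1e6

for width in (512, 1024, 1280):
    print(f"{width:5d}-bit datapath: {min_clock_mhz(400, width):6.1f} MHz minimum")
# 512 -> 781.2 MHz, 1024 -> 390.6 MHz, 1280 -> 312.5 MHz
```

Whatever the exact bus width, a fabric with 1 GHz of headroom clears these floors comfortably.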

A tested reference design for a 400GE port based on the Intel Stratix 10 TX FPGA, proven interoperable with multiple vendors’ products through testing and plug fests, is available from Intel.


Reference:

Cisco Visual Networking Index: Forecast and Trends, 2017–2022.


Where to Find More Information

For more information about Intel and Intel Stratix 10 FPGAs, visit https://www.intel.com/content/www/us/en/products/programmable/fpga/stratix-10.html



About Steven Leibson

Steve Leibson is a Senior Content Manager at Intel. He started his career as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He has served as Editor in Chief of EDN Magazine and Microprocessor Report and was the founding editor of Wind River’s Embedded Developers Journal. He has extensive design and marketing experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.