Last month, Intel announced the company’s first AI-optimized FPGA, the Intel® Stratix® 10 NX FPGA. (See “Intel has just announced its first AI-optimized FPGA – the Intel® Stratix® 10 NX FPGA – to address the rapid increase in AI model complexity.”) Earlier this week, The Linley Newsletter and EEJournal provided their perspectives on this announcement in articles written by Linley Gwennap and Kevin Morris, respectively.
Gwennap’s article in the Linley Newsletter, titled “Stratix 10 NX Adds AI Blocks,” said:
“Intel is determined to cover all aspects of the AI-accelerator market. Its latest offering for neural networks is the Stratix 10 NX, a new family in its 14nm FPGA line that revamps the DSP block to improve AI performance.”
The Linley Group also published a much longer version of this article with additional details and analysis in its Microprocessor Report newsletter (paid subscription required).
Morris’ longer article on the EEJournal.com website, titled “Intel Announces Stratix 10 NX,” said:
“In the case of the new Stratix 10 NX [FPGA], the company is going after the AI inference market primarily via new AI-optimized arithmetic blocks called AI Tensor Blocks. These blocks would have previously been called ‘DSP’ blocks, but the new versions contain dense arrays of lower-precision multipliers typically used for AI model arithmetic.”
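To make the idea of “lower-precision multipliers typically used for AI model arithmetic” concrete, the sketch below shows the kind of operation such dense multiplier arrays accelerate: an INT8 multiply-accumulate with a wide INT32 accumulator, the core primitive of quantized neural-network inference. This is purely an illustration of the arithmetic in NumPy, not Intel’s hardware design or any oneAPI call.

```python
# Illustrative sketch (not Intel's implementation): the low-precision
# multiply-accumulate that arrays of INT8 multipliers perform in
# hardware -- products of narrow INT8 operands summed into a wide
# INT32 accumulator, as used in quantized inference.
import numpy as np

def int8_dot(a: np.ndarray, b: np.ndarray) -> np.int32:
    """Multiply-accumulate two INT8 vectors into an INT32 result."""
    assert a.dtype == np.int8 and b.dtype == np.int8
    # Widen before multiplying so the products and the running sum
    # cannot overflow the narrow INT8 range.
    return np.sum(a.astype(np.int32) * b.astype(np.int32), dtype=np.int32)

rng = np.random.default_rng(0)
activations = rng.integers(-128, 128, size=64, dtype=np.int8)
weights = rng.integers(-128, 128, size=64, dtype=np.int8)
print(int8_dot(activations, weights))
```

The design point the quote highlights is exactly the widening step: keeping operands at 8 bits packs many more multipliers into the same silicon area, while the wide accumulator preserves accuracy across long dot products.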
Morris’ article also discusses a larger aspect of Intel’s work on AI, specifically the Intel® oneAPI development project, which is aimed at creating a standards-based, unified programming model for CPUs, GPUs, FPGAs, and other hardware accelerators. Morris writes:
“…Intel seems to be hanging their hat on their ambitious ‘oneAPI’ which is a standards-based, unified programming model that aims to facilitate integration of heterogeneous Xeon-based platforms with various accelerators such as FPGAs. Intel’s approach makes sense, given the breadth of their offering…”
For more information, please see both of these recent articles.
Intel’s silicon and software portfolio empowers our customers’ intelligent services from the cloud to the edge.
Notices & Disclaimers
Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.
Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Intel technologies may require enabled hardware, software or service activation.
No product or component can be absolutely secure.
Your costs and results may vary.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.