InAccel’s Accelerated ML suite boosts Spark ML performance by as much as 7x on FPGA-based Alibaba Cloud f1 instances

InAccel has developed an integrated set of machine-learning (ML) tools, called the Accelerated ML suite, that delivers as much as a 7x performance increase for applications such as logistic regression, K-means clustering, and gradient-boosted trees (GBTs) using XGBoost. The suite leverages the FPGA acceleration available in the Alibaba Cloud's f1 instance, which pairs Intel® Xeon® processors with Intel Arria® 10 FPGAs. The Accelerated ML suite's IP cores for these workloads are instantiated on the Intel Arria 10 FPGAs in the Alibaba Cloud f1 instances.

According to InAccel, its Accelerated ML suite offers more than a 7x kernel performance improvement for ML training on logistic regression and a 5x overall system performance improvement, which includes the time needed to transfer data and to perform the CPU-based data preprocessing. For K-means clustering, the InAccel IP cores provide a 6.4x kernel performance improvement for ML training and a 4.3x overall system performance improvement when the data preprocessing and the communications between the f1 instance's host Intel Xeon CPUs and its Intel Arria 10 FPGAs are included.
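To see why K-means training benefits so much from hardware acceleration, it helps to look at the kernel itself: each Lloyd's iteration computes the distance from every point to every centroid, a highly parallel workload that maps well onto an FPGA. The sketch below is a minimal pure-Python illustration of one such iteration, not InAccel's implementation; the data points are made up for the example.

```python
# One Lloyd's iteration of K-means: assign points to nearest centroids,
# then recompute each centroid as the mean of its assigned points.
# Illustrative only -- not InAccel's FPGA kernel.

def kmeans_step(points, centroids):
    k = len(centroids)
    clusters = [[] for _ in range(k)]
    assignments = []
    for p in points:
        # Squared-Euclidean distances: the compute-intensive inner loop
        # that an FPGA kernel can parallelize across points and centroids.
        dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        idx = dists.index(min(dists))
        assignments.append(idx)
        clusters[idx].append(p)
    new_centroids = []
    for idx, members in enumerate(clusters):
        if members:  # average the member points coordinate-wise
            dim = len(members[0])
            new_centroids.append(
                [sum(m[d] for m in members) / len(members) for d in range(dim)]
            )
        else:        # keep an empty cluster's centroid unchanged
            new_centroids.append(list(centroids[idx]))
    return assignments, new_centroids

points = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
assignments, centroids = kmeans_step(points, [(0.0, 0.0), (5.0, 5.0)])
```

The distance loop is O(points × centroids × dimensions) per iteration, which is exactly the kind of regular, data-parallel arithmetic that offloading to an FPGA kernel speeds up; the host CPU remains responsible for the data preprocessing and transfers counted in the overall system numbers above.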

The net result is faster ML model training.

InAccel also provides an FPGA resource manager that allows you to scale the performance of these applications by using multiple FPGAs per server and multiple servers with FPGAs. InAccel’s FPGA resource manager also allows you to share the Alibaba Cloud f1 instance’s FPGA resources among multiple applications using virtualization.
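The general pattern behind such a resource manager is a shared pool of accelerator devices that applications acquire, use, and release, so that many applications can time-share a few FPGAs. The following is a generic sketch of that pattern, assuming a simple blocking device pool; it is not InAccel's resource-manager API, and the integer device IDs stand in for real FPGA handles.

```python
# Generic device-pool sketch: multiple application threads time-share a
# fixed set of FPGA devices. Not InAccel's API -- the pool, device IDs,
# and job bodies below are hypothetical placeholders.
import queue
import threading

class DevicePool:
    def __init__(self, device_ids):
        self._free = queue.Queue()
        for dev in device_ids:
            self._free.put(dev)

    def acquire(self):
        # Blocks until a device is free, so applications queue up
        # instead of failing when every FPGA is busy.
        return self._free.get()

    def release(self, dev):
        self._free.put(dev)

pool = DevicePool([0, 1])          # e.g. a server with two FPGA cards
results = []

def run_job(job_id):
    dev = pool.acquire()
    try:
        results.append((job_id, dev))  # stand-in for launching a kernel
    finally:
        pool.release(dev)              # always return the device

threads = [threading.Thread(target=run_job, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

A real FPGA manager adds scheduling policy, bitstream management, and cross-server coordination on top of this basic acquire/release discipline, but the core idea of virtualizing a small device pool across many applications is the same.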

Additional details are in the just-published InAccel blog.


Legal Notice and Disclaimers

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer.

Intel does not control or audit third-party data. You should review this content, consult other sources, and confirm whether referenced data are accurate.

Cost reduction scenarios described are intended as examples of how a given Intel- based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction.

Intel, the Intel logo, Intel Xeon, Intel Arria, and Intel eASIC are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.


Published in categories: Acceleration, AI/ML, Arria
Steven Leibson

About Steven Leibson

Be sure to add the Intel Logic and Power Group to your LinkedIn groups. Steve Leibson is a Senior Content Manager at Intel. He started his career as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He’s served as Editor in Chief of EDN Magazine and Microprocessor Report and was the founding editor of Wind River’s Embedded Developers Journal. He has extensive design and marketing experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.