Corerain’s CAISA stream engine transforms FPGAs into deep-learning neural network accelerators without HDL coding


With their thousands of on-chip multipliers, FPGAs provide the ample computational resources needed to implement convolutional neural networks (CNNs), recurrent neural networks (RNNs), and other deep neural networks (DNNs) based on deep-learning (DL) models. However, fully exploiting these resources has required expertise in hardware description language (HDL) programming using either Verilog or VHDL. FPGAs can be incorporated into systems as chips, designed into boards, or plugged into existing system-expansion slots as programmable accelerator cards (PACs). Developers creating CNNs, RNNs, and DNNs are most likely to take the PAC-based approach, and they need easy-to-use development tools that can quickly convert their AI/ML applications, developed using industry-standard frameworks, into fast implementations running on the selected FPGA-based PAC.

Corerain has developed a high-performance AI acceleration engine called the Custom AI Streaming Accelerator (CAISA), which is finding applications in a broad variety of markets including aerospace, finance, education, logistics, security, retail, medicine, and manufacturing. Streaming architectures can be very efficient because they eliminate instruction-control hardware. Corerain’s CAISA engine can extract as much as 90% of an FPGA’s theoretical peak performance (specifically, that of an Intel® Arria® 10 GX 1150 FPGA) without the need for HDL programming†.

The CAISA engine is scalable, so it can be sized to fit in a variety of FPGAs. This flexibility allows application designers to scale a design for performance or cost, depending on the application requirements. Using Corerain’s CAISA engine and the associated RainBuilder end-to-end tool chain, AI/ML application developers can now take advantage of FPGA-level application performance while using familiar deep-learning (DL) frameworks such as TensorFlow, Caffe, and ONNX.

Rather than being limited to just one or a few neural networks, Corerain’s CAISA architecture supports nearly all of the CNNs in use today. Corerain’s RainBuilder development tools can automatically convert models developed with popular AI/ML frameworks, including TensorFlow and Caffe, into applications that run directly and efficiently on the FPGA-based CAISA engine.

Corerain provides the CAISA engine as IP or directly incorporated into FPGA-based PACs. The RainBuilder tool chain works with either version of the CAISA engine. The available PACs include:

  • The Rainman Acceleration Card, based on an Intel Arria 10 SX 160 FPGA for front-end applications
  • The Nebula Acceleration Card, based on an Intel Arria 10 GX 1150 FPGA for edge and data center applications

CAISA-compatible PACs include boards based on the Intel Arria 10 and Intel Stratix® 10 FPGA families.

Corerain’s RainBuilder tool consists of three major modules – a compiler, runtime, and driver – as shown in Figure 1.


Figure 1: The Corerain RainBuilder tool set includes a compiler, runtime package, and drivers.


  • The RainBuilder compiler module directly supports the most popular DL frameworks, including TensorFlow and Caffe, removing the need to work directly with the underlying FPGA resources. The RainBuilder compiler automatically extracts the structure and coefficients from a DL model and converts the model into a streaming-graph intermediate representation. During conversion, the compiler optimizes the model for the CAISA streaming architecture to ensure efficient runtime acceleration.
  • The RainBuilder runtime module provides APIs that permit applications written in C or C++ to control the resulting CAISA-based model. The runtime module supports customized functions and can be extended with simulation kits and quantization modules.
  • The RainBuilder driver is a transparent firmware layer that automatically handles all the operations and I/O for the CAISA engines. This layer abstracts the hardware details of the CAISA architecture, resulting in a CPU/GPU-like software-development experience.
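The streaming-graph idea at the heart of this tool flow can be illustrated with a small sketch. The code below is a toy model, not RainBuilder’s actual intermediate representation or API — every name in it is invented for illustration. It chains a framework-style list of layer descriptions into a graph in which each node streams its output directly to its successor, mirroring how a streaming architecture passes data from stage to stage without per-instruction control.

```python
# Toy illustration (NOT RainBuilder's actual IR): convert a simple layer
# list, as a DL framework might describe it, into a streaming graph where
# each node forwards its output directly to the next node.

from dataclasses import dataclass, field

@dataclass
class StreamNode:
    op: str                                   # e.g. "conv2d", "relu", "pool"
    params: dict = field(default_factory=dict)
    successors: list = field(default_factory=list)

def build_stream_graph(layers):
    """Chain (op, params) layer descriptions into a linear streaming graph."""
    nodes = [StreamNode(op=name, params=params) for name, params in layers]
    for producer, consumer in zip(nodes, nodes[1:]):
        producer.successors.append(consumer)
    return nodes[0] if nodes else None        # entry node of the stream

# A miniature CNN described as (op, parameters) pairs:
model = [
    ("conv2d", {"filters": 16, "kernel": 3}),
    ("relu", {}),
    ("conv2d", {"filters": 32, "kernel": 3}),
    ("pool", {"size": 2}),
]

entry = build_stream_graph(model)

# Walk the stream to show the pipeline order:
node, order = entry, []
while node is not None:
    order.append(node.op)
    node = node.successors[0] if node.successors else None
print(" -> ".join(order))  # conv2d -> relu -> conv2d -> pool
```

A production compiler would of course do far more — optimizing the graph and quantizing it for the fixed hardware pipeline, as the modules above describe — but the sketch shows the basic shape of a streaming representation: data flows node to node with no instruction fetch and decode between stages.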

Corerain has built a demonstration application based on the CAISA engine that might find use in a smart classroom, for example. This application is a multi-gesture classifier that can recognize people standing up, raising a hand, resting their head on a table, or just looking around. With this AI-enabled system installed at the front of a classroom, in-lesson behavior statistics can provide feedback on teaching quality.

Here’s a short video demonstrating the application in operation:



Note: This blog is based on a new Solution Brief titled “Artificial Intelligence and Machine Learning.” For more information about the CAISA engine, please contact Corerain directly.


Legal Notice and Disclaimers


† Tests measure performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at

Cost reduction scenarios described are intended as examples of how a given Intel- based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction.

Intel, the Intel logo, Intel Xeon, Intel Arria, and Intel eASIC are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© Intel Corporation


Steven Leibson

About Steven Leibson

Steve Leibson is a Senior Content Manager at Intel. He started his career as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He’s served as Editor in Chief of EDN Magazine and Microprocessor Report and was the founding editor of Wind River’s Embedded Developers Journal. He has extensive design and marketing experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.