The iAbra PathWorks toolkit brings embedded AI inference and real-time video recognition to the edge with Intel® Arria® 10 FPGAs, Intel® Xeon® Gold CPUs, and Intel® Atom® processors

It’s not always easy to get data to the cloud. Multi-stream computer vision applications, for example, are extremely data-intensive and can overwhelm even 5G networks. A company named iAbra has created tools that build neural networks that run on FPGAs in real time, so that inference can be carried out at the edge in small, light, low-power embedded devices rather than in the cloud. Using what-you-see-is-what-you-get (WYSIWYG) tools, iAbra’s PathWorks toolkit creates neural networks that run on an Intel® Atom® x7-E3950 processor and an Intel® Arria® 10 FPGA in the embedded platform. The toolkit itself runs on an Intel® Xeon® Gold 6148 CPU during network creation.

From a live video stream, artificial intelligence (AI) can detect, for example, how many people are standing in a bus queue, which modes of transport people are using, and where there is flooding or road damage. In exceptional circumstances, AI can also alert emergency services if vehicles are driving against the traffic flow or if pedestrians have suddenly started running. Collecting reliable, real-time data from the streets and compressing it through AI inference makes it far easier to manage resources and to improve quality of life, productivity, and emergency response times in Smart Cities.

To be effective, these vision applications must process a huge amount of data in real time. A single HD stream generates 800 to 900 megabits of video data per second. That’s per camera. Although broadband 5G networks deliver more bandwidth and can greatly increase device density within a geographic region, broadly and densely distributed fleets of video cameras still risk overwhelming these networks. The solution to this bandwidth constraint is to perform real-time AI inference at the network edge so that only the processed, essential information is sent to the cloud. That sort of processing requires an embedded AI device that can withstand the harsh environments and resource constraints found at the edge.

iAbra has approached the problem of building AI inference into embedded devices by mimicking the human brain using FPGAs. Usually, image recognition solutions map problems to generic neural networks, such as ResNet. However, such networks are too big to fit into many FPGAs destined for embedded use. Instead, iAbra’s PathWorks toolkit constructs a new, unique neural network for each problem, which is tailored and highly optimized for the target FPGA architecture where it will run. In this case, the target architecture is an Intel Arria 10 FPGA.

“We believe the Intel Arria 10 FPGA is the most efficient part for this application today, based on our assessment of the performance per watt,” said iAbra CTO Greg Compton. “The embedded platform also incorporates the latest-generation Intel Atom processor, which provides a number of additional instructions for matrix processing over the previous generation. That makes it easier to do vector processing tasks. When we need to process the output from the neural network, we can do it faster with instructions that are better attuned to the application,” Compton explained. He added: “A lot of our customers are not from the embedded world. By using Intel Atom processors, we enable them to work within the tried-and-tested Intel® architecture stack they know.” Similarly, Compton said: “We chose the Intel Xeon Gold 6148 processor for the network creation step as much for economics as performance.”

iAbra developed this solution using OpenCL, a programming framework that makes FPGA programming more accessible through a C-like language and enables code portability across different types of processing devices. iAbra also uses Intel® Quartus® Prime software for FPGA design and development and the Intel® C++ Compiler for software development. The company has also incorporated the Intel® Math Kernel Library (Intel® MKL), which provides optimized code for mathematical operations across a range of processing platforms.
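OpenCL’s C-like style can be illustrated with the kind of data-parallel pixel loop that gets offloaded to an FPGA. The sketch below is hypothetical, not iAbra’s code: in plain C the loop is explicit, whereas in OpenCL C the loop body would become a `__kernel` function and each iteration a work-item indexed by `get_global_id(0)`.

```c
#include <stddef.h>

/* A 3-tap horizontal convolution over one row of pixels -- the kind of
 * data-parallel loop OpenCL maps onto FPGA pipelines.  In OpenCL C the
 * outer loop disappears: the body becomes a __kernel function and the
 * index i is supplied per work-item by get_global_id(0). */
void conv3(const float *in, float *out, size_t n, const float k[3]) {
    for (size_t i = 1; i + 1 < n; ++i)
        out[i] = k[0] * in[i - 1] + k[1] * in[i] + k[2] * in[i + 1];
}
```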

Compton continues:

“With Intel MKL, Intel provides highly optimized shortcuts to a lot of low-level optimizations that really help our programmer productivity. OpenCL is an intermediate language that enables us to go from the high-level WYSIWYG world to the low-level transistor bitmap world of FPGAs. We need shortcuts like these to reduce the problem domains; otherwise, developing software like ours would be too big a problem for any one organization to tackle.”
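The “shortcuts” Compton describes amount to replacing hand-written inner loops with single, tuned library calls. As a hypothetical illustration, the naive matrix product below is exactly the kind of routine Intel MKL performs in one optimized call (`cblas_sgemm()` in its CBLAS interface), with blocking and vectorization handled per microarchitecture.

```c
/* Naive C = A * B for an MxK matrix A and KxN matrix B, row-major.
 * Intel MKL's cblas_sgemm() computes the same product in a single,
 * heavily tuned call -- the kind of low-level shortcut the quote
 * above refers to.  This loop nest is for illustration only. */
void sgemm_naive(int M, int N, int K,
                 const float *A, const float *B, float *C) {
    for (int i = 0; i < M; ++i)
        for (int j = 0; j < N; ++j) {
            float acc = 0.0f;
            for (int k = 0; k < K; ++k)
                acc += A[i * K + k] * B[k * N + j];
            C[i * N + j] = acc;
        }
}
```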

iAbra participates in the Intel FPGA Partner Program and the Intel® AI Builders Program, which give the company access to the Intel® AI DevCloud. “The Intel® AI DevCloud enables us to get cloud access to the very latest hardware, which may be difficult to get hold of, such as some highly specialized Intel® Stratix® 10 FPGA boards. It gives us a place where Intel customers can come and see our framework in a controlled environment, enabling them to try before they buy. It helped us with our outreach for a Smart Cities project recently. It’s been a huge help to have Intel’s support as we refine our solution and develop our code using Intel’s frameworks and libraries. We’ve worked closely with the Intel engineers, including helping them to improve the OpenCL compiler by providing feedback as one of its advanced users,” Compton concludes.

For more information about the iAbra PathWorks toolkit, please see the new case study titled “Bringing AI Inference to the Edge.”

Intel’s silicon and software portfolio empowers our customers’ intelligent services from the cloud to the edge.

Notices & Disclaimers

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.

Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks.

Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.

Your costs and results may vary.

Intel technologies may require enabled hardware, software or service activation.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

Steven Leibson

About Steven Leibson

Steve Leibson is a Senior Content Manager at Intel. He started his career as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He’s served as Editor in Chief of EDN Magazine and Microprocessor Report and was the founding editor of Wind River’s Embedded Developers Journal. He has extensive design and marketing experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.