FPGA accelerates face recognition while protecting inference model through data encryption

The video below, made at the recent Embedded Vision Summit, demonstrates an Intel® FPGA performing face recognition with a machine learning (ML) inference model. A CPU downloads the model to the FPGA in encrypted form, and the model remains encrypted while it sits in the FPGA’s external SDRAM. Decryption occurs only as the FPGA reads the model from SDRAM into its on-chip SRAM, thus protecting the IP embodied in the face-detection model.

[Embedded video: Intel® FPGA face-recognition demo from the Embedded Vision Summit]

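The demo does not disclose which cipher or key-management scheme the design uses, but the host side of the flow, encrypting the model once and handing only ciphertext to the board, can be sketched in a few lines. The snippet below is a minimal illustration, assuming AES-GCM via the Python cryptography package; the model filename, the key handling, and the download_to_fpga() placeholder are hypothetical stand-ins for whatever the real driver API provides.

```python
# Hypothetical sketch: encrypt an inference model before handing it to the FPGA.
# The demo's actual cipher, key provisioning, and driver calls are not public;
# AES-GCM and download_to_fpga() are assumptions for illustration only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_model(model_bytes: bytes, key: bytes) -> bytes:
    """Encrypt the model blob; only this ciphertext ever reaches external SDRAM."""
    nonce = os.urandom(12)                       # 96-bit nonce, standard for GCM
    ciphertext = AESGCM(key).encrypt(nonce, model_bytes, None)
    return nonce + ciphertext                    # prepend nonce so the decryptor can find it


def main() -> None:
    key = AESGCM.generate_key(bit_length=256)    # in practice the key would live inside the FPGA
    with open("face_detect_model.bin", "rb") as f:   # hypothetical model file
        model = f.read()

    encrypted = encrypt_model(model, key)

    # The CPU pushes only the encrypted blob to the board; decryption happens
    # on-chip as the FPGA streams the model from SDRAM into its internal SRAM.
    # download_to_fpga(encrypted)                # placeholder for the real driver call


if __name__ == "__main__":
    main()
```

The property that matters here is that plaintext model data never leaves the chip: the CPU and the external SDRAM see only the encrypted blob, and the FPGA decrypts on the fly as it fills its internal SRAM.
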
For more information about Intel Vision products, click here.

You might also want to download the White Paper titled “Unleashing the Power of Intel® Vision Products for Facial Recognition using Deep Learning Inference Acceleration.”

Categories: Acceleration, AI/ML, Arria, Vision
Steven Leibson

About Steven Leibson

Be sure to add the Intel Logic and Power Group to your LinkedIn groups. Steve Leibson is a Senior Content Manager at Intel. He started his career as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He’s served as Editor in Chief of EDN Magazine and Microprocessor Report and was the founding editor of Wind River’s Embedded Developers Journal. He has extensive design and marketing experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.