A quick discussion on computer vision, AI inference at scale, and the latest release of our Intel® Distribution of OpenVINO™ toolkit

We recently caught up with Soren Knudsen, OpenVINO product manager, to discuss the latest release of OpenVINO and get his perspective on the AI market and the role that OpenVINO is playing.

What have your team and Intel learned since OpenVINO launched in 2018?
More than anything, that it was a welcome addition to the ecosystem, which is very exciting! Our hypothesis was that the available tool sets were limiting customers who wanted an inference solution that works from the data center to the edge, across multiple hardware backends, but who found existing one-size-fits-all products either too power-hungry and expensive or lacking in flexibility and scalability. OpenVINO met that need for the developer community, helping them unleash AI in a bigger way by letting them optimize and deploy trained neural networks on their choice of Intel systems.
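To make that workflow concrete, here is a minimal sketch of loading a trained network that has already been converted to OpenVINO's Intermediate Representation and running it on a chosen Intel device. The file names, input data, and device string are placeholders, not from the interview, and the exact Python API varies by toolkit release.

```python
# Minimal sketch (illustrative only): run an IR-converted network on an Intel device.
# Assumes the model was already converted with the Model Optimizer; file names,
# input data, and the device string below are placeholders.
import numpy as np
from openvino.inference_engine import IECore, IENetwork

ie = IECore()
net = IENetwork(model="model.xml", weights="model.bin")  # IR topology + weights
input_blob = next(iter(net.inputs))
output_blob = next(iter(net.outputs))

# Swap device_name for "GPU", "MYRIAD", "HDDL", etc. to retarget the same model.
exec_net = ie.load_network(network=net, device_name="CPU")

# Dummy input matching the network's expected NCHW shape.
n, c, h, w = net.inputs[input_blob].shape
frame = np.random.rand(n, c, h, w).astype(np.float32)

result = exec_net.infer(inputs={input_blob: frame})
print(result[output_blob].shape)
```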

The immediate adoption and engagement we have seen tell me that we identified a clear gap in the tools available to our AI developer community. Through the Intel® AI: In Production program, our ecosystem partners have taken OpenVINO into production across multiple industries, with solutions including Age-Related Macular Degeneration Detection by IEI and QNAP, Public Safety for the World Cup Stadium by Axxonsoft, and Real-time Theft Detection for Retail Stores by Advantech. As they’ve brought solutions to market, they’ve simultaneously brought us insight and feedback on how to improve the OpenVINO toolkit moving forward.

For example, we had developers who loved the OpenVINO toolkit and were embracing deep learning but needed less of a learning curve to get started. For them, our upcoming release includes a GUI front end called the Deep Learning Workbench. It lets developers easily profile and visualize the throughput, latency, and accuracy of deep learning models on various architectures, and fine-tune performance with techniques such as INT8 quantization or Winograd convolutions, which is essential for edge devices that may already be oversubscribed with heavy workloads.
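The Workbench does this through a GUI, but a rough, hand-rolled sketch of the kind of latency and throughput measurement it automates might look like the following. The model paths, iteration count, and the assumption that an INT8-calibrated IR already exists are all illustrative, not toolkit defaults.

```python
# Rough sketch of the latency/throughput profiling the Deep Learning Workbench
# automates and visualizes. Paths and iteration counts are illustrative only.
import time
import numpy as np
from openvino.inference_engine import IECore, IENetwork

def profile(model_xml, model_bin, device="CPU", iterations=100):
    ie = IECore()
    net = IENetwork(model=model_xml, weights=model_bin)
    input_blob = next(iter(net.inputs))
    exec_net = ie.load_network(network=net, device_name=device)

    # Dummy input matching the network's NCHW shape.
    n, c, h, w = net.inputs[input_blob].shape
    dummy = np.random.rand(n, c, h, w).astype(np.float32)

    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        exec_net.infer(inputs={input_blob: dummy})
        latencies.append(time.perf_counter() - start)

    avg_latency_ms = 1000 * sum(latencies) / len(latencies)
    throughput_fps = n * iterations / sum(latencies)
    return avg_latency_ms, throughput_fps

# Compare an FP32 IR against a hypothetical INT8-calibrated counterpart.
for tag, xml, weights in [("FP32", "model_fp32.xml", "model_fp32.bin"),
                          ("INT8", "model_int8.xml", "model_int8.bin")]:
    latency_ms, fps = profile(xml, weights)
    print(f"{tag}: {latency_ms:.2f} ms/frame, {fps:.1f} FPS")
```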

What other improvements are on tap for OpenVINO as you look ahead?
We’re constantly looking for ways to improve performance and usability, and that will continue. I’m also obsessed with helping the industry mature and with strengthening the overall OpenVINO-based solution portfolio available to customers. We have customers looking to implement AI solutions in everything from widgets to X-ray machines to industrial robots. The pace and competition to bring solutions to market mean our customers have to make quick decisions based on infrastructure realities. However, like any project, you need to keep that end state in mind as you build your development journey. We can help the industry break down the barriers that stand in the way, accelerate that journey, and avoid costly miscues.

We’ve had a series of recent updates to OpenVINO, including the most recent Release 3 (“2019 R3”). What are the updates customers should focus on as they update their toolkits?
First, I’m looking forward to greater adoption of the custom-layer feature we added in R2 for Movidius processors. This will help partners better leverage our HDDL product for more custom builds.
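For illustration only, supplying a custom-kernel description to a Movidius/HDDL target might look roughly like the snippet below. The VPU_CUSTOM_LAYERS configuration key and the XML kernel-description file are assumptions about the plugin's mechanism, not confirmed by the article, so check the release documentation for the exact approach in your version.

```python
# Hedged sketch: pointing the VPU plugin at a custom-layer kernel description
# before loading a network onto a Movidius/HDDL target. The configuration key
# and file name are assumptions, not confirmed by the article.
from openvino.inference_engine import IECore, IENetwork

ie = IECore()
# Assumed mechanism: an XML describing custom kernels for unsupported layers.
ie.set_config({"VPU_CUSTOM_LAYERS": "custom_kernels.xml"}, "HDDL")

net = IENetwork(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="HDDL")
```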
Also, as mentioned before, the Deep Learning Workbench in R3 has the potential to unleash some of our best developer partners into the world of deep learning. It is easy to use and a great tool for running a barrage of what-if scenarios to maximize performance and throughput.
Lastly, performance just keeps getting better, and with our latest release we see another step forward. For anyone looking at edge inference, examining the end state first to understand the performance variables is critical. Wattage, frame rate, and cost are just a few of the factors we look at when we talk about performance.
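As a purely illustrative way of weighing those factors side by side, a comparison might be sketched like this; the option names and sample values are hypothetical placeholders, not measured results.

```python
# Illustrative helper for comparing edge inference options on throughput,
# power, and cost. All values are hypothetical placeholders, not benchmarks.
from dataclasses import dataclass

@dataclass
class EdgeOption:
    name: str
    fps: float        # measured throughput, frames per second
    watts: float      # power draw under load
    unit_cost: float  # hardware cost in dollars

    @property
    def fps_per_watt(self) -> float:
        return self.fps / self.watts

    @property
    def fps_per_dollar(self) -> float:
        return self.fps / self.unit_cost

options = [EdgeOption("Option A", fps=60.0, watts=10.0, unit_cost=200.0),
           EdgeOption("Option B", fps=45.0, watts=5.0, unit_cost=120.0)]
for o in options:
    print(f"{o.name}: {o.fps_per_watt:.1f} FPS/W, {o.fps_per_dollar:.2f} FPS/$")
```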
Here are some of our latest numbers that should be helpful as you evaluate your plans.

Anything else you want to share?
As more companies deploy AI inference at scale in the real world, the need to simplify development across the vastly different environments of data centers and edge devices has never been greater. Tools like the OpenVINO toolkit are helping the industry bring these worlds together, and I am excited to see what the coming months hold for our developers, their customers, and this community.

___

Soren is a veteran product planner in the software space and one of the key Intel employees helping develop the Intel® Distribution of OpenVINO™ toolkit. From beta to the first public launch, he has demonstrated a dedication to addressing the needs of the market and our customers while supporting a culture of innovation and creativity for his team of technical professionals.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.