Intel and Microsoft Advance Edge to Cloud Inference for AI

Validated developer kits with integrated software tools are making it easier to deploy inference in the cloud and at the edge on multiple hardware types

These days, open source frameworks, toolkits, sample applications and hardware designed for deep learning are making it easier than ever to develop applications for AI. That’s exciting, especially when it comes to opportunities that connect edge to cloud. From retail stores to factory floors, companies are bringing AI into the real world to deliver amazing experiences, work more efficiently and pursue new business models.

One of the most exciting areas I see in AI at the edge is computer vision, which offers promising use cases across industries. By performing inference on edge devices instead of relying on a connection to the cloud, users can achieve low latency for near-real-time results. Edge deployments can also help address issues related to data privacy and bandwidth.

Cloud developers already have a platform for training models and deploying inference in the cloud, but deploying at the edge is another challenge entirely, and it calls for the right tools. Now they have help tuning their models across different hardware types, including processors and accelerator cards, so they can deploy the same inference model in many different environments.

Intel and Microsoft streamline development with integrated tools

Given the huge opportunities available with inference, Intel and Microsoft have joined forces to create development tools that make it easier for you to use the cloud, the edge or both, depending on your need. The latest is an execution provider (EP) plugin that integrates two valuable tools: the Intel Distribution of OpenVINO toolkit and Open Neural Network Exchange (ONNX) Runtime. The goal is to give you the ability to write once and deploy everywhere — in the cloud or at the edge.
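To make that concrete, here's a minimal sketch of what the integration looks like from Python. It assumes a build of ONNX Runtime that includes the OpenVINO execution provider, and "model.onnx" is a placeholder for your own model file.

```python
import onnxruntime as ort

# Prefer the OpenVINO execution provider when it's available, and fall
# back to ONNX Runtime's default CPU provider otherwise.
# "model.onnx" is a placeholder for your own exported ONNX model.
session = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)

# Confirm which providers the session actually loaded.
print(session.get_providers())
```

The same script runs unchanged whether it lands in the cloud or on an edge device; only the providers available at runtime differ.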

The unified ONNX Runtime with OpenVINO plugin is now in public preview and available on Microsoft's GitHub page. This capability has been validated with new and existing developer kits. The public preview also publishes prebuilt Docker container base images, which matters because you can layer your own ONNX model and application code directly on top of them.

Deploy inferencing on your preferred hardware

The EP plugin allows AI developers to train models in the cloud and then easily deploy them at the edge on diverse hardware types, such as Intel CPUs, integrated GPUs, FPGAs or VPUs, including the Intel Neural Compute Stick 2 (Intel NCS 2). Using containers means the same application can be deployed in the cloud or at the edge. Having that choice matters.
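As an illustration of that choice, the sketch below pins a session to a specific OpenVINO device through ONNX Runtime's provider options. The device_type strings follow the OpenVINO EP's documented naming (for example, MYRIAD targets the Intel NCS 2), but treat the exact values as assumptions to verify against the release you install.

```python
import onnxruntime as ort

def make_session(model_path: str, device_type: str) -> ort.InferenceSession:
    """Create a session pinned to one OpenVINO device.

    Example device_type values (verify against your OpenVINO EP docs):
      "CPU_FP32"    - Intel CPU
      "GPU_FP16"    - Intel integrated GPU
      "MYRIAD_FP16" - Intel NCS 2 (Myriad VPU)
    """
    return ort.InferenceSession(
        model_path,
        providers=["OpenVINOExecutionProvider"],
        provider_options=[{"device_type": device_type}],
    )

# The same application code, retargeted by a single string.
cpu_session = make_session("model.onnx", "CPU_FP32")
# vpu_session = make_session("model.onnx", "MYRIAD_FP16")
```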

The EP plugin has also been validated with the ONNX Model Zoo. If you haven’t heard of it, it’s a collection of pretrained models in the ONNX format.
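Putting the pieces together, here's a hedged end-to-end sketch that scores one dummy batch against a pretrained classifier downloaded from the ONNX Model Zoo. The file name and the 1x3x224x224 input shape are assumptions based on typical zoo models such as ResNet-50; substitute whatever model you actually download.

```python
import numpy as np
import onnxruntime as ort

# Assumes a classifier such as ResNet-50 has been downloaded from the
# ONNX Model Zoo; the file name and input shape are illustrative.
session = ort.InferenceSession(
    "resnet50-v1-7.onnx",
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
dummy_batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: dummy_batch})
print("Predicted class index:", int(np.argmax(outputs[0])))
```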

Jonathan Ballon, vice president and general manager in the Intel Internet of Things Group, said this plugin gives developers greater flexibility in how they work. “AI development is maturing quickly, and thanks to next-generation tools, we are now entering a world of new opportunities for bringing AI to the edge. Our goal is to empower developers to work the way they want and then deploy on the Intel hardware that works best for their solution, no matter which framework or hardware type they use. The choice is up to them.”

Empowering developers is the whole point. That's why Microsoft released ONNX Runtime as an open source, high-performance inference engine for machine learning and deep learning models in the ONNX open format. Developers can choose the framework that best fits their workload, such as PyTorch or TensorFlow, and still get improved scoring latency and efficiency on many different kinds of hardware. The upshot is that developers can use ONNX Runtime with tools like the Azure Machine Learning service to seamlessly deploy their models at the edge.
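For example, a model trained in PyTorch can be exported to the ONNX format with PyTorch's built-in exporter and then handed to ONNX Runtime. The torchvision ResNet-18 below is just a stand-in for your own trained network.

```python
import torch
import torchvision

# Any trained torch.nn.Module works here; a pretrained torchvision
# ResNet-18 is a convenient stand-in for your own model.
model = torchvision.models.resnet18(pretrained=True).eval()

# The dummy input fixes the shape the exported graph will expect.
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(model, dummy_input, "resnet18.onnx")
```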

Venky Veeraraghavan, group program manager at Microsoft Azure AI + ML, summed it up perfectly when he said, “Many developers use Azure to develop machine learning models. ONNX Runtime’s integration with OpenVINO enables a seamless path for these models to be deployed on a wide range of edge hardware.”

Fewer steps with validated developer kits

OK, now to the developer kits I mentioned earlier. We have worked with select partners to offer kits validated for the OpenVINO and ONNX Runtime integration. These kits offer a range of CPUs and accelerator options for extra processing power, so you can choose the right combination and level of compute for your project. The kits also connect easily to Azure, so data can be shared with the cloud immediately and visualized on a dashboard.

With developer kits from our partners, developers get a validated bundle of hardware and software tools that allows them to prototype, test and deploy a complete solution. You can also skip much of the work that comes with creating a solution for inference at the edge. The kits are fully scalable for mass deployment.

  • IEI FLEX-BX200 — Enormous computational power to perform accurate inference and prediction in near-real time, especially in harsh environments
  • AAEON BOXER-6841M — Turnkey development on the AAEON IoT platform, which is based on Azure services and enables developers and system integrators to quickly evaluate their solutions
  • UP Squared AI Vision X Developer Kit — Computer vision and deep learning from prototype to production
  • IEI TANK AIoT Developer Kit — Commercial production-ready development with deep learning, computer vision and AI

Download the new ONNX Runtime with OpenVINO EP plugin now

Our efforts with Microsoft will continue to focus on giving developers the flexibility to choose their preferred deep learning framework and run models efficiently anywhere. If you want to learn more about how you can ease the process of taking AI from cloud to edge, try the unified native installation of our EP plugin along with your choice of orchestration framework today.

Additional resources

Are you a developer looking for support to speed up your AI solution development? Here are some more useful resources:
