Toolkit for Intel platforms

OpenVINO Toolkit Accelerates Computer Vision Development

May 17, 2018, 9:26 a.m. | Frank Riemenschneider

Continuation of the article from Part 1

Technical details regarding the new OpenVINO toolkit

The OpenVINO toolkit enables CNN-based deep learning inference on the edge. It supports heterogeneous execution across Intel CV accelerators through a common API for the CPU, Intel Integrated Graphics, the Intel Movidius Neural Compute Stick, and FPGAs. It also includes a library of CV functions and pre-optimized kernels, as well as optimized calls for CV standards, including OpenCV, OpenCL, and OpenVX.

Model Optimizer Changes

The Model Optimizer component has been replaced by a Python-based application, with a consistent design across the supported frameworks. Key features are:
General changes:
◦Several CLI options have been deprecated since the last release.
◦More optimization techniques were added.
◦Usability, stability, and diagnostics capabilities were improved.
◦Microsoft Windows 10 support was added.
◦A total of more than 100 public models are now supported for Caffe, MXNet, and TensorFlow frameworks.
◦For unsupported layers, a fallback to the original framework is available; that framework is then required.

Caffe changes
◦The workflow was simplified; installing Caffe is no longer required.
◦Caffe is no longer required to generate the Intermediate Representation for models that consist of standard layers and/or user-provided custom layers. User-provided custom layers must be properly registered for the Model Optimizer and the Inference Engine.
◦Caffe is now only required for unsupported layers that are not registered as extensions in the Model Optimizer.

TensorFlow support is significantly improved, and now offers a preview of the Object Detection API support for SSD-based topologies.

Inference Engine
•Added Heterogeneity support:
◦Device affinities via API are now available for fine-grained, per-layer control.
◦It's possible to specify a CPU fallback for layers that the FPGA does not support. For example, HETERO:FPGA,CPU can be specified as a device option for the Inference Engine samples.
◦The fallback also works for CPU + Intel Integrated Graphics: if custom layers are implemented only on the CPU, the rest of the topology can run on the Intel Integrated Graphics without rewriting the custom layers for it.
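The fallback scheme above can be modeled as a greedy assignment: parse the device priority list out of the HETERO string, then place each layer on the first device that supports it. A minimal sketch in plain Python (not the Inference Engine API; the layer names and device capability tables are made up for illustration):

```python
def parse_hetero(device_string):
    """Split e.g. 'HETERO:FPGA,CPU' into a device priority list."""
    assert device_string.startswith("HETERO:")
    return device_string[len("HETERO:"):].split(",")

def assign_affinities(layers, supported, device_string):
    """Map each layer to the first device in priority order that supports it."""
    priority = parse_hetero(device_string)
    affinities = {}
    for layer in layers:
        for device in priority:
            if layer in supported[device]:
                affinities[layer] = device
                break
        else:
            raise ValueError(f"No device supports layer {layer!r}")
    return affinities

# Hypothetical topology: the FPGA lacks 'DetectionOutput', so that layer
# falls back to the CPU while everything else stays on the FPGA.
supported = {
    "FPGA": {"Convolution", "ReLU", "Pooling"},
    "CPU": {"Convolution", "ReLU", "Pooling", "DetectionOutput"},
}
layers = ["Convolution", "ReLU", "Pooling", "DetectionOutput"]
print(assign_affinities(layers, supported, "HETERO:FPGA,CPU"))
# {'Convolution': 'FPGA', 'ReLU': 'FPGA', 'Pooling': 'FPGA', 'DetectionOutput': 'CPU'}
```

In the real toolkit, the same per-layer control is exposed through device affinities set via the API, as noted above.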

•Asynchronous execution:
The Asynchronous API improves the overall application frame rate by allowing the application to perform secondary tasks, such as decoding the next frame, while the accelerator is busy with inference on the current frame.
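The pipelining pattern this enables can be sketched with Python's standard library: a worker thread stands in for the accelerator running inference, while the main thread decodes the next frame in parallel. This is an illustration of the overlap, not the OpenVINO API; `decode` and `infer` are stand-ins:

```python
from concurrent.futures import ThreadPoolExecutor

def decode(frame_id):
    # Stand-in for next-frame decoding on the host.
    return f"frame-{frame_id}"

def infer(frame):
    # Stand-in for inference running on the accelerator.
    return f"result({frame})"

def run_pipeline(n_frames):
    """Overlap decoding of frame i+1 with inference on frame i."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        current = decode(0)
        for i in range(n_frames):
            # Kick off inference asynchronously on the "accelerator"...
            pending = pool.submit(infer, current)
            # ...and decode the next frame while it is busy.
            if i + 1 < n_frames:
                current = decode(i + 1)
            results.append(pending.result())  # block only when the result is needed
    return results

print(run_pipeline(3))
# ['result(frame-0)', 'result(frame-1)', 'result(frame-2)']
```

The frame-rate gain comes from the decode step costing (nearly) nothing extra, since it runs while inference is in flight.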

•New customization features include easy-to-create Inference Engine operations:
◦Express the new operation as a composition of existing Inference Engine operations or register the operation in the Model Optimizer.
◦Connect the operation to the new Inference Engine layer in C++ or OpenCL. The existing layers are reorganized into “core” (general primitives) and “extensions” (topology-specific, such as DetectionOutput for SSD). These extensions now come as source code that you must build and load into your application. After the Inference Engine samples are compiled, this library is built automatically, and every sample explicitly loads the library upon execution. The extensions are also required for inference with the pre-trained models.
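The first customization route, expressing a new operation as a composition of existing operations, can be illustrated with a tiny operation registry. Everything here is a made-up sketch (the registry, the primitive set, and the Swish example are not OpenVINO code):

```python
import math

# A toy registry of already-supported primitives.
PRIMITIVES = {
    "Sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
    "Multiply": lambda x, y: x * y,
}

def register_composite(name, builder):
    """Register a new op defined purely in terms of registered primitives."""
    PRIMITIVES[name] = builder(PRIMITIVES)

# 'Swish' (x * sigmoid(x)) composed from the existing Sigmoid and Multiply,
# so no new C++/OpenCL kernel is needed for it.
register_composite(
    "Swish",
    lambda ops: (lambda x: ops["Multiply"](x, ops["Sigmoid"](x))),
)

print(round(PRIMITIVES["Swish"](1.0), 4))
# 0.7311
```

Only when an operation cannot be decomposed this way does the second route, writing a new layer in C++ or OpenCL, become necessary.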

•Plugin support was added for the Intel Movidius Neural Compute Stick hardware (Myriad 2).


•Samples are provided to aid understanding of the Inference Engine, its APIs, and its features:
◦All samples automatically support heterogeneous execution.
◦The Object Detection SSD sample showcases the Async API.
◦A minimal Hello Classification sample demonstrates Inference Engine API usage.

OpenCV
•Updated to version 3.4.1 with minor patches. Notable changes:
◦Implementation of on-disk caching of precompiled OpenCL kernels. This feature reduces initialization time for applications that use several kernels.
◦Improved C++11 compatibility at the source and binary levels.
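The idea behind on-disk caching of precompiled OpenCL kernels is generic: key the compiled binary by a hash of the kernel source, compile on a miss, and read the binary back on subsequent runs. A stdlib-only sketch of that scheme (the `compile_kernel` stand-in and file layout are illustrative, not OpenCV's implementation):

```python
import hashlib
import pathlib
import tempfile

def compile_kernel(source):
    # Stand-in for an expensive OpenCL build step.
    return b"BINARY:" + source.encode()

def cached_compile(source, cache_dir):
    """Compile once, then reuse the on-disk binary keyed by a source hash."""
    key = hashlib.sha256(source.encode()).hexdigest()
    path = pathlib.Path(cache_dir) / f"{key}.bin"
    if path.exists():                # warm start: skip compilation entirely
        return path.read_bytes()
    binary = compile_kernel(source)  # cold start: compile and persist
    path.write_bytes(binary)
    return binary

with tempfile.TemporaryDirectory() as d:
    first = cached_compile("__kernel void k() {}", d)
    second = cached_compile("__kernel void k() {}", d)  # served from disk
    assert first == second
```

This is why the feature mainly helps applications that use several kernels: every kernel after the first run starts warm.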

•A subset of OpenCV samples from the community version was added to showcase the toolkit's capabilities:
◦bgfg_segm.cpp - background segmentation
◦colorization.cpp - performs image colorization using the DNN module (the network must be downloaded from a third-party site)
◦dense_optical_flow.cpp - dense optical flow using T-API (Farneback, TVL1)
◦opencl_custom_kernel.cpp - running custom OpenCL™ kernel via T-API
◦opencv_version.cpp - the simplest OpenCV application - prints library version and build configuration
◦peopledetect.cpp - pedestrian detector using built-in HOGDescriptor

OpenVX
•A new memory management scheme with the Imaging and Analytics Pipeline (IAP) framework drastically reduces memory consumption:
◦Introduces intermediate image buffers that result in a significant memory footprint reduction for complex Printing and Imaging (PI) pipelines operating with extremely large images.
◦The tile pool memory-reduction feature is deprecated and has been removed from the Copy Pipeline sample.
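The general principle behind keeping intermediate buffers small, rather than materializing every full intermediate image between pipeline stages, can be sketched abstractly: run all stages over one bounded chunk at a time, so intermediates never cover the whole image. This is an illustration of the memory-footprint idea only, not the IAP framework; both stages are made-up stand-ins:

```python
def blur_row(row):
    # Stand-in for a first pipeline stage producing an intermediate row.
    return [v / 2 for v in row]

def threshold_row(row):
    # Stand-in for a second stage consuming the intermediate row.
    return [1 if v > 10 else 0 for v in row]

def process_tiled(image, chunk_rows=2):
    """Run both stages chunk by chunk; the intermediate buffer stays chunk-sized."""
    out = []
    for start in range(0, len(image), chunk_rows):
        chunk = image[start:start + chunk_rows]
        intermediate = [blur_row(r) for r in chunk]   # bounded buffer only
        out.extend(threshold_row(r) for r in intermediate)
    return out

image = [[40, 4], [8, 60], [30, 2], [22, 22]]
print(process_tiled(image))
# [[1, 0], [0, 1], [1, 0], [1, 1]]
```

With extremely large print images, the difference between a full-image intermediate and a bounded one dominates the pipeline's memory footprint.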

The OpenVX CNN path is not recommended for CNN-based applications and is partially deprecated:
◦CNN AlexNet sample is removed.
◦CNN Custom Layer (FCN8) and Custom Layers library are removed.
◦The OpenVX SSD-based Object Detection web article is removed.
◦OpenVX FPGA plugin is deprecated. This is part of the CNN OVX deprecation.

The VAD tool for creating OpenVX applications is deprecated and removed.
•The new recommendation is to use the Deep Learning Inference Engine for CNN-based applications.
