Feature/azaytsev/docs 2021 1 #2560

Merged
build-instruction.md: 3 changes (0 additions, 3 deletions)
@@ -46,9 +46,6 @@ The open source version of Inference Engine includes the following plugins:
| MYRIAD plugin | Intel® Movidius™ Neural Compute Stick powered by the Intel® Movidius™ Myriad™ 2, Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X |
| Heterogeneous plugin | Heterogeneous plugin enables computing for inference on one network on several Intel® devices. |

-Inference Engine plugin for Intel® FPGA is distributed only in a binary form,
-as a part of [Intel® Distribution of OpenVINO™].

## Build on Linux\* Systems

The software was validated on:
docs/IE_DG/Deep_Learning_Inference_Engine_DevGuide.md: 1 change (0 additions, 1 deletion)
@@ -76,7 +76,6 @@ inference of a pre-trained and optimized deep learning model and a set of sample
* [Supported Devices](supported_plugins/Supported_Devices.md)
* [GPU](supported_plugins/CL_DNN.md)
* [CPU](supported_plugins/CPU.md)
-* [FPGA](supported_plugins/FPGA.md)
* [VPU](supported_plugins/VPU.md)
* [MYRIAD](supported_plugins/MYRIAD.md)
* [HDDL](supported_plugins/HDDL.md)
docs/IE_DG/Glossary.md: 2 changes (1 addition, 1 deletion)
@@ -64,7 +64,7 @@ Glossary of terms used in the Inference Engine
| :--- | :--- |
| Batch | Number of images to analyze during one call of infer. Maximum batch size is a property of the network and it is set before loading of the network to the plugin. In NHWC, NCHW and NCDHW image data layout representation, the N refers to the number of images in the batch |
| Blob | Memory container used for storing inputs, outputs of the network, weights and biases of the layers |
-| Device (Affinitity) | A preferred Intel(R) hardware device to run the inference (CPU, GPU, FPGA, etc.) |
+| Device (Affinitity) | A preferred Intel(R) hardware device to run the inference (CPU, GPU, etc.) |
| Extensibility mechanism, Custom layers | The mechanism that provides you with capabilities to extend the Inference Engine and Model Optimizer so that they can work with topologies containing layers that are not yet supported |
| <code>ICNNNetwork</code> | An Interface of the Convolutional Neural Network that Inference Engine reads from IR. Consists of topology, weights and biases |
| <code>IExecutableNetwork</code> | An instance of the loaded network which allows the Inference Engine to request (several) infer requests and perform inference synchronously or asynchronously |
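
For reference, several of these glossary terms map directly onto Inference Engine C++ objects. The sketch below is illustrative only and assumes the 2021.1 C++ API; the model path, batch size, and device name are placeholders:

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;

    // The IR read from disk becomes a CNNNetwork (the ICNNNetwork of the glossary):
    // topology plus weights and biases.
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

    // Batch size is a property of the network and is set before loading it to a plugin.
    network.setBatchSize(4);

    // Loading to a device yields an executable network (IExecutableNetwork) that can
    // create one or several infer requests.
    InferenceEngine::ExecutableNetwork executable = core.LoadNetwork(network, "CPU");
    InferenceEngine::InferRequest request = executable.CreateInferRequest();

    // Inputs and outputs are exchanged as blobs, the memory containers of the glossary.
    InferenceEngine::Blob::Ptr input = request.GetBlob(network.getInputsInfo().begin()->first);
    (void)input;
    return 0;
}
```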
docs/IE_DG/Intro_to_Performance.md: 2 changes (1 addition, 1 deletion)
@@ -27,7 +27,7 @@ latency penalty. So, for more real-time oriented usages, lower batch sizes (as l
Refer to the [Benchmark App](../../inference-engine/samples/benchmark_app/README.md) sample, which allows latency vs. throughput measuring.

## Using Async API
-To gain better performance on accelerators, such as VPU or FPGA, the Inference Engine uses the asynchronous approach (see
+To gain better performance on accelerators, such as VPU, the Inference Engine uses the asynchronous approach (see
[Integrating Inference Engine in Your Application (current API)](Integrate_with_customer_application_new_API.md)).
The point is amortizing the costs of data transfers, by pipe-lining, see [Async API explained](@ref omz_demos_object_detection_demo_ssd_async_README).
Since the pipe-lining relies on the availability of the parallel slack, running multiple inference requests in parallel is essential.
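
A minimal sketch of that pattern with the 2021.1 C++ API (the model path, device name, and request count are illustrative assumptions) could look as follows; inputs would be filled through each request's blobs before it is started:

```cpp
#include <inference_engine.hpp>
#include <vector>

int main() {
    InferenceEngine::Core core;
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");
    InferenceEngine::ExecutableNetwork executable = core.LoadNetwork(network, "MYRIAD");

    // Several infer requests provide the parallel slack needed to pipeline
    // data transfers with computation on the accelerator.
    std::vector<InferenceEngine::InferRequest> requests;
    for (int i = 0; i < 4; ++i) {
        requests.push_back(executable.CreateInferRequest());
    }

    // Start all requests without blocking the application thread.
    for (auto &request : requests) {
        request.StartAsync();
    }

    // Collect the results as each request completes.
    for (auto &request : requests) {
        request.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);
    }
    return 0;
}
```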
docs/IE_DG/Samples_Overview.md: 10 changes (6 additions, 4 deletions)
@@ -49,9 +49,11 @@ You can download the [pre-trained models](@ref omz_models_intel_index) using the

The officially supported Linux* build environment is the following:

-* Ubuntu* 16.04 LTS 64-bit or CentOS* 7.4 64-bit
-* GCC* 5.4.0 (for Ubuntu* 16.04) or GCC* 4.8.5 (for CentOS* 7.4)
-* CMake* version 2.8.12 or higher
+* Ubuntu* 18.04 LTS 64-bit or CentOS* 7.6 64-bit
+* GCC* 7.5.0 (for Ubuntu* 18.04) or GCC* 4.8.5 (for CentOS* 7.6)
+* CMake* version 3.10 or higher

+> **NOTE**: For building samples from the open-source version of OpenVINO™ toolkit, see the [build instructions on GitHub](https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md).

To build the C or C++ sample applications for Linux, go to the `<INSTALL_DIR>/inference_engine/samples/c` or `<INSTALL_DIR>/inference_engine/samples/cpp` directory, respectively, and run the `build_samples.sh` script:
```sh
@@ -99,7 +101,7 @@ for the debug configuration — in `<path_to_build_directory>/intel64/Debug/`.
The recommended Windows* build environment is the following:
* Microsoft Windows* 10
* Microsoft Visual Studio* 2017, or 2019
-* CMake* version 2.8.12 or higher
+* CMake* version 3.10 or higher

> **NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.

docs/IE_DG/inference_engine_intro.md: 6 changes (2 additions, 4 deletions)
@@ -3,7 +3,7 @@ Introduction to Inference Engine {#openvino_docs_IE_DG_inference_engine_intro}

After you have used the Model Optimizer to create an Intermediate Representation (IR), use the Inference Engine to infer the result for a given input data.

-Inference Engine is a set of C++ libraries providing a common API to deliver inference solutions on the platform of your choice: CPU, GPU, VPU, or FPGA. Use the Inference Engine API to read the Intermediate Representation, set the input and output formats, and execute the model on devices. While the C++ libraries is the primary implementation, C libraries and Python bindings are also available.
+Inference Engine is a set of C++ libraries providing a common API to deliver inference solutions on the platform of your choice: CPU, GPU, or VPU. Use the Inference Engine API to read the Intermediate Representation, set the input and output formats, and execute the model on devices. While the C++ libraries is the primary implementation, C libraries and Python bindings are also available.

For Intel® Distribution of OpenVINO™ toolkit, Inference Engine binaries are delivered within release packages.
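
In code, that read-set-execute flow looks roughly like the sketch below (2021.1 C++ API assumed; the model path, input precision, and device name are placeholders):

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;

    // Read the Intermediate Representation produced by the Model Optimizer.
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

    // Set the input format before the network is loaded to a device.
    for (auto &item : network.getInputsInfo()) {
        item.second->setPrecision(InferenceEngine::Precision::U8);
        item.second->setLayout(InferenceEngine::Layout::NCHW);
    }

    // Execute the model on the chosen device; "GPU" or "MYRIAD" work the same way.
    InferenceEngine::ExecutableNetwork executable = core.LoadNetwork(network, "CPU");
    InferenceEngine::InferRequest request = executable.CreateInferRequest();
    request.Infer();
    return 0;
}
```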

Expand All @@ -13,7 +13,7 @@ To learn about how to use the Inference Engine API for your application, see the

For complete API Reference, see the [API Reference](usergroup29.html) section.

-Inference Engine uses a plugin architecture. Inference Engine plugin is a software component that contains complete implementation for inference on a certain Intel&reg; hardware device: CPU, GPU, VPU, FPGA, etc. Each plugin implements the unified API and provides additional hardware-specific APIs.
+Inference Engine uses a plugin architecture. Inference Engine plugin is a software component that contains complete implementation for inference on a certain Intel&reg; hardware device: CPU, GPU, VPU, etc. Each plugin implements the unified API and provides additional hardware-specific APIs.
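
Because every plugin exposes the same unified API, switching devices is only a matter of the device name passed at load time. A short sketch (2021.1 C++ API assumed; the model path is illustrative):

```cpp
#include <inference_engine.hpp>
#include <iostream>
#include <string>

int main() {
    InferenceEngine::Core core;

    // Each registered plugin shows up as a device name such as "CPU", "GPU", or "MYRIAD".
    for (const std::string &device : core.GetAvailableDevices()) {
        std::cout << device << std::endl;
    }

    // The same call loads the network to any of them; only the device name changes.
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");
    InferenceEngine::ExecutableNetwork executable = core.LoadNetwork(network, "GPU");
    (void)executable;
    return 0;
}
```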

Modules in the Inference Engine component
---------------------------------------
@@ -53,7 +53,6 @@ For each supported target device, Inference Engine provides a plugin — a DLL/s
| ------------- | ------------- |
|CPU| Intel® Xeon® with Intel® AVX2 and AVX512, Intel® Core™ Processors with Intel® AVX2, Intel® Atom® Processors with Intel® SSE |
|GPU| Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics
-|FPGA| Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA, Intel® Vision Accelerator Design with an Intel® Arria 10 FPGA (Speed Grade 2) |
|MYRIAD| Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X|
|GNA| Intel&reg; Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel&reg; Pentium&reg; Silver J5005 Processor, Intel&reg; Pentium&reg; Silver N5000 Processor, Intel&reg; Celeron&reg; J4005 Processor, Intel&reg; Celeron&reg; J4105 Processor, Intel&reg; Celeron&reg; Processor N4100, Intel&reg; Celeron&reg; Processor N4000, Intel&reg; Core&trade; i3-8121U Processor, Intel&reg; Core&trade; i7-1065G7 Processor, Intel&reg; Core&trade; i7-1060G7 Processor, Intel&reg; Core&trade; i5-1035G4 Processor, Intel&reg; Core&trade; i5-1035G7 Processor, Intel&reg; Core&trade; i5-1035G1 Processor, Intel&reg; Core&trade; i5-1030G7 Processor, Intel&reg; Core&trade; i5-1030G4 Processor, Intel&reg; Core&trade; i3-1005G1 Processor, Intel&reg; Core&trade; i3-1000G1 Processor, Intel&reg; Core&trade; i3-1000G4 Processor
|HETERO|Automatic splitting of a network inference between several devices (for example if a device doesn't support certain layers|
Expand All @@ -65,7 +64,6 @@ The table below shows the plugin libraries and additional dependencies for Linux
|--------|------------------------|-------------------------------------------------|--------------------------|--------------------------------------------------------------------------------------------------------|
| CPU | `libMKLDNNPlugin.so` | `libinference_engine_lp_transformations.so` | `MKLDNNPlugin.dll` | `inference_engine_lp_transformations.dll` |
| GPU | `libclDNNPlugin.so` | `libinference_engine_lp_transformations.so`, `libOpenCL.so` | `clDNNPlugin.dll` | `OpenCL.dll`, `inference_engine_lp_transformations.dll` |
-| FPGA | `libdliaPlugin.so` | `libdla_compiler_core.so`, `libdla_runtime_core.so`, `libcrypto.so`, `libalteracl.so`, `liblpsolve5525.so`, `libprotobuf.so`, `libacl_emulator_kernel_rt.so` | `dliaPlugin.dll` | `dla_compiler_core.dll`, `dla_runtime_core.dll`, `crypto.dll`, `alteracl.dll`, `lpsolve5525.dll`, `protobuf.dll`, `acl_emulator_kernel_rt.dll`
| MYRIAD | `libmyriadPlugin.so` | `libusb.so`, `libinference_engine_lp_transformations.so` | `myriadPlugin.dll` | `usb.dll`, `inference_engine_lp_transformations.dll` |
| HDDL | `libHDDLPlugin.so` | `libbsl.so`, `libhddlapi.so`, `libmvnc-hddl.so`, `libinference_engine_lp_transformations.so`| `HDDLPlugin.dll` | `bsl.dll`, `hddlapi.dll`, `json-c.dll`, `libcrypto-1_1-x64.dll`, `libssl-1_1-x64.dll`, `mvnc-hddl.dll`, `inference_engine_lp_transformations.dll` |
| GNA | `libGNAPlugin.so` | `libgna.so`, `libinference_engine_lp_transformations.so` | `GNAPlugin.dll` | `gna.dll`, `inference_engine_lp_transformations.dll` |
docs/IE_DG/supported_plugins/CPU.md: 4 changes (2 additions, 2 deletions)
@@ -14,8 +14,8 @@ OpenVINO™ toolkit is officially supported and validated on the following platf

| Host | OS (64-bit) |
| :--- | :--- |
-| Development | Ubuntu* 16.04/CentOS* 7.4/MS Windows* 10 |
-| Target | Ubuntu* 16.04/CentOS* 7.4/MS Windows* 10 |
+| Development | Ubuntu* 18.04, CentOS* 7.5, MS Windows* 10 |
+| Target | Ubuntu* 18.04, CentOS* 7.5, MS Windows* 10 |

The CPU Plugin supports inference on Intel® Xeon® with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel® Core™
Processors with Intel® AVX2, Intel Atom® Processors with Intel® Streaming SIMD Extensions (Intel® SSE).