OpenVINO Model Server 2019 R1.1
OpenVINO Model Server 2019 R1.1 introduces support for Inference Engine version 2019 R1.1.
Refer to the OpenVINO Release Notes to learn more about the improvements. The most important enhancements are:
- Alignment with the Intel® Movidius™ Myriad™ X Development Kit R7 release.
- Support for mPCIe and M.2 form-factor versions of Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
- The Myriad plugin is now available in open source.
OpenVINO Model Server 2019 R1.1 also brings the following new features and changes:
- Added a RESTful API: all implemented functions can now be accessed over both gRPC and REST interfaces, following the TensorFlow Serving API. Check the client examples and the Jupyter notebook to learn how to use the new interface.
- Added exemplary Kubeflow pipelines that demonstrate OpenVINO Model Server deployment in Kubernetes and TensorFlow model optimization with the Model Optimizer from the OpenVINO Toolkit.
- Added an implementation of the GetModelStatus function, which reports the state of served models.
- Model version updates can be disabled by setting FILE_SYSTEM_POLL_WAIT_SECONDS to 0 or a negative value.
- Improved error handling for model loading issues such as network problems or insufficient access permissions.
- Updated versions of Python dependencies.
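The new REST interface can be exercised with plain curl. This is a minimal sketch, assuming TensorFlow Serving-style endpoints; the host, port (8501 here), model name ("resnet"), and input shape are placeholders to substitute with the values from your own deployment:

```shell
# Predict request body in the TensorFlow Serving REST format
# (a single instance with three assumed input values):
BODY='{"instances": [[0.0, 1.0, 2.0]]}'
echo "$BODY"

# Run a prediction (requires a running model server):
# curl -X POST "http://localhost:8501/v1/models/resnet:predict" -d "$BODY"

# Check the served model state (REST counterpart of GetModelStatus):
# curl "http://localhost:8501/v1/models/resnet"
```

The status endpoint is useful for readiness checks before sending prediction traffic.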
You can pull the public Docker image, which is based on the Intel Python base image, with:
docker pull intelaipg/openvino-model-server:2019.1.1
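Disabling version polling from the list above can be done at container start. The following is a hedged configuration sketch, not a complete deployment: the port, host model directory, and mount point are assumptions, and the serving command and its arguments should be taken from the image documentation.

```shell
# Start the public image with automatic model version polling disabled
# (FILE_SYSTEM_POLL_WAIT_SECONDS set to 0 or a negative value freezes
# the set of served versions); paths and port are placeholders.
docker run -d -p 9000:9000 \
  -e FILE_SYSTEM_POLL_WAIT_SECONDS=0 \
  -v /models:/opt/ml:ro \
  intelaipg/openvino-model-server:2019.1.1
```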