[DOCS] adjustments preparing 2025.0 pass 2
Fixing references to OMZ models in Samples

remove cspell dict
kblaszczak-intel committed Jan 16, 2025
1 parent 25cd6b0 commit 4f9cb57
Showing 11 changed files with 53 additions and 510 deletions.
412 changes: 0 additions & 412 deletions cspell.json

This file was deleted.

93 changes: 24 additions & 69 deletions docs/articles_en/about-openvino/release-notes-openvino.rst
@@ -26,10 +26,9 @@ OpenVINO Release Notes
What's new
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

* OpenVINO 2024.6 release includes updates for enhanced stability and improved LLM performance.
* Introduced support for Intel® Arc™ B-Series Graphics (formerly known as Battlemage).
* Implemented optimizations to improve the inference time and LLM performance on NPUs.
* Improved LLM performance with GenAI API optimizations and bug fixes.
* .
* .




@@ -39,26 +38,19 @@ OpenVINO™ Runtime
CPU Device Plugin
-----------------------------------------------------------------------------------------------

* KV cache now uses asymmetric 8-bit unsigned integer (U8) as the default precision, reducing
memory stress for LLMs and increasing their performance. This option can be controlled by
model metadata.
* Quality and accuracy have been improved for selected models with several bug fixes.
* .
* .
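The asymmetric U8 scheme mentioned above can be illustrated with a small NumPy sketch. This only models the memory/accuracy trade-off; the function names and per-tensor granularity are assumptions for illustration, not the CPU plugin's actual implementation:

```python
import numpy as np

def quantize_u8_asym(x: np.ndarray):
    """Per-tensor asymmetric quantization of float values to uint8."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 or 1.0   # avoid division by zero for constant tensors
    zero_point = -lo / scale
    q = np.clip(np.round(x / scale + zero_point), 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize_u8_asym(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

kv = np.random.randn(2, 8, 64).astype(np.float32)   # a toy K/V cache tensor
q, s, zp = quantize_u8_asym(kv)
restored = dequantize_u8_asym(q, s, zp)

# fp32 -> u8 stores the cache in a quarter of the memory
print("bytes fp32:", kv.nbytes, "-> bytes u8:", q.nbytes)
print("max abs error:", float(np.abs(kv - restored).max()))
```

The round-trip error stays within one quantization step, which is why the scheme is acceptable for KV-cache values while cutting their memory footprint fourfold.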

GPU Device Plugin
-----------------------------------------------------------------------------------------------

* Device memory copy optimizations have been introduced for inference with **Intel® Arc™ B-Series
Graphics** (formerly known as Battlemage). Because these devices do not use the L2 cache for copying memory
between the device and host, a dedicated `copy` operation is used when inputs or results are
not expected to reside in device memory.
* ChatGLM4 inference on GPU has been optimized.
* .
* .

NPU Device Plugin
-----------------------------------------------------------------------------------------------

* LLM performance and inference time have been improved with memory optimizations.


* .



@@ -98,14 +90,10 @@ Previous 2025 releases
.. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
.. dropdown:: 2024.5 - 20 November 2024
.. dropdown:: 2024.6 - 18 December 2024
:animate: fade-in-slide-down
:color: secondary

**What's new**

* More GenAI coverage and framework integrations to minimize code changes.




@@ -126,74 +114,41 @@ page.



Discontinued in 2024
Discontinued in 2025
-----------------------------

* Runtime components:

* Intel® Gaussian & Neural Accelerator (Intel® GNA). Consider using the Neural Processing
Unit (NPU) for low-powered systems like Intel® Core™ Ultra or 14th generation and beyond.
* OpenVINO C++/C/Python 1.0 APIs (see
`2023.3 API transition guide <https://docs.openvino.ai/2023.3/openvino_2_0_transition_guide.html>`__
for reference).
* All ONNX Frontend legacy API (known as ONNX_IMPORTER_API).
* ``PerformanceMode.UNDEFINED`` property as part of the OpenVINO Python API.
* The OpenVINO property Affinity API is no longer available. It has been replaced with CPU
binding configurations (``ov::hint::enable_cpu_pinning``).

* Tools:

* Deployment Manager. See :doc:`installation <../get-started/install-openvino>` and
:doc:`deployment <../get-started/install-openvino>` guides for current distribution
options.
* `Accuracy Checker <https://github.com/openvinotoolkit/open_model_zoo/blob/master/tools/accuracy_checker/README.md>`__.
* `Post-Training Optimization Tool <https://docs.openvino.ai/2023.3/pot_introduction.html>`__
(POT). Neural Network Compression Framework (NNCF) should be used instead.
* A `Git patch <https://github.com/openvinotoolkit/nncf/tree/release_v281/third_party_integration/huggingface_transformers>`__
for NNCF integration with `huggingface/transformers <https://github.com/huggingface/transformers>`__.
The recommended approach is to use `huggingface/optimum-intel <https://github.com/huggingface/optimum-intel>`__
for applying NNCF optimization on top of models from Hugging Face.
* Support for Apache MXNet, Caffe, and Kaldi model formats. Conversion to ONNX may be used
as a solution.
* The macOS x86_64 debug bins are no longer provided with the OpenVINO toolkit, starting
with OpenVINO 2024.5.
* Python 3.8 is no longer supported, starting with OpenVINO 2024.5.

* As MXNet does not support Python versions higher than 3.8, according to the
`MXNet PyPI project <https://pypi.org/project/mxnet/>`__,
it is no longer supported by OpenVINO, either.

* Discrete Keem Bay is no longer supported, starting with OpenVINO 2024.5.
* Support for discrete devices (formerly codenamed Raptor Lake) is no longer available for
NPU.
* Intel® Streaming SIMD Extensions (Intel® SSE) are currently not enabled in the binary
package by default. They are still supported in the source code form.
* The OpenVINO™ Development Tools package (pip install openvino-dev) is no longer available
for OpenVINO releases in 2025.
* Model Optimizer is no longer available. Consider using the
:doc:`new conversion methods <../openvino-workflow/model-preparation/convert-model-to-ir>`
instead. For more details, see the
`model conversion transition guide <https://docs.openvino.ai/2024/documentation/legacy-features/transition-legacy-conversion-api.html>`__.


Deprecated and to be removed in the future
--------------------------------------------

* Intel® Streaming SIMD Extensions (Intel® SSE) will be supported in source code form, but not
enabled in the binary package by default, starting with OpenVINO 2025.0.
* Ubuntu 20.04 support will be deprecated in future OpenVINO releases due to the end of
standard support.
* The openvino-nightly PyPI module will soon be discontinued. End-users should proceed with the
Simple PyPI nightly repo instead. More information is available in the
`Release Policy <https://docs.openvino.ai/2024/about-openvino/release-notes-openvino/release-policy.html#nightly-releases>`__.
* The OpenVINO™ Development Tools package (pip install openvino-dev) will be removed from
installation options and distribution channels beginning with OpenVINO 2025.0.
* Model Optimizer will be discontinued with OpenVINO 2025.0. Consider using the
:doc:`new conversion methods <../openvino-workflow/model-preparation/convert-model-to-ir>`
instead. For more details, see the
`model conversion transition guide <https://docs.openvino.ai/2024/documentation/legacy-features/transition-legacy-conversion-api.html>`__.
* OpenVINO property Affinity API will be discontinued with OpenVINO 2025.0.
It will be replaced with CPU binding configurations (``ov::hint::enable_cpu_pinning``).





* “auto shape” and “auto batch size” (reshaping a model in runtime) will be removed in the
future. OpenVINO's dynamic shape models are recommended instead.

* Starting with 2025.0, macOS x86_64 is no longer recommended for use due to the discontinuation
of validation. Full support will be removed later in 2025.



6 changes: 3 additions & 3 deletions samples/cpp/benchmark/sync_benchmark/README.md
@@ -1,15 +1,15 @@
# Sync Benchmark C++ Sample

This sample demonstrates how to estimate performance of a model using Synchronous Inference Request API. It makes sense to use synchronous inference only in latency oriented scenarios. Models with static input shapes are supported. Unlike [demos](https://docs.openvino.ai/2024/omz_demos.html) this sample doesn't have other configurable command line arguments. Feel free to modify sample's source code to try out different options.
This sample demonstrates how to estimate performance of a model using the Synchronous Inference Request API. It makes sense to use synchronous inference only in latency-oriented scenarios. Models with static input shapes are supported. Unlike [demos](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos), this sample doesn't have other configurable command-line arguments. Feel free to modify the sample's source code to try out different options.

For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/sync-benchmark.html)

## Requirements

| Options | Values |
| -------------------------------| -------------------------------------------------------------------------------------------------------------------------|
| Validated Models | [yolo-v3-tf](https://docs.openvino.ai/2024/omz_models_model_yolo_v3_tf.html), |
| | [face-detection-0200](https://docs.openvino.ai/2024/omz_models_model_face_detection_0200.html) |
| Validated Models | [yolo-v3-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v3-tf), |
| | [face-detection-0200](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/face-detection-0200) |
| Model Format | OpenVINO™ toolkit Intermediate Representation |
| | (\*.xml + \*.bin), ONNX (\*.onnx) |
| Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) |
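The latency-oriented loop this sample performs can be sketched in plain Python: run one inference at a time and report median latency and the FPS it implies. The `infer` callable here is a stand-in (an assumption) for a compiled OpenVINO model call, replaced by `time.sleep` so the sketch is self-contained:

```python
import statistics
import time

def benchmark_sync(infer, num_iterations: int = 20):
    """Time one inference at a time, as a latency-oriented benchmark does."""
    latencies_ms = []
    for _ in range(num_iterations):
        start = time.perf_counter()
        infer()
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    median = statistics.median(latencies_ms)
    fps = 1000.0 / median if median else float("inf")
    return median, fps

# stand-in workload: ~5 ms per "inference"
median_ms, fps = benchmark_sync(lambda: time.sleep(0.005))
print(f"median latency: {median_ms:.2f} ms, throughput: {fps:.1f} FPS")
```

Replacing the lambda with a real `compiled_model(input_tensor)` call reproduces the sample's core measurement.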
6 changes: 3 additions & 3 deletions samples/cpp/benchmark/throughput_benchmark/README.md
@@ -1,6 +1,6 @@
# Throughput Benchmark C++ Sample

This sample demonstrates how to estimate performance of a model using Asynchronous Inference Request API in throughput mode. Unlike [demos](https://docs.openvino.ai/2024/omz_demos.html) this sample doesn't have other configurable command line arguments. Feel free to modify sample's source code to try out different options.
This sample demonstrates how to estimate performance of a model using the Asynchronous Inference Request API in throughput mode. Unlike [demos](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos), this sample doesn't have other configurable command-line arguments. Feel free to modify the sample's source code to try out different options.

The reported results may deviate from what [benchmark_app](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/benchmark-tool.html) reports. One example is model input precision for computer vision tasks: benchmark_app sets ``uint8``, while the sample uses the default model precision, which is usually ``float32``.

@@ -10,8 +10,8 @@ For more detailed information on how this sample works, check the dedicated [art

| Options | Values |
| ----------------------------| -------------------------------------------------------------------------------------------------------------------------------|
| Validated Models | [yolo-v3-tf](https://docs.openvino.ai/2024/omz_models_model_yolo_v3_tf.html), |
| | [face-detection-](https://docs.openvino.ai/2024/omz_models_model_face_detection_0200.html) |
| Validated Models | [yolo-v3-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v3-tf), |
| | [face-detection-](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/face-detection-0200) |
| Model Format | OpenVINO™ toolkit Intermediate Representation |
| | (\*.xml + \*.bin), ONNX (\*.onnx) |
| Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) |
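The throughput-mode idea above — keep several inference requests in flight at once and count completed inferences per second — can be mimicked with the standard library. A `ThreadPoolExecutor` stands in for OpenVINO's asynchronous request queue (an assumption for illustration; the real sample uses the Asynchronous Inference Request API):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_throughput(infer, num_requests: int = 4, total_jobs: int = 40):
    """Run many inferences with num_requests in flight and return infer/s."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=num_requests) as pool:
        list(pool.map(lambda _: infer(), range(total_jobs)))
    elapsed = time.perf_counter() - start
    return total_jobs / elapsed

fake_infer = lambda: time.sleep(0.01)   # stand-in for a real request.infer()
serial_fps = 1.0 / 0.01                 # best case with a single request
parallel_fps = run_throughput(fake_infer)
print(f"~{parallel_fps:.0f} infer/s with 4 requests in flight")
```

With four requests overlapping, the sketch sustains well above the single-request rate, which is the effect throughput mode exploits on real hardware.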
2 changes: 1 addition & 1 deletion samples/cpp/hello_reshape_ssd/README.md
@@ -9,7 +9,7 @@ For more detailed information on how this sample works, check the dedicated [art

| Options | Values |
| ----------------------------| -----------------------------------------------------------------------------------------------------------------------------------------|
| Validated Models | [person-detection-retail-0013](https://docs.openvino.ai/2024/omz_models_model_person_detection_retail_0013.html) |
| Validated Models | [person-detection-retail-0013](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/person-detection-retail-0013) |
| Model Format | OpenVINO™ toolkit Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx) |
| Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) |
| Other language realization | [Python](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-reshape-ssd.html) |
2 changes: 1 addition & 1 deletion samples/js/node/notebooks/hello-detection.nnb
@@ -3,7 +3,7 @@
{
"language": "markdown",
"source": [
"# Hello Object Detection\n\nA very basic introduction to using object detection models with OpenVINO™.\n\nThe [horizontal-text-detection-0001](https://docs.openvino.ai/2023.0/omz_models_model_horizontal_text_detection_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. It detects horizontal text in images and returns a blob of data in the shape of `[100, 5]`. Each detected text box is stored in the `[x_min, y_min, x_max, y_max, conf]` format, where the\n`(x_min, y_min)` are the coordinates of the top left bounding box corner, `(x_max, y_max)` are the coordinates of the bottom right bounding box corner and `conf` is the confidence for the predicted class."
"# Hello Object Detection\n\nA very basic introduction to using object detection models with OpenVINO™.\n\nThe [horizontal-text-detection-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/horizontal-text-detection-0001) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. It detects horizontal text in images and returns a blob of data in the shape of `[100, 5]`. Each detected text box is stored in the `[x_min, y_min, x_max, y_max, conf]` format, where the\n`(x_min, y_min)` are the coordinates of the top left bounding box corner, `(x_max, y_max)` are the coordinates of the bottom right bounding box corner and `conf` is the confidence for the predicted class."
],
"outputs": []
},
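The `[100, 5]` output layout described in the notebook above (`[x_min, y_min, x_max, y_max, conf]` per row) is typically filtered by confidence. A generic NumPy sketch with made-up detections rather than real model output:

```python
import numpy as np

def filter_detections(blob: np.ndarray, conf_threshold: float = 0.5):
    """Keep rows whose confidence (column 4) exceeds the threshold."""
    return blob[blob[:, 4] > conf_threshold]

blob = np.zeros((100, 5), dtype=np.float32)   # model emits 100 candidates
blob[0] = [10, 20, 110, 60, 0.92]             # a confident text box
blob[1] = [30, 40, 80, 70, 0.30]              # a low-confidence candidate

boxes = filter_detections(blob)
print(boxes)   # only the first row survives the 0.5 threshold
```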
2 changes: 1 addition & 1 deletion samples/js/node/notebooks/hello-segmentation.nnb
@@ -3,7 +3,7 @@
{
"language": "markdown",
"source": [
"# Hello Image Segmentation\n\nA very basic introduction to using segmentation models with OpenVINO™.\nIn this tutorial, a pre-trained [road-segmentation-adas-0001](https://docs.openvino.ai/2023.0/omz_models_model_road_segmentation_adas_0001.html) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark.\n"
"# Hello Image Segmentation\n\nA very basic introduction to using segmentation models with OpenVINO™.\nIn this tutorial, a pre-trained [road-segmentation-adas-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/road-segmentation-adas-0001) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark.\n"
],
"outputs": []
},
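A four-class segmentation output like the one described in the notebook above (background, road, curb, mark) is usually reduced to a per-pixel class map with an argmax over the class axis. A generic NumPy sketch, with a random toy tensor in place of real model output (the `[1, C, H, W]` layout is an assumption):

```python
import numpy as np

# toy [1, 4, H, W] logits: 4 classes over a 6x8 image
logits = np.random.randn(1, 4, 6, 8).astype(np.float32)

# pick the most likely class per pixel
class_map = logits[0].argmax(axis=0)   # shape [H, W], values in 0..3

print(class_map.shape)   # each pixel now holds a class index
```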
2 changes: 1 addition & 1 deletion samples/js/node/notebooks/hello-world.nnb
@@ -3,7 +3,7 @@
{
"language": "markdown",
"source": [
"# Hello Image Classification\n\nThis basic introduction to OpenVINO™ shows how to do inference with an image classification model.\n\n A pre-trained [MobileNetV3 model](https://docs.openvino.ai/2023.0/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used in this tutorial. For more information about how OpenVINO IR models are created, refer to the [TensorFlow to OpenVINO](../tensorflow-to-openvino/tensorflow-to-openvino.ipynb) tutorial.\n "
"# Hello Image Classification\n\nThis basic introduction to OpenVINO™ shows how to do inference with an image classification model.\n\n A pre-trained [MobileNetV3 model](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-v3-small-1.0-224-tf) from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used in this tutorial. For more information about how OpenVINO IR models are created, refer to the [TensorFlow to OpenVINO](../tensorflow-to-openvino/tensorflow-to-openvino.ipynb) tutorial.\n "
],
"outputs": []
},
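Classification output from a model like the MobileNetV3 mentioned above is typically post-processed with softmax and a top-k selection. A generic NumPy sketch (not the notebook's actual code):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a 1-D logits vector."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def top_k(probs: np.ndarray, k: int = 3):
    """Return (class_index, probability) pairs for the k most likely classes."""
    idx = np.argsort(probs)[::-1][:k]
    return [(int(i), float(probs[i])) for i in idx]

logits = np.array([2.0, 0.5, 1.0, -1.0], dtype=np.float32)  # toy 4-class output
probs = softmax(logits)
print(top_k(probs, k=2))   # class 0 ranks first for these logits
```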
2 changes: 1 addition & 1 deletion samples/python/benchmark/bert_benchmark/README.md
@@ -1,6 +1,6 @@
# Bert Benchmark Python Sample

This sample demonstrates how to estimate performance of a Bert model using Asynchronous Inference Request API. Unlike [demos](https://docs.openvino.ai/2024/omz_demos.html) this sample doesn't have configurable command line arguments. Feel free to modify sample's source code to try out different options.
This sample demonstrates how to estimate performance of a Bert model using the Asynchronous Inference Request API. Unlike [demos](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos), this sample doesn't have configurable command-line arguments. Feel free to modify the sample's source code to try out different options.

For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/bert-benchmark.html)
