bump version to v0.8.0 #1009

Merged · 1 commit · Sep 7, 2022

2 changes: 1 addition & 1 deletion CMakeLists.txt
@@ -5,7 +5,7 @@ endif ()
message(STATUS "CMAKE_INSTALL_PREFIX: ${CMAKE_INSTALL_PREFIX}")

cmake_minimum_required(VERSION 3.14)
- project(MMDeploy VERSION 0.7.0)
+ project(MMDeploy VERSION 0.8.0)

set(CMAKE_CXX_STANDARD 17)

38 changes: 19 additions & 19 deletions docs/en/02-how-to-run/prebuilt_package_windows.md
@@ -21,7 +21,7 @@

______________________________________________________________________

- This tutorial takes `mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1.zip` and `mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip` as examples to show how to use the prebuilt packages.
+ This tutorial takes `mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1.zip` and `mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip` as examples to show how to use the prebuilt packages.

The directory structure of the prebuilt package is as follows: the `dist` folder contains the model converter, and the `sdk` folder holds everything related to model inference.

@@ -80,9 +80,9 @@ In order to use `ONNX Runtime` backend, you should also do the following steps.
5. Install `mmdeploy` (Model Converter) and `mmdeploy_python` (SDK Python API).

```bash
- # download mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1.zip
- pip install .\mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\dist\mmdeploy-0.7.0-py38-none-win_amd64.whl
- pip install .\mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\sdk\python\mmdeploy_python-0.7.0-cp38-none-win_amd64.whl
+ # download mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1.zip
+ pip install .\mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1\dist\mmdeploy-0.8.0-py38-none-win_amd64.whl
+ pip install .\mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1\sdk\python\mmdeploy_python-0.8.0-cp38-none-win_amd64.whl
```
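
To confirm both wheels installed cleanly, a quick check in the Python 3.8 environment the `cp38` wheels target (a minimal sketch, not part of the original guide):

```python
# both packages should import, and the converter should report the bumped version
import mmdeploy
print(mmdeploy.__version__)  # expect '0.8.0'

import mmdeploy_python  # SDK Python API; an ImportError here usually means missing dlls
```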

:point_right: If you have installed it before, please uninstall it first.
@@ -107,9 +107,9 @@ In order to use `TensorRT` backend, you should also do the following steps.
5. Install `mmdeploy` (Model Converter) and `mmdeploy_python` (SDK Python API).

```bash
- # download mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip
- pip install .\mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\dist\mmdeploy-0.7.0-py38-none-win_amd64.whl
- pip install .\mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\python\mmdeploy_python-0.7.0-cp38-none-win_amd64.whl
+ # download mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip
+ pip install .\mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\dist\mmdeploy-0.8.0-py38-none-win_amd64.whl
+ pip install .\mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\python\mmdeploy_python-0.8.0-cp38-none-win_amd64.whl
```
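
The same sanity check applies here; in addition, if you installed the Python wheel bundled in the TensorRT-8.2.3.0 download (an assumption — that step is collapsed in this diff), you can verify the runtime the SDK expects:

```python
# verify the converter version and the TensorRT runtime version
import mmdeploy
print(mmdeploy.__version__)  # expect '0.8.0'

import tensorrt  # from the TensorRT-8.2.3.0 python wheel, if installed
print(tensorrt.__version__)  # expect '8.2.3.0'
```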

:point_right: If you have installed it before, please uninstall it first.
@@ -138,7 +138,7 @@ After preparation work, the structure of the current working directory should be

```
..
- |-- mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1
+ |-- mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1
|-- mmclassification
|-- mmdeploy
`-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
```

@@ -186,7 +186,7 @@ After installation of mmdeploy-tensorrt prebuilt package, the structure of the current working directory should be

```
..
- |-- mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0
+ |-- mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0
|-- mmclassification
|-- mmdeploy
`-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
```

@@ -249,8 +249,8 @@ The structure of the current working directory:

```
.
- |-- mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0
- |-- mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1
+ |-- mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0
+ |-- mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1
|-- mmclassification
|-- mmdeploy
|-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
```

@@ -311,15 +311,15 @@ The following describes how to use the SDK's C API for inference

1. Build examples

- Under `mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\sdk\example` directory
+ Under `mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1\sdk\example` directory

```
// Path should be modified according to the actual location
mkdir build
cd build
cmake ..\cpp -A x64 -T v142 `
-DOpenCV_DIR=C:\Deps\opencv\build\x64\vc15\lib `
- -DMMDeploy_DIR=C:\workspace\mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\sdk\lib\cmake\MMDeploy `
+ -DMMDeploy_DIR=C:\workspace\mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1\sdk\lib\cmake\MMDeploy `
-DONNXRUNTIME_DIR=C:\Deps\onnxruntime\onnxruntime-win-gpu-x64-1.8.1

cmake --build . --config Release
```

@@ -329,15 +329,15 @@

:point_right: The purpose is to ensure the exe can find the required dlls at runtime

- If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\sdk\bin`) to the `PATH`.
+ If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1\sdk\bin`) to the `PATH`.

If you choose to copy the dynamic libraries, copy the dlls in the bin directory to the directory containing the just-compiled exe (build/Release). A per-process alternative is sketched below.
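
If the SDK Python API (`mmdeploy_python`) is used from the same machine, `os.add_dll_directory` (Python 3.8+) is a per-process alternative to editing `PATH` globally. A sketch, with the extract location assumed:

```python
import os

# make the SDK runtime dlls resolvable for this process only
# (path is an assumption -- adjust to where you unzipped the package)
os.add_dll_directory(r'C:\workspace\mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1\sdk\bin')

import mmdeploy_python  # should now import without loader errors
```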

3. Inference:

It is recommended to use `CMD` here.

- Under `mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\\sdk\\example\\build\\Release` directory:
+ Under `mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1\\sdk\\example\\build\\Release` directory:

```
.\image_classification.exe cpu C:\workspace\work_dir\onnx\resnet\ C:\workspace\mmclassification\demo\demo.JPEG
```

@@ -347,15 +347,15 @@
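
The CPU demo above can also be driven from the `mmdeploy_python` wheel installed earlier; a minimal sketch mirroring the exe call (paths assumed from this tutorial's layout):

```python
import cv2
from mmdeploy_python import Classifier

img = cv2.imread(r'C:\workspace\mmclassification\demo\demo.JPEG')
# model_path is the SDK model directory produced by the conversion step
classifier = Classifier(model_path=r'C:\workspace\work_dir\onnx\resnet',
                        device_name='cpu', device_id=0)
for label_id, score in classifier(img):
    print(label_id, score)
```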

1. Build examples

- Under `mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example` directory
+ Under `mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example` directory

```
// Path should be modified according to the actual location
mkdir build
cd build
cmake ..\cpp -A x64 -T v142 `
-DOpenCV_DIR=C:\Deps\opencv\build\x64\vc15\lib `
- -DMMDeploy_DIR=C:\workspace\mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\lib\cmake\MMDeploy `
+ -DMMDeploy_DIR=C:\workspace\mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\lib\cmake\MMDeploy `
-DTENSORRT_DIR=C:\Deps\tensorrt\TensorRT-8.2.3.0 `
-DCUDNN_DIR=C:\Deps\cudnn\8.2.1
cmake --build . --config Release
```

@@ -365,15 +365,15 @@

:point_right: The purpose is to ensure the exe can find the required dlls at runtime

- If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\bin`) to the `PATH`.
+ If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\bin`) to the `PATH`.

If you choose to copy the dynamic libraries, copy the dlls in the bin directory to the directory containing the just-compiled exe (build/Release).

3. Inference

It is recommended to use `CMD` here.

- Under `mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example\\build\\Release` directory
+ Under `mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example\\build\\Release` directory

```
.\image_classification.exe cuda C:\workspace\work_dir\trt\resnet C:\workspace\mmclassification\demo\demo.JPEG
```
22 changes: 11 additions & 11 deletions docs/en/get_started.md
@@ -118,11 +118,11 @@ Take the latest precompiled package as an example; you can install it as follows:

```shell
# install MMDeploy
- wget https://github.com/open-mmlab/mmdeploy/releases/download/v0.7.0/mmdeploy-0.7.0-linux-x86_64-onnxruntime1.8.1.tar.gz
- tar -zxvf mmdeploy-0.7.0-linux-x86_64-onnxruntime1.8.1.tar.gz
- cd mmdeploy-0.7.0-linux-x86_64-onnxruntime1.8.1
- pip install dist/mmdeploy-0.7.0-py3-none-linux_x86_64.whl
- pip install sdk/python/mmdeploy_python-0.7.0-cp38-none-linux_x86_64.whl
+ wget https://github.com/open-mmlab/mmdeploy/releases/download/v0.8.0/mmdeploy-0.8.0-linux-x86_64-onnxruntime1.8.1.tar.gz
+ tar -zxvf mmdeploy-0.8.0-linux-x86_64-onnxruntime1.8.1.tar.gz
+ cd mmdeploy-0.8.0-linux-x86_64-onnxruntime1.8.1
+ pip install dist/mmdeploy-0.8.0-py3-none-linux_x86_64.whl
+ pip install sdk/python/mmdeploy_python-0.8.0-cp38-none-linux_x86_64.whl
cd ..
# install inference engine: ONNX Runtime
pip install onnxruntime==1.8.1
```

@@ -139,11 +139,11 @@ export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
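
Once `LD_LIBRARY_PATH` includes the ONNX Runtime libraries, a quick import check confirms the loader finds them (a sketch; the provider list below assumes the CPU-only 1.8.1 package):

```python
import onnxruntime as ort

print(ort.__version__)                # expect '1.8.1'
print(ort.get_available_providers())  # ['CPUExecutionProvider'] for the cpu build
```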

```shell
# install MMDeploy
- wget https://github.com/open-mmlab/mmdeploy/releases/download/v0.7.0/mmdeploy-0.7.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
- tar -zxvf mmdeploy-0.7.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
- cd mmdeploy-0.7.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0
- pip install dist/mmdeploy-0.7.0-py3-none-linux_x86_64.whl
- pip install sdk/python/mmdeploy_python-0.7.0-cp38-none-linux_x86_64.whl
+ wget https://github.com/open-mmlab/mmdeploy/releases/download/v0.8.0/mmdeploy-0.8.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
+ tar -zxvf mmdeploy-0.8.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
+ cd mmdeploy-0.8.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0
+ pip install dist/mmdeploy-0.8.0-py3-none-linux_x86_64.whl
+ pip install sdk/python/mmdeploy_python-0.8.0-cp38-none-linux_x86_64.whl
cd ..
# install inference engine: TensorRT
# !!! Download TensorRT-8.2.3.0 CUDA 11.x tar package from NVIDIA, and extract it to the current directory
```

@@ -232,7 +232,7 @@ result = inference_model(
You can directly run MMDeploy demo programs in the precompiled package to get inference results.

```shell
- cd mmdeploy-0.7.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0
+ cd mmdeploy-0.8.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0
# run python demo
python sdk/example/python/object_detection.py cuda ../mmdeploy_model/faster-rcnn ../mmdetection/demo/demo.jpg
# run C/C++ demo
```
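
For reference, the core of that `object_detection.py` demo is only a few lines against the SDK Python API (a sketch; paths are the ones used above):

```python
import cv2
from mmdeploy_python import Detector

img = cv2.imread('../mmdetection/demo/demo.jpg')
detector = Detector(model_path='../mmdeploy_model/faster-rcnn',
                    device_name='cuda', device_id=0)
bboxes, labels, masks = detector(img)  # (N, 5) boxes with scores, (N,) labels
print(bboxes[:3], labels[:3])
```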
38 changes: 19 additions & 19 deletions docs/zh_cn/02-how-to-run/prebuilt_package_windows.md
@@ -23,7 +23,7 @@ ______________________________________________________________________

Currently, `MMDeploy` provides two kinds of prebuilt packages for the `Windows` platform, `TensorRT` and `ONNX Runtime`, which can be obtained from [Releases](https://github.com/open-mmlab/mmdeploy/releases).

- This tutorial takes `mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1.zip` and `mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip` as examples to show how to use the prebuilt packages.
+ This tutorial takes `mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1.zip` and `mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip` as examples to show how to use the prebuilt packages.

To help users get started quickly, this tutorial takes a classification model (mmclassification) as an example to demonstrate both prebuilt packages.

@@ -88,9 +88,9 @@ ______________________________________________________________________
5. Install the prebuilt packages of `mmdeploy` (model conversion) and `mmdeploy_python` (SDK Python API for model inference)

```bash
- # download mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1.zip first
- pip install .\mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\dist\mmdeploy-0.7.0-py38-none-win_amd64.whl
- pip install .\mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\sdk\python\mmdeploy_python-0.7.0-cp38-none-win_amd64.whl
+ # download mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1.zip first
+ pip install .\mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1\dist\mmdeploy-0.8.0-py38-none-win_amd64.whl
+ pip install .\mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1\sdk\python\mmdeploy_python-0.8.0-cp38-none-win_amd64.whl
```

:point_right: If you have installed these packages before, uninstall them first and then reinstall.
@@ -115,9 +115,9 @@
5. Install the prebuilt packages of `mmdeploy` (model conversion) and `mmdeploy_python` (SDK Python API for model inference)

```bash
- # download mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip first
- pip install .\mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\dist\mmdeploy-0.7.0-py38-none-win_amd64.whl
- pip install .\mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\python\mmdeploy_python-0.7.0-cp38-none-win_amd64.whl
+ # download mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip first
+ pip install .\mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\dist\mmdeploy-0.8.0-py38-none-win_amd64.whl
+ pip install .\mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\python\mmdeploy_python-0.8.0-cp38-none-win_amd64.whl
```

:point_right: If you have installed these packages before, uninstall them first and then reinstall.
@@ -146,7 +146,7 @@ ______________________________________________________________________

```
..
- |-- mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1
+ |-- mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1
|-- mmclassification
|-- mmdeploy
`-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
```

@@ -194,7 +194,7 @@ export2SDK(deploy_cfg, model_cfg, work_dir, pth=model_checkpoint)

```
..
- |-- mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0
+ |-- mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0
|-- mmclassification
|-- mmdeploy
`-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
```

@@ -257,8 +257,8 @@ export2SDK(deploy_cfg, model_cfg, work_dir, pth=model_checkpoint)

```
.
- |-- mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0
- |-- mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1
+ |-- mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0
+ |-- mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1
|-- mmclassification
|-- mmdeploy
|-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
```

@@ -327,15 +327,15 @@ python .\mmdeploy\demo\python\image_classification.py cpu .\work_dir\onnx\resnet

1. Build the examples

- Under the `mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\sdk\example` directory
+ Under the `mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1\sdk\example` directory

```
// Modify the paths according to their actual locations
mkdir build
cd build
cmake ..\cpp -A x64 -T v142 `
-DOpenCV_DIR=C:\Deps\opencv\build\x64\vc15\lib `
- -DMMDeploy_DIR=C:\workspace\mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\sdk\lib\cmake\MMDeploy `
+ -DMMDeploy_DIR=C:\workspace\mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1\sdk\lib\cmake\MMDeploy `
-DONNXRUNTIME_DIR=C:\Deps\onnxruntime\onnxruntime-win-gpu-x64-1.8.1

cmake --build . --config Release
```

@@ -345,15 +345,15 @@

:point_right: The purpose is to ensure the exe can find the required dlls at runtime

- If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\sdk\bin`) to `PATH`, following the same procedure used for onnxruntime.
+ If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1\sdk\bin`) to `PATH`, following the same procedure used for onnxruntime.

If you choose to copy the dynamic libraries, copy the dlls in the bin directory to the directory containing the just-compiled exe (build/Release).

3. Inference:

CMD is recommended here: if the exe cannot find the required dlls at runtime, a dialog box will pop up

- Under the mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\\sdk\\example\\build\\Release directory:
+ Under the mmdeploy-0.8.0-windows-amd64-onnxruntime1.8.1\\sdk\\example\\build\\Release directory:

```
.\image_classification.exe cpu C:\workspace\work_dir\onnx\resnet\ C:\workspace\mmclassification\demo\demo.JPEG
```

@@ -363,15 +363,15 @@

1. Build the examples

- Under the mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example directory
+ Under the mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example directory

```
// Modify the paths according to their actual locations on your drive
mkdir build
cd build
cmake ..\cpp -A x64 -T v142 `
-DOpenCV_DIR=C:\Deps\opencv\build\x64\vc15\lib `
- -DMMDeploy_DIR=C:\workspace\mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\lib\cmake\MMDeploy `
+ -DMMDeploy_DIR=C:\workspace\mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\lib\cmake\MMDeploy `
-DTENSORRT_DIR=C:\Deps\tensorrt\TensorRT-8.2.3.0 `
-DCUDNN_DIR=C:\Deps\cudnn\8.2.1
cmake --build . --config Release
```

@@ -381,15 +381,15 @@

:point_right: The purpose is to ensure the exe can find the required dlls at runtime

- If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\bin`) to `PATH`, following the same procedure used for onnxruntime.
+ If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\bin`) to `PATH`, following the same procedure used for onnxruntime.

If you choose to copy the dynamic libraries, copy the dlls in the bin directory to the directory containing the just-compiled exe (build/Release).

3. Inference

CMD is recommended here: if the exe cannot find the required dlls at runtime, a dialog box will pop up

- Under the mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example\\build\\Release directory:
+ Under the mmdeploy-0.8.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example\\build\\Release directory:

```
.\image_classification.exe cuda C:\workspace\work_dir\trt\resnet C:\workspace\mmclassification\demo\demo.JPEG
```
22 changes: 11 additions & 11 deletions docs/zh_cn/get_started.md
@@ -113,11 +113,11 @@ mim install mmcv-full

```shell
# install the MMDeploy ONNX Runtime custom ops library and the inference SDK
- wget https://github.com/open-mmlab/mmdeploy/releases/download/v0.7.0/mmdeploy-0.7.0-linux-x86_64-onnxruntime1.8.1.tar.gz
- tar -zxvf mmdeploy-0.7.0-linux-x86_64-onnxruntime1.8.1.tar.gz
- cd mmdeploy-0.7.0-linux-x86_64-onnxruntime1.8.1
- pip install dist/mmdeploy-0.7.0-py3-none-linux_x86_64.whl
- pip install sdk/python/mmdeploy_python-0.7.0-cp38-none-linux_x86_64.whl
+ wget https://github.com/open-mmlab/mmdeploy/releases/download/v0.8.0/mmdeploy-0.8.0-linux-x86_64-onnxruntime1.8.1.tar.gz
+ tar -zxvf mmdeploy-0.8.0-linux-x86_64-onnxruntime1.8.1.tar.gz
+ cd mmdeploy-0.8.0-linux-x86_64-onnxruntime1.8.1
+ pip install dist/mmdeploy-0.8.0-py3-none-linux_x86_64.whl
+ pip install sdk/python/mmdeploy_python-0.8.0-cp38-none-linux_x86_64.whl
cd ..
# install the inference engine: ONNX Runtime
pip install onnxruntime==1.8.1
```

@@ -134,11 +134,11 @@ export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH

```shell
# install the MMDeploy TensorRT custom ops library and the inference SDK
- wget https://github.com/open-mmlab/mmdeploy/releases/download/v0.7.0/mmdeploy-0.7.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
- tar -zxvf mmdeploy-0.7.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
- cd mmdeploy-0.7.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0
- pip install dist/mmdeploy-0.7.0-py3-none-linux_x86_64.whl
- pip install sdk/python/mmdeploy_python-0.7.0-cp38-none-linux_x86_64.whl
+ wget https://github.com/open-mmlab/mmdeploy/releases/download/v0.8.0/mmdeploy-0.8.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
+ tar -zxvf mmdeploy-0.8.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
+ cd mmdeploy-0.8.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0
+ pip install dist/mmdeploy-0.8.0-py3-none-linux_x86_64.whl
+ pip install sdk/python/mmdeploy_python-0.8.0-cp38-none-linux_x86_64.whl
cd ..
# install the inference engine: TensorRT
# !!! Download the TensorRT-8.2.3.0 CUDA 11.x tar package from NVIDIA and extract it to the current directory
```

@@ -226,7 +226,7 @@ result = inference_model(
You can directly run the demo programs in the precompiled package, feeding them an SDK model and an image, to perform inference and view the results.

```shell
- cd mmdeploy-0.7.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0
+ cd mmdeploy-0.8.0-linux-x86_64-cuda11.1-tensorrt8.2.3.0
# run the python demo
python sdk/example/python/object_detection.py cuda ../mmdeploy_model/faster-rcnn ../mmdetection/demo/demo.jpg
# run the C/C++ demo
```
2 changes: 1 addition & 1 deletion mmdeploy/version.py
@@ -1,7 +1,7 @@
# Copyright (c) OpenMMLab. All rights reserved.
from typing import Tuple

- __version__ = '0.7.0'
+ __version__ = '0.8.0'
short_version = __version__


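The collapsed remainder of `version.py` is hinted at by the `from typing import Tuple` context: OpenMMLab projects typically derive a comparable tuple from `__version__`, which is why bumping the string is the only change needed here. A sketch of what such a helper usually looks like (names are assumptions, since those lines are not shown in this diff):

```python
from typing import Tuple

def parse_version_info(version_str: str) -> Tuple:
    """Turn '0.8.0' (or '0.8.0rc1') into a tuple usable for comparisons."""
    version_info = []
    for part in version_str.split('.'):
        if part.isdigit():
            version_info.append(int(part))
        elif 'rc' in part:
            patch, rc = part.split('rc')
            version_info.append(int(patch))
            version_info.append(f'rc{rc}')
    return tuple(version_info)

version_info = parse_version_info('0.8.0')  # -> (0, 8, 0)
```
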
2 changes: 1 addition & 1 deletion tools/package_tools/packaging/mmdeploy_python/version.py
@@ -1,2 +1,2 @@
# Copyright (c) OpenMMLab. All rights reserved.
- __version__ = '0.7.0'
+ __version__ = '0.8.0'