[Enhancement] Improve get_started documents and bump version to 0.7.0 (open-mmlab#813)

* simplify commands in get_started

* add installation commands for Windows

* fix typo

* limit markdown and sphinx_markdown_tables version

* adopt html <details open> tag

* bump mmdeploy version

* bump mmdeploy version

* update get_started

* update get_started

* use python3.8 instead of python3.7

* remove duplicate section

* resolve issue open-mmlab#856

* update according to review results

* add reference to prebuilt_package_windows.md

* fix error when build sdk demos
lvhan028 authored Aug 4, 2022
1 parent ef56036 commit 83b11bc
Showing 18 changed files with 419 additions and 520 deletions.
2 changes: 1 addition & 1 deletion CMakeLists.txt
@@ -5,7 +5,7 @@ endif ()
message(STATUS "CMAKE_INSTALL_PREFIX: ${CMAKE_INSTALL_PREFIX}")

cmake_minimum_required(VERSION 3.14)
- project(MMDeploy VERSION 0.6.0)
+ project(MMDeploy VERSION 0.7.0)

set(CMAKE_CXX_STANDARD 17)

7 changes: 2 additions & 5 deletions docker/CPU/Dockerfile
@@ -93,6 +93,7 @@ RUN git clone https://github.com/open-mmlab/mmdeploy.git &&\
ENV LD_LIBRARY_PATH="/root/workspace/mmdeploy/build/lib:/opt/intel/openvino/deployment_tools/ngraph/lib:/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64:${LD_LIBRARY_PATH}"
RUN cd mmdeploy && rm -rf build/CM* && mkdir -p build && cd build && cmake .. \
-DMMDEPLOY_BUILD_SDK=ON \
+ -DMMDEPLOY_BUILD_EXAMPLES=ON \
-DCMAKE_CXX_COMPILER=g++-7 \
-DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
-Dncnn_DIR=/root/workspace/ncnn/build/install/lib/cmake/ncnn \
@@ -102,9 +103,5 @@ RUN cd mmdeploy && rm -rf build/CM* && mkdir -p build && cd build && cmake .. \
-DMMDEPLOY_TARGET_BACKENDS="ort;ncnn;openvino" \
-DMMDEPLOY_CODEBASES=all &&\
cmake --build . -- -j$(nproc) && cmake --install . &&\
- cd install/example && mkdir -p build && cd build &&\
- cmake .. -DMMDeploy_DIR=/root/workspace/mmdeploy/build/install/lib/cmake/MMDeploy \
- -DInferenceEngine_DIR=/opt/intel/openvino/deployment_tools/inference_engine/share \
- -Dncnn_DIR=/root/workspace/ncnn/build/install/lib/cmake/ncnn &&\
- cmake --build . && export SPDLOG_LEVEL=warn &&\
+ export SPDLOG_LEVEL=warn &&\
if [ -z ${VERSION} ] ; then echo "Built MMDeploy master for CPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for CPU devices successfully!" ; fi
5 changes: 2 additions & 3 deletions docker/GPU/Dockerfile
@@ -79,6 +79,7 @@ RUN cd /root/workspace/mmdeploy &&\
mkdir -p build && cd build &&\
cmake .. \
-DMMDEPLOY_BUILD_SDK=ON \
+ -DMMDEPLOY_BUILD_EXAMPLES=ON \
-DCMAKE_CXX_COMPILER=g++ \
-Dpplcv_DIR=/root/workspace/ppl.cv/cuda-build/install/lib/cmake/ppl \
-DTENSORRT_DIR=${TENSORRT_DIR} \
@@ -88,9 +89,7 @@ RUN cd /root/workspace/mmdeploy &&\
-DMMDEPLOY_TARGET_BACKENDS="ort;trt" \
-DMMDEPLOY_CODEBASES=all &&\
make -j$(nproc) && make install &&\
- cd install/example && mkdir -p build && cd build &&\
- cmake -DMMDeploy_DIR=/root/workspace/mmdeploy/build/install/lib/cmake/MMDeploy .. &&\
- make -j$(nproc) && export SPDLOG_LEVEL=warn &&\
+ export SPDLOG_LEVEL=warn &&\
if [ -z ${VERSION} ] ; then echo "Built MMDeploy master for GPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for GPU devices successfully!" ; fi

ENV LD_LIBRARY_PATH="/root/workspace/mmdeploy/build/lib:${BACKUP_LD_LIBRARY_PATH}"
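With `-DMMDEPLOY_BUILD_EXAMPLES=ON`, the SDK demos are compiled and installed together with the libraries, so the separate example build step disappears from both images. For reference, a minimal sketch of building the images from the repository root (the image tags are assumptions; the optional `VERSION` build argument only feeds the success message in the Dockerfiles above):

```bash
# CPU image
docker build docker/CPU/ -t mmdeploy:cpu --build-arg VERSION=0.7.0

# GPU image
docker build docker/GPU/ -t mmdeploy:gpu --build-arg VERSION=0.7.0
```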
23 changes: 3 additions & 20 deletions docs/en/01-how-to-build/android.md
@@ -7,8 +7,7 @@
- [Install Dependencies for SDK](#install-dependencies-for-sdk)
- [Build MMDeploy](#build-mmdeploy)
- [Build Options Spec](#build-options-spec)
- - [Build SDK](#build-sdk)
- - [Build Demo](#build-demo)
+ - [Build SDK and Demos](#build-sdk-and-demos)

______________________________________________________________________

@@ -174,7 +173,7 @@ make -j$(nproc) install
</tbody>
</table>

- #### Build SDK
+ #### Build SDK and Demos

MMDeploy provides a recipe as shown below for building the SDK with ncnn as the inference engine for Android.

@@ -186,6 +185,7 @@ MMDeploy provides a recipe as shown below for building the SDK with ncnn as the inference engine
cmake .. \
-DMMDEPLOY_BUILD_SDK=ON \
-DMMDEPLOY_BUILD_SDK_JAVA_API=ON \
+ -DMMDEPLOY_BUILD_EXAMPLES=ON \
-DOpenCV_DIR=${OPENCV_ANDROID_SDK_DIR}/sdk/native/jni/abi-${ANDROID_ABI} \
-Dncnn_DIR=${NCNN_DIR}/build_${ANDROID_ABI}/install/lib/cmake/ncnn \
-DMMDEPLOY_TARGET_BACKENDS=ncnn \
@@ -198,20 +198,3 @@ MMDeploy provides a recipe as shown below for building the SDK with ncnn as the inference engine

make -j$(nproc) && make install
```

- #### Build Demo
-
- ```Bash
- export ANDROID_ABI=arm64-v8a
-
- cd ${MMDEPLOY_DIR}/build_${ANDROID_ABI}/install/example
- mkdir -p build && cd build
- cmake .. \
- -DOpenCV_DIR=${OPENCV_ANDROID_SDK_DIR}/sdk/native/jni/abi-${ANDROID_ABI} \
- -Dncnn_DIR=${NCNN_DIR}/build_${ANDROID_ABI}/install/lib/cmake/ncnn \
- -DMMDeploy_DIR=${MMDEPLOY_DIR}/build_${ANDROID_ABI}/install/lib/cmake/MMDeploy \
- -DCMAKE_TOOLCHAIN_FILE=${NDK_PATH}/build/cmake/android.toolchain.cmake \
- -DANDROID_ABI=${ANDROID_ABI} \
- -DANDROID_PLATFORM=android-30
- make -j$(nproc)
- ```
12 changes: 2 additions & 10 deletions docs/en/01-how-to-build/jetsons.md
@@ -251,13 +251,14 @@ It takes about 5 minutes to install model converter on a Jetson Nano. So, please be patient until the installation is complete.

### Install C/C++ Inference SDK

- 1. Build SDK Libraries
+ Build the SDK libraries and demos as below:

```shell
mkdir -p build && cd build
cmake .. \
-DMMDEPLOY_BUILD_SDK=ON \
-DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
+ -DMMDEPLOY_BUILD_EXAMPLES=ON \
-DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
-DMMDEPLOY_TARGET_BACKENDS="trt" \
-DMMDEPLOY_CODEBASES=all \
@@ -269,15 +270,6 @@ make -j$(nproc) && make install
It takes about 9 minutes to build SDK libraries on a Jetson Nano. So, please be patient until the installation is complete.
```

- 2. Build SDK demos
-
- ```shell
- cd ${MMDEPLOY_DIR}/build/install/example
- mkdir -p build && cd build
- cmake .. -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy
- make -j$(nproc)
- ```

### Run a Demo

#### Object Detection demo
16 changes: 4 additions & 12 deletions docs/en/01-how-to-build/linux-x86_64.md
@@ -11,8 +11,7 @@
- [Build Model Converter](#build-model-converter)
- [Build Custom Ops](#build-custom-ops)
- [Install Model Converter](#install-model-converter)
- - [Build SDK](#build-sdk)
- - [Build Demo](#build-demo)
+ - [Build SDK and Demo](#build-sdk-and-demo)

______________________________________________________________________

@@ -395,7 +394,7 @@ pip install -e .
To use optional dependencies, install them manually with `pip install -r requirements/optional.txt` or specify desired extras when calling `pip` (e.g. `pip install -e .[optional]`).
Valid keys for the extras field are: `all`, `tests`, `build`, `optional`.
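For example, either of the following pulls in the optional dependencies (quoting the extras key keeps the command portable across shells):

```bash
# install the pinned optional requirements directly
pip install -r requirements/optional.txt

# or resolve them through the `optional` extras key
pip install -e ".[optional]"
```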

- ### Build SDK
+ ### Build SDK and Demo

MMDeploy provides two recipes, as shown below, for building the SDK with ONNXRuntime and TensorRT as the inference engine, respectively.
You can also activate other engines besides these two.
@@ -409,6 +408,7 @@ You can also activate other engines besides these two.
-DCMAKE_CXX_COMPILER=g++-7 \
-DMMDEPLOY_BUILD_SDK=ON \
-DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
+ -DMMDEPLOY_BUILD_EXAMPLES=ON \
-DMMDEPLOY_TARGET_DEVICES=cpu \
-DMMDEPLOY_TARGET_BACKENDS=ort \
-DMMDEPLOY_CODEBASES=all \
@@ -426,6 +426,7 @@
-DCMAKE_CXX_COMPILER=g++-7 \
-DMMDEPLOY_BUILD_SDK=ON \
-DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
+ -DMMDEPLOY_BUILD_EXAMPLES=ON \
-DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
-DMMDEPLOY_TARGET_BACKENDS=trt \
-DMMDEPLOY_CODEBASES=all \
@@ -435,12 +436,3 @@

make -j$(nproc) && make install
```

- ### Build Demo
-
- ```Bash
- cd ${MMDEPLOY_DIR}/build/install/example
- mkdir -p build && cd build
- cmake .. -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy
- make -j$(nproc)
- ```
25 changes: 6 additions & 19 deletions docs/en/01-how-to-build/windows.md
@@ -12,14 +12,11 @@
- [Build Model Converter](#build-model-converter)
- [Build Custom Ops](#build-custom-ops)
- [Install Model Converter](#install-model-converter)
- - [Build SDK](#build-sdk)
- - [Build Demo](#build-demo)
+ - [Build SDK and Demos](#build-sdk-and-demos)
- [Note](#note)

______________________________________________________________________

- Currently, MMDeploy only provides build-from-source method for windows platform. Prebuilt package will be released in the future.

## Build From Source

All the commands listed in the following chapters are verified on **Windows 10**.
@@ -315,7 +312,7 @@ pip install -e .
To use optional dependencies, install them manually with `pip install -r requirements/optional.txt` or specify desired extras when calling `pip` (e.g. `pip install -e .[optional]`).
Valid keys for the extras field are: `all`, `tests`, `build`, `optional`.

- #### Build SDK
+ #### Build SDK and Demos

MMDeploy provides two recipes, as shown below, for building the SDK with ONNXRuntime and TensorRT as the inference engine, respectively.
You can also activate other engines besides these two.
@@ -328,6 +325,8 @@ You can also activate other engines besides these two.
cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 `
-DMMDEPLOY_BUILD_SDK=ON `
+ -DMMDEPLOY_BUILD_EXAMPLES=ON `
+ -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON `
-DMMDEPLOY_TARGET_DEVICES="cpu" `
-DMMDEPLOY_TARGET_BACKENDS="ort" `
-DMMDEPLOY_CODEBASES="all" `
@@ -345,6 +344,8 @@
cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 `
-DMMDEPLOY_BUILD_SDK=ON `
+ -DMMDEPLOY_BUILD_EXAMPLES=ON `
+ -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON `
-DMMDEPLOY_TARGET_DEVICES="cuda" `
-DMMDEPLOY_TARGET_BACKENDS="trt" `
-DMMDEPLOY_CODEBASES="all" `
@@ -356,20 +357,6 @@
cmake --install . --config Release
```

- #### Build Demo
-
- ```PowerShell
- cd $env:MMDEPLOY_DIR\build\install\example
- mkdir build -ErrorAction SilentlyContinue
- cd build
- cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 `
- -DMMDeploy_DIR="$env:MMDEPLOY_DIR/build/install/lib/cmake/MMDeploy"
- cmake --build . --config Release -- /m
- $env:path = "$env:MMDEPLOY_DIR/build/install/bin;" + $env:path
- ```

### Note

1. Release / Debug libraries cannot be mixed. If MMDeploy is built in Release mode, all of its dependent third-party libraries have to be built in Release mode too, and vice versa.
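In practice this means passing the same configuration to every build and install step; for instance, staying on Release throughout (the demos enabled by `-DMMDEPLOY_BUILD_EXAMPLES=ON` inherit the same configuration):

```PowerShell
cmake --build . --config Release -- /m
cmake --install . --config Release
```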
42 changes: 21 additions & 21 deletions docs/en/02-how-to-run/prebuilt_package_windows.md
@@ -21,7 +21,7 @@

______________________________________________________________________

- This tutorial takes `mmdeploy-0.6.0-windows-amd64-onnxruntime1.8.1.zip` and `mmdeploy-0.6.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip` as examples to show how to use the prebuilt packages.
+ This tutorial takes `mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1.zip` and `mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip` as examples to show how to use the prebuilt packages.

The directory structure of the prebuilt package is as follows, where the `dist` folder contains the model converter and the `sdk` folder contains everything related to model inference.

Expand Down Expand Up @@ -80,9 +80,9 @@ In order to use `ONNX Runtime` backend, you should also do the following steps.
5. Install `mmdeploy` (Model Converter) and `mmdeploy_python` (SDK Python API).

```bash
- # download mmdeploy-0.6.0-windows-amd64-onnxruntime1.8.1.zip
- pip install .\mmdeploy-0.6.0-windows-amd64-onnxruntime1.8.1\dist\mmdeploy-0.6.0-py38-none-win_amd64.whl
- pip install .\mmdeploy-0.6.0-windows-amd64-onnxruntime1.8.1\sdk\python\mmdeploy_python-0.6.0-cp38-none-win_amd64.whl
+ # download mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1.zip
+ pip install .\mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\dist\mmdeploy-0.7.0-py38-none-win_amd64.whl
+ pip install .\mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\sdk\python\mmdeploy_python-0.7.0-cp38-none-win_amd64.whl
```

:point_right: If you have installed it before, please uninstall it first.
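A minimal sketch of that cleanup, with package names matching the wheels above:

```bash
pip uninstall -y mmdeploy mmdeploy_python
```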
@@ -107,9 +107,5 @@ In order to use `TensorRT` backend, you should also do the following steps.
5. Install `mmdeploy` (Model Converter) and `mmdeploy_python` (SDK Python API).

```bash
- # download mmdeploy-0.6.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip
- pip install .\mmdeploy-0.6.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\dist\mmdeploy-0.6.0-py38-none-win_amd64.whl
- pip install .\mmdeploy-0.6.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\python\mmdeploy_python-0.6.0-cp38-none-win_amd64.whl
+ # download mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip
+ pip install .\mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\dist\mmdeploy-0.7.0-py38-none-win_amd64.whl
+ pip install .\mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\python\mmdeploy_python-0.7.0-cp38-none-win_amd64.whl
```

:point_right: If you have installed it before, please uninstall it first.
@@ -138,7 +138,7 @@ After preparation work, the structure of the current working directory should be

```
..
- |-- mmdeploy-0.6.0-windows-amd64-onnxruntime1.8.1
+ |-- mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1
|-- mmclassification
|-- mmdeploy
`-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
@@ -186,7 +186,7 @@ After installation of mmdeploy-tensorrt prebuilt package, the structure of the current working directory should be

```
..
- |-- mmdeploy-0.6.0-windows-amd64-cuda11.1-tensorrt8.2.3.0
+ |-- mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0
|-- mmclassification
|-- mmdeploy
`-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
@@ -249,8 +249,8 @@ The structure of the current working directory:

```
.
- |-- mmdeploy-0.6.0-windows-amd64-cuda11.1-tensorrt8.2.3.0
- |-- mmdeploy-0.6.0-windows-amd64-onnxruntime1.8.1
+ |-- mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0
+ |-- mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1
|-- mmclassification
|-- mmdeploy
|-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
@@ -294,13 +294,13 @@ The following describes how to use the SDK's Python API for inference
#### ONNXRuntime

```bash
- python .\mmdeploy\demo\python\image_classification.py .\work_dir\onnx\resnet\ .\mmclassification\demo\demo.JPEG
+ python .\mmdeploy\demo\python\image_classification.py cpu .\work_dir\onnx\resnet\ .\mmclassification\demo\demo.JPEG
```

#### TensorRT

```
- python .\mmdeploy\demo\python\image_classification.py .\work_dir\trt\resnet\ .\mmclassification\demo\demo.JPEG --device-name cuda
+ python .\mmdeploy\demo\python\image_classification.py cuda .\work_dir\trt\resnet\ .\mmclassification\demo\demo.JPEG
```
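Both commands are thin wrappers around the SDK's Python API. A minimal sketch of what the demo does, assuming the `Classifier` class exported by `mmdeploy_python` (the exact constructor signature is an assumption for this release):

```python
import cv2
from mmdeploy_python import Classifier

# read the test image with OpenCV
img = cv2.imread('./mmclassification/demo/demo.JPEG')

# build a classifier from the converted SDK model directory; device_name is 'cpu' or 'cuda'
classifier = Classifier(model_path='./work_dir/onnx/resnet', device_name='cpu', device_id=0)

# inference returns (label_id, score) pairs for the top predictions
for label_id, score in classifier(img):
    print(label_id, score)
```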

### C SDK
@@ -311,15 +311,15 @@ The following describes how to use the SDK's C API for inference

1. Build examples

- Under `mmdeploy-0.6.0-windows-amd64-onnxruntime1.8.1\sdk\example` directory
+ Under `mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\sdk\example` directory

```
// Path should be modified according to the actual location
mkdir build
cd build
cmake .. -A x64 -T v142 `
-DOpenCV_DIR=C:\Deps\opencv\build\x64\vc15\lib `
- -DMMDeploy_DIR=C:\workspace\mmdeploy-0.6.0-windows-amd64-onnxruntime1.8.1\sdk\lib\cmake\MMDeploy `
+ -DMMDeploy_DIR=C:\workspace\mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\sdk\lib\cmake\MMDeploy `
-DONNXRUNTIME_DIR=C:\Deps\onnxruntime\onnxruntime-win-gpu-x64-1.8.1
cmake --build . --config Release
```
@@ -329,15 +329,15 @@ The following describes how to use the SDK's C API for inference

:point_right: The purpose is to let the exe find the relevant DLLs

- If choose to add environment variables, add the runtime libraries path of `mmdeploy` (`mmdeploy-0.6.0-windows-amd64-onnxruntime1.8.1\sdk\bin`) to the `PATH`.
+ If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\sdk\bin`) to the `PATH`.

If you choose to copy the dynamic libraries, copy the DLLs in the bin directory to the same directory as the just-compiled exe (build/Release).
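For the environment-variable route, a per-session sketch in PowerShell (the workspace path is illustrative; for the TensorRT package, substitute its `sdk\bin` path):

```PowerShell
$env:path = "C:\workspace\mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\sdk\bin;" + $env:path
```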

3. Inference:

It is recommended to use `CMD` here.

- Under `mmdeploy-0.6.0-windows-amd64-onnxruntime1.8.1\sdk\example\build\Release` directory:
+ Under `mmdeploy-0.7.0-windows-amd64-onnxruntime1.8.1\sdk\example\build\Release` directory:

```
.\image_classification.exe cpu C:\workspace\work_dir\onnx\resnet\ C:\workspace\mmclassification\demo\demo.JPEG
```
@@ -347,15 +347,15 @@ The following describes how to use the SDK's C API for inference

1. Build examples

- Under `mmdeploy-0.6.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\example` directory
+ Under `mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\example` directory

```
// Path should be modified according to the actual location
mkdir build
cd build
cmake .. -A x64 -T v142 `
-DOpenCV_DIR=C:\Deps\opencv\build\x64\vc15\lib `
- -DMMDeploy_DIR=C:\workspace\mmdeploy-0.6.0-windows-amd64-cuda11.1-tensorrt8 2.3.0\sdk\lib\cmake\MMDeploy `
+ -DMMDeploy_DIR=C:\workspace\mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\lib\cmake\MMDeploy `
-DTENSORRT_DIR=C:\Deps\tensorrt\TensorRT-8.2.3.0 `
-DCUDNN_DIR=C:\Deps\cudnn\8.2.1
cmake --build . --config Release
```
@@ -365,15 +365,15 @@ The following describes how to use the SDK's C API for inference

:point_right: The purpose is to let the exe find the relevant DLLs

- If choose to add environment variables, add the runtime libraries path of `mmdeploy` (`mmdeploy-0.6.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\bin`) to the `PATH`.
+ If you choose to add environment variables, add the runtime library path of `mmdeploy` (`mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\bin`) to the `PATH`.

If you choose to copy the dynamic libraries, copy the DLLs in the bin directory to the same directory as the just-compiled exe (build/Release).

3. Inference

It is recommended to use `CMD` here.

- Under `mmdeploy-0.6.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\example\build\Release` directory
+ Under `mmdeploy-0.7.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\example\build\Release` directory

```
.\image_classification.exe cuda C:\workspace\work_dir\trt\resnet C:\workspace\mmclassification\demo\demo.JPEG
```