- cmake

  Make sure your cmake version is >= 3.14.0. The script below shows how to install cmake 3.20.0. You can find more versions here.

  ```bash
  wget https://github.com/Kitware/CMake/releases/download/v3.20.0/cmake-3.20.0-linux-x86_64.tar.gz
  tar -xzvf cmake-3.20.0-linux-x86_64.tar.gz
  sudo ln -sf $(pwd)/cmake-3.20.0-linux-x86_64/bin/* /usr/bin/
  ```
- GCC 7+

  MMDeploy requires compilers that support C++17.

  ```bash
  # Add repository if ubuntu < 18.04
  sudo add-apt-repository ppa:ubuntu-toolchain-r/test
  sudo apt-get update
  sudo apt-get install gcc-7
  sudo apt-get install g++-7
  ```
NAME | INSTALLATION |
---|---|
conda | Please install conda according to the official guide. Create a conda virtual environment and activate it. |
PyTorch (>=1.8.0) | Install PyTorch>=1.8.0 by following the official instructions. Make sure the CUDA version required by PyTorch matches the CUDA version installed on your host. |
mmcv-full | Install mmcv-full as shown in the example below. Refer to the guide for details. |
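For reference, a minimal set of commands covering the three rows above might look like the following. The Python, PyTorch, and mmcv-full versions are illustrative only; pick ones that match the CUDA toolkit on your host.

```bash
# create and activate a conda virtual environment (python version is an example)
conda create -n mmdeploy python=3.8 -y
conda activate mmdeploy

# install PyTorch >= 1.8.0 (example: 1.8.0 built against CUDA 11.1)
conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=11.1 -c pytorch -c conda-forge

# install mmcv-full matching the PyTorch and CUDA versions above
pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.8/index.html
```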
You can skip this chapter if you are only interested in the model converter.

NAME | INSTALLATION |
---|---|
OpenCV (>=3.0) | On Ubuntu >= 18.04, OpenCV can be installed from the system package manager, as shown in the example below. |
pplcv | A high-performance image processing library of openPPL. It is optional and only needed when the CUDA platform is targeted. MMDeploy currently supports v0.6.2, which has to be downloaded with git clone. |
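A sketch of the two installations above, assuming Ubuntu >= 18.04 and that pplcv is only wanted for CUDA builds; the repository URL and tag follow the version mentioned in the table:

```bash
# OpenCV from the system package manager
sudo apt-get install libopencv-dev

# pplcv: clone the openPPL image processing library and build its CUDA part
git clone https://github.com/openppl-public/ppl.cv.git
cd ppl.cv
export PPLCV_DIR=$(pwd)
git checkout tags/v0.6.2 -b v0.6.2
./build.sh cuda
```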
MMDeploy's model converter and SDK share the same inference engines.
Select the inference engines you are interested in and install them by following the commands given below.
NAME | PACKAGE | INSTALLATION |
---|---|---|
ONNXRuntime | onnxruntime (>=1.8.1) | 1. Install the onnxruntime python package (see the example after this table). |
TensorRT | TensorRT | 1. Log in to NVIDIA and download the TensorRT tar file that matches the CPU architecture and CUDA version you are using from here, then follow the guide to install TensorRT. 2. As an example, to install TensorRT 8.2 GA Update 2 for Linux x86_64 and CUDA 11.x, click here to download CUDA 11.x TensorRT 8.2.3.0, then install it and the other dependencies as shown after this table. |
cuDNN | cuDNN | 1. Download the cuDNN build that matches the CPU architecture, CUDA version and TensorRT version you are using from the cuDNN Archive. The TensorRT installation example above requires cuDNN 8.2, so download CUDA 11.x cuDNN 8.2. 2. Extract the compressed file and set the environment variables as shown after this table. |
PPL.NN | ppl.nn | 1. Please follow the guide to build ppl.nn and install pyppl. 2. Export pplnn's root path to an environment variable. |
OpenVINO | openvino | 1. Install the OpenVINO package. |
ncnn | ncnn | 1. Download and build ncnn according to its wiki. Make sure to enable `-DNCNN_PYTHON=ON` in your build command. 2. Export ncnn's root path to an environment variable. |
TorchScript | libtorch | 1. Download libtorch from here. Please note that only the Pre-cxx11 ABI and version 1.8.1+ on the Linux platform are supported for now. Previous versions of libtorch can be found in the issue comment. 2. Take libtorch 1.8.1+cu111 as an example; install it as shown after this table. |
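The steps above typically boil down to commands like the following sketch. The package file names, version numbers, wheel tags, and directory layouts are examples only and must match whatever you actually downloaded for your architecture, CUDA version, and Python version.

```bash
# ONNXRuntime: python package plus the Linux prebuilt library (1.8.1 is an example version)
pip install onnxruntime==1.8.1
wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
cd onnxruntime-linux-x64-1.8.1
export ONNXRUNTIME_DIR=$(pwd)
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH

# TensorRT 8.2 GA Update 2: extract the tar file downloaded from NVIDIA (file names are examples)
cd /the/path/of/the/tensorrt/tar/file
tar -zxvf TensorRT-8.2.3.0.Linux.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz
pip install TensorRT-8.2.3.0/python/tensorrt-8.2.3.0-cp38-none-linux_x86_64.whl
pip install pycuda
export TENSORRT_DIR=$(pwd)/TensorRT-8.2.3.0
export LD_LIBRARY_PATH=${TENSORRT_DIR}/lib:$LD_LIBRARY_PATH

# cuDNN 8.2: extract the archive and point CUDNN_DIR at it (file name is an example)
cd /the/path/of/the/cudnn/tgz/file
tar -zxvf cudnn-11.3-linux-x64-v8.2.1.32.tgz
export CUDNN_DIR=$(pwd)/cuda
export LD_LIBRARY_PATH=$CUDNN_DIR/lib64:$LD_LIBRARY_PATH

# OpenVINO: python package
pip install openvino-dev

# PPL.NN and ncnn: after building them, export their root paths (placeholders)
export PPLNN_DIR=/path/to/ppl.nn
export NCNN_DIR=/path/to/ncnn    # the custom-ops build below uses ${NCNN_DIR}/build/install/lib/cmake/ncnn

# libtorch: point Torch_DIR at the extracted package's cmake config (placeholder path)
export Torch_DIR=/path/to/libtorch/share/cmake/Torch
```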
Note:

If you want to make the above environment variables permanent, you could add them to `~/.bashrc`. Take the ONNXRuntime one for example:

```bash
echo '# set env for onnxruntime' >> ~/.bashrc
echo "export ONNXRUNTIME_DIR=${ONNXRUNTIME_DIR}" >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
```
Set `MMDEPLOY_DIR` to the root of your MMDeploy checkout before building:

```bash
cd /the/root/path/of/MMDeploy
export MMDEPLOY_DIR=$(pwd)
```
NAME | VALUE | DEFAULT | REMARK |
---|---|---|---|
MMDEPLOY_BUILD_SDK | {ON, OFF} | OFF | Switch to build the MMDeploy SDK |
MMDEPLOY_BUILD_SDK_PYTHON_API | {ON, OFF} | OFF | Switch to build the MMDeploy SDK Python package |
MMDEPLOY_BUILD_SDK_JAVA_API | {ON, OFF} | OFF | Switch to build the MMDeploy SDK Java API |
MMDEPLOY_BUILD_TEST | {ON, OFF} | OFF | Switch to build the MMDeploy SDK unit tests |
MMDEPLOY_TARGET_DEVICES | {"cpu", "cuda"} | cpu | Enable target devices. You can enable more than one by passing a semicolon-separated list of device names, e.g. -DMMDEPLOY_TARGET_DEVICES="cpu;cuda" |
MMDEPLOY_TARGET_BACKENDS | {"trt", "ort", "pplnn", "ncnn", "openvino", "torchscript"} | N/A | Enable inference engines. By default, no target inference engine is set, since it highly depends on the use case. When more than one engine is specified, pass a semicolon-separated list of backend names, e.g. -DMMDEPLOY_TARGET_BACKENDS="trt;ort". 1. trt: TensorRT; TENSORRT_DIR and CUDNN_DIR are needed. 2. ort: ONNXRuntime; ONNXRUNTIME_DIR is needed. 3. pplnn: PPL.NN; pplnn_DIR is needed. 4. ncnn: ncnn; ncnn_DIR is needed. 5. openvino: OpenVINO; InferenceEngine_DIR is needed. 6. torchscript: TorchScript; Torch_DIR is needed. |
MMDEPLOY_CODEBASES | {"mmcls", "mmdet", "mmseg", "mmedit", "mmocr", "all"} | all | Enable codebases' postprocess modules. You can provide a semicolon-separated list of codebase names to enable them, e.g. -DMMDEPLOY_CODEBASES="mmcls;mmdet", or pass all to enable them all, i.e. -DMMDEPLOY_CODEBASES=all |
MMDEPLOY_SHARED_LIBS | {ON, OFF} | ON | Switch to build the MMDeploy SDK as shared or static libraries |
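As an illustration of how these options combine (not a required recipe), a configure step targeting both CPU and CUDA with the TensorRT and ONNXRuntime backends could look like the sketch below, assuming the `*_DIR` variables were exported as described earlier:

```bash
cd ${MMDEPLOY_DIR}
mkdir -p build && cd build
cmake .. \
    -DCMAKE_CXX_COMPILER=g++-7 \
    -DMMDEPLOY_BUILD_SDK=ON \
    -DMMDEPLOY_TARGET_DEVICES="cpu;cuda" \
    -DMMDEPLOY_TARGET_BACKENDS="trt;ort" \
    -DMMDEPLOY_CODEBASES=all \
    -Dpplcv_DIR=${PPLCV_DIR}/cuda-build/install/lib/cmake/ppl \
    -DTENSORRT_DIR=${TENSORRT_DIR} \
    -DCUDNN_DIR=${CUDNN_DIR} \
    -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR}
```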
If one of the inference engines among ONNXRuntime, TensorRT, ncnn and libtorch is selected, you have to build the corresponding custom ops.
- ONNXRuntime Custom Ops

  ```bash
  cd ${MMDEPLOY_DIR}
  mkdir -p build && cd build
  cmake -DCMAKE_CXX_COMPILER=g++-7 -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
  make -j$(nproc)
  ```
- TensorRT Custom Ops

  ```bash
  cd ${MMDEPLOY_DIR}
  mkdir -p build && cd build
  cmake -DCMAKE_CXX_COMPILER=g++-7 -DMMDEPLOY_TARGET_BACKENDS=trt -DTENSORRT_DIR=${TENSORRT_DIR} -DCUDNN_DIR=${CUDNN_DIR} ..
  make -j$(nproc)
  ```
- ncnn Custom Ops

  ```bash
  cd ${MMDEPLOY_DIR}
  mkdir -p build && cd build
  cmake -DCMAKE_CXX_COMPILER=g++-7 -DMMDEPLOY_TARGET_BACKENDS=ncnn -Dncnn_DIR=${NCNN_DIR}/build/install/lib/cmake/ncnn ..
  make -j$(nproc)
  ```
- TorchScript Custom Ops

  ```bash
  cd ${MMDEPLOY_DIR}
  mkdir -p build && cd build
  cmake -DCMAKE_CXX_COMPILER=g++-7 -DMMDEPLOY_TARGET_BACKENDS=torchscript -DTorch_DIR=${Torch_DIR} ..
  make -j$(nproc)
  ```
Install the model converter:

```bash
cd ${MMDEPLOY_DIR}
pip install -e .
```
Note

- Some dependencies are optional. Simply running `pip install -e .` will only install the minimum runtime requirements. To use optional dependencies, install them manually with `pip install -r requirements/optional.txt` or specify desired extras when calling `pip` (e.g. `pip install -e .[optional]`). Valid keys for the extras field are: `all`, `tests`, `build`, `optional`.
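For example, to install the converter together with its optional dependencies in one step (the extras syntax follows the note above):

```bash
cd ${MMDEPLOY_DIR}
pip install -e .[optional]
# or install the optional requirements separately
pip install -r requirements/optional.txt
```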
MMDeploy provides the two recipes shown below for building the SDK with ONNXRuntime and TensorRT as inference engines respectively. You can also activate other engines in the same way.
- cpu + ONNXRuntime

  ```bash
  cd ${MMDEPLOY_DIR}
  mkdir -p build && cd build
  cmake .. \
      -DCMAKE_CXX_COMPILER=g++-7 \
      -DMMDEPLOY_BUILD_SDK=ON \
      -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
      -DMMDEPLOY_TARGET_DEVICES=cpu \
      -DMMDEPLOY_TARGET_BACKENDS=ort \
      -DMMDEPLOY_CODEBASES=all \
      -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR}
  make -j$(nproc) && make install
  ```
- cuda + TensorRT

  ```bash
  cd ${MMDEPLOY_DIR}
  mkdir -p build && cd build
  cmake .. \
      -DCMAKE_CXX_COMPILER=g++-7 \
      -DMMDEPLOY_BUILD_SDK=ON \
      -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
      -DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
      -DMMDEPLOY_TARGET_BACKENDS=trt \
      -DMMDEPLOY_CODEBASES=all \
      -Dpplcv_DIR=${PPLCV_DIR}/cuda-build/install/lib/cmake/ppl \
      -DTENSORRT_DIR=${TENSORRT_DIR} \
      -DCUDNN_DIR=${CUDNN_DIR}
  make -j$(nproc) && make install
  ```
Build the SDK demos shipped under the installation directory:

```bash
cd ${MMDEPLOY_DIR}/build/install/example
mkdir -p build && cd build
cmake .. -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy
make -j$(nproc)
```
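To try one of the resulting demo executables, a run might look like the sketch below; the binary name, the SDK model directory, and the test image are placeholders and depend on which codebases you enabled and which model you converted:

```bash
# make the SDK shared libraries visible, then run a demo (names/paths are placeholders)
export LD_LIBRARY_PATH=${MMDEPLOY_DIR}/build/install/lib:$LD_LIBRARY_PATH
./object_detection cpu /path/to/converted/sdk/model /path/to/test/image.jpg
```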