
TensorRT model inference with MMDetection? #339

Closed
a227799770055 opened this issue Apr 13, 2022 · 6 comments
@a227799770055

Hi guys,
I have a question about model inference. After successfully converting a PyTorch model to a TensorRT model, how can I run inference on an image with the new model? Does MMDetection provide an API for this, or is there a tutorial that can help?

Thanks

@RunningLeon
Collaborator

RunningLeon commented Apr 13, 2022

@a227799770055 Hi, you can refer to the tutorial: how_to_measure_performance_of_models.
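Besides the test script from that tutorial, mmdeploy also exposes a Python API for running single-image inference with a converted backend model. A minimal sketch, assuming mmdeploy is installed and every path below is a placeholder to replace with your own deploy config, model config, engine file and image:

```python
from mmdeploy.apis import inference_model

# All paths are placeholders for your own files.
result = inference_model(
    model_cfg='path/to/model_config.py',       # the mmdet model config
    deploy_cfg='path/to/deploy_config.py',     # e.g. a TensorRT deploy config
    backend_files=['path/to/end2end.engine'],  # the converted TensorRT engine
    img='path/to/demo.jpg',
    device='cuda:0')
```

Running this requires a GPU and the converted engine, so it is shown here only as a sketch of the API shape.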

@a227799770055
Author

@RunningLeon Thanks for your answer.
However, when I run the test script I encounter the following error. How can I fix it?

File "tools/test.py", line 138, in <module>
    main()
File "tools/test.py", line 130, in main
    outputs = task_processor.single_gpu_test(model, data_loader, args.show,
File "/home/insign/Doc/insign/mmdeploy/mmdeploy/codebase/base/task.py", line 137, in single_gpu_test
    return self.codebase_class.single_gpu_test(model, data_loader, show,
File "/home/insign/Doc/insign/mmdeploy/mmdeploy/codebase/mmdet/deploy/mmdetection.py", line 142, in single_gpu_test
    outputs = single_gpu_test(model, data_loader, show, out_dir, **kwargs)
File "/home/insign/.local/lib/python3.8/site-packages/mmdet/apis/test.py", line 65, in single_gpu_test
    result = [(bbox_results, encode_mask_results(mask_results))
File "/home/insign/.local/lib/python3.8/site-packages/mmdet/apis/test.py", line 65, in <listcomp>
    result = [(bbox_results, encode_mask_results(mask_results))
File "/home/insign/.local/lib/python3.8/site-packages/mmdet/core/mask/utils.py", line 58, in encode_mask_results
    np.array(
File "/home/insign/.local/lib/python3.8/site-packages/torch/_tensor.py", line 680, in __array__
    return self.numpy().astype(dtype, copy=False)
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
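The TypeError at the bottom of this traceback means encode_mask_results received mask tensors that were still on the GPU: torch.Tensor.numpy() only works on host (CPU) tensors, so the generic remedy is to call .cpu() before .numpy(). A minimal sketch of the failure mode, using a hypothetical stand-in class rather than real torch so it runs anywhere:

```python
class FakeCudaTensor:
    """Stand-in for a torch.Tensor on cuda:0 (illustration only, not a torch class)."""

    def __init__(self, data, device="cuda:0"):
        self.data = data
        self.device = device

    def cpu(self):
        # Copy the tensor to host memory, like torch.Tensor.cpu().
        return FakeCudaTensor(self.data, device="cpu")

    def numpy(self):
        # torch.Tensor.numpy() refuses to run on a CUDA tensor.
        if self.device != "cpu":
            raise TypeError(
                "can't convert cuda:0 device type tensor to numpy. "
                "Use Tensor.cpu() to copy the tensor to host memory first.")
        return self.data


mask = FakeCudaTensor([[0, 1], [1, 0]])
# mask.numpy()              # raises TypeError, as in the traceback above
host = mask.cpu().numpy()   # works: tensor copied to host memory first
```

In the real codebase the corresponding fix lives inside mmdeploy/mmdet rather than user code, which is why updating the library (see the maintainer's reply below the thread) resolves it.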

@RunningLeon
Collaborator

@a227799770055 Hi, could you post the script you ran and the output of python tools/check_env.py here?

@a227799770055
Author

a227799770055 commented Apr 19, 2022

@RunningLeon thanks for your reply.

Script
python tools/test.py \
    /home/insign/Doc/insign/mmdeploy/configs/mmdet/instance-seg/instance-seg_tensorrt-int8_static-800x1344.py \
    /home/insign/Doc/insign/mmdeploy/nuclei_custom_config.py \
    --model /home/insign/Doc/insign/mmdeploy/work_dir/end2end.engine \
    --out out.pkl \
    --device cuda:0

check_env
2022-04-19 11:30:58,732 - mmdeploy - INFO -

2022-04-19 11:30:58,732 - mmdeploy - INFO - Environmental information
2022-04-19 11:30:59,490 - mmdeploy - INFO - sys.platform: linux
2022-04-19 11:30:59,490 - mmdeploy - INFO - Python: 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
2022-04-19 11:30:59,490 - mmdeploy - INFO - CUDA available: True
2022-04-19 11:30:59,490 - mmdeploy - INFO - GPU 0: NVIDIA GeForce RTX 3090
2022-04-19 11:30:59,490 - mmdeploy - INFO - CUDA_HOME: /usr/local/cuda-11.3
2022-04-19 11:30:59,490 - mmdeploy - INFO - NVCC: Build cuda_11.3.r11.3/compiler.29745058_0
2022-04-19 11:30:59,490 - mmdeploy - INFO - GCC: gcc (Ubuntu 7.5.0-6ubuntu2) 7.5.0
2022-04-19 11:30:59,490 - mmdeploy - INFO - PyTorch: 1.10.2+cu113
2022-04-19 11:30:59,490 - mmdeploy - INFO - PyTorch compiling details: PyTorch built with:

  • GCC 7.3
  • C++ Version: 201402
  • Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • LAPACK is enabled (usually provided by MKL)
  • NNPACK is enabled
  • CPU capability usage: AVX512
  • CUDA Runtime 11.3
  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
  • CuDNN 8.2
  • Magma 2.5.2
  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

2022-04-19 11:30:59,490 - mmdeploy - INFO - TorchVision: 0.11.3+cu113
2022-04-19 11:30:59,490 - mmdeploy - INFO - OpenCV: 4.5.5
2022-04-19 11:30:59,490 - mmdeploy - INFO - MMCV: 1.4.6
2022-04-19 11:30:59,490 - mmdeploy - INFO - MMCV Compiler: GCC 7.3
2022-04-19 11:30:59,490 - mmdeploy - INFO - MMCV CUDA Compiler: 11.3
2022-04-19 11:30:59,490 - mmdeploy - INFO - MMDeploy: 0.4.0+85c46ee
2022-04-19 11:30:59,490 - mmdeploy - INFO -

2022-04-19 11:30:59,490 - mmdeploy - INFO - Backend information
2022-04-19 11:30:59,646 - mmdeploy - INFO - onnxruntime: 1.10.0 ops_is_avaliable : True
2022-04-19 11:30:59,646 - mmdeploy - INFO - tensorrt: 8.0.1.6 ops_is_avaliable : True
2022-04-19 11:30:59,647 - mmdeploy - INFO - ncnn: None ops_is_avaliable : False
2022-04-19 11:30:59,647 - mmdeploy - INFO - pplnn_is_avaliable: False
2022-04-19 11:30:59,648 - mmdeploy - INFO - openvino_is_avaliable: False
2022-04-19 11:30:59,648 - mmdeploy - INFO -

2022-04-19 11:30:59,648 - mmdeploy - INFO - Codebase information
2022-04-19 11:30:59,649 - mmdeploy - INFO - mmdet: 2.22.0
2022-04-19 11:30:59,649 - mmdeploy - INFO - mmseg: None
2022-04-19 11:30:59,649 - mmdeploy - INFO - mmcls: 0.20.1
2022-04-19 11:30:59,649 - mmdeploy - INFO - mmocr: None
2022-04-19 11:30:59,649 - mmdeploy - INFO - mmedit: None
2022-04-19 11:30:59,649 - mmdeploy - INFO - mmdet3d: None
2022-04-19 11:30:59,649 - mmdeploy - INFO - mmpose: None

@RunningLeon
Collaborator

@a227799770055 Hi, this has been fixed in #276; please use the master branch or the released v0.4.0.

@a227799770055
Author

@RunningLeon Thanks for your help!
