🛠Lite.Ai.ToolKit: a lightweight C++
AI model toolkit, user friendly (well, mostly) and out of the box, already covering 100+ popular open-source models. It is a C++ toolkit curated out of personal interest, covering object detection, face detection, face recognition, semantic segmentation, image matting and more. See the Model Zoo and the ONNX Hub, MNN Hub, TNN Hub and NCNN Hub for details. [If you find it useful, ❤️ please consider giving it a ⭐️🌟 to show your support, thanks!]
English | 中文文档 | MacOS | Linux | Windows
- User friendly, out of the box. Simple and consistent calling syntax such as lite::cv::Type::Class; see examples.
- Few dependencies, easy to build. Currently, only OpenCV and ONNXRuntime are required by default; see build.
- Many algorithm modules, continuously updated. Currently, 300+ C++ implementations and 500+ weight files are included.
If you use Lite.Ai.ToolKit in your own projects, please consider citing it as follows.
@misc{lite.ai.toolkit2021,
title={lite.ai.toolkit: A lite C++ toolkit of awesome AI models.},
url={https://github.com/DefTruth/lite.ai.toolkit},
note={Open-source software available at https://github.com/DefTruth/lite.ai.toolkit},
author={Yan Jun},
year={2021}
}
A toolbox for training and evaluating face landmark detection models has been open-sourced and can be installed with a single pip command; see torchlm.
Currently, prebuilt lite.ai.toolkit dynamic libraries for MacOS (x64) and Linux (x64) can be downloaded directly from the links below. Prebuilt libraries for Windows (x64) and Android will also be released soon; see issues#48 for details. For more downloadable prebuilt libraries, check releases.
- lite0.1.1-osx10.15.x-ocv4.5.2-ffmpeg4.2.2-onnxruntime1.8.1.zip
- lite0.1.1-osx10.15.x-ocv4.5.2-ffmpeg4.2.2-onnxruntime1.9.0.zip
- lite0.1.1-osx10.15.x-ocv4.5.2-ffmpeg4.2.2-onnxruntime1.10.0.zip
- lite0.1.1-ubuntu18.04-ocv4.5.2-ffmpeg4.2.2-onnxruntime1.8.1.zip
- lite0.1.1-ubuntu18.04-ocv4.5.2-ffmpeg4.2.2-onnxruntime1.9.0.zip
- lite0.1.1-ubuntu18.04-ocv4.5.2-ffmpeg4.2.2-onnxruntime1.10.0.zip
On Linux, before linking against the prebuilt libraries, you need to add the path of lite.ai.toolkit/lib to LD_LIBRARY_PATH.
export LD_LIBRARY_PATH=YOUR-PATH-TO/lite.ai.toolkit/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=YOUR-PATH-TO/lite.ai.toolkit/lib:$LIBRARY_PATH # (may be needed)
You can refer to the following CMakeLists.txt to set up lite.ai.toolkit quickly. 👇👀
set(LITE_AI_DIR ${CMAKE_SOURCE_DIR}/lite.ai.toolkit)
include_directories(${LITE_AI_DIR}/include)
link_directories(${LITE_AI_DIR}/lib)
set(TOOLKIT_LIBS lite.ai.toolkit onnxruntime)
set(OpenCV_LIBS opencv_core opencv_imgcodecs opencv_imgproc opencv_video opencv_videoio)
add_executable(lite_yolov5 examples/test_lite_yolov5.cpp)
target_link_libraries(lite_yolov5 ${TOOLKIT_LIBS} ${OpenCV_LIBS})
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
std::string save_img_path = "../../../logs/test_lite_yolov5_1.jpg";
auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
std::vector<lite::types::Boxf> detected_boxes;
cv::Mat img_bgr = cv::imread(test_img_path);
yolov5->detect(img_bgr, detected_boxes);
lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
cv::imwrite(save_img_path, img_bgr);
delete yolov5;
}
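A side note on memory management: the examples in this README use raw new/delete. As a minimal sketch (plain C++11, nothing specific to lite.ai.toolkit), the same example can be scoped with std::unique_ptr so the model is released automatically:

```cpp
#include <memory>
#include "lite/lite.h"

// Minimal sketch: the YoloV5 example above, with the detector owned by a
// std::unique_ptr (C++11-compatible) instead of raw new/delete.
static void test_default_scoped()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
  std::unique_ptr<lite::cv::detection::YoloV5> yolov5(
      new lite::cv::detection::YoloV5(onnx_path));
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread("../../../examples/lite/resources/test_lite_yolov5_1.jpg");
  yolov5->detect(img_bgr, detected_boxes);
} // the detector is destroyed automatically here
```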
Click here to see details of Important Updates!
Date | Model | C++ | Paper | Code | Type |
---|---|---|---|---|---|
【2022/04/03】 | MODNet | link | AAAI 2022 | code | matting |
【2022/03/23】 | PIPNet | link | CVPR 2021 | code | face::align |
【2022/01/19】 | YOLO5Face | link | arXiv 2021 | code | face::detect |
【2022/01/07】 | SCRFD | link | CVPR 2021 | code | face::detect |
【2021/12/27】 | NanoDetPlus | link | blog | code | detection |
【2021/12/08】 | MGMatting | link | CVPR 2021 | code | matting |
【2021/11/11】 | YoloV5_V_6_0 | link | doi | code | detection |
【2021/10/26】 | YoloX_V_0_1_1 | link | arXiv 2021 | code | detection |
【2021/10/02】 | NanoDet | link | blog | code | detection |
【2021/09/20】 | RobustVideoMatting | link | WACV 2022 | code | matting |
【2021/09/02】 | YOLOP | link | arXiv 2021 | code | detection |
- / = not supported yet.
- ✅ = known to work, officially supported.
- ✔️ = known to work, but not officially supported.
- ❔ = planned, but not coming soon; maybe in a few months.
Class | Size | Type | Demo | ONNXRuntime | MNN | NCNN | TNN | MacOS | Linux | Windows | Android |
---|---|---|---|---|---|---|---|---|---|---|---|
YoloV5 | 28M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
YoloV3 | 236M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | / |
TinyYoloV3 | 33M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | / |
YoloV4 | 176M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | / |
SSD | 76M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | / |
SSDMobileNetV1 | 27M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | / |
YoloX | 3.5M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
TinyYoloV4VOC | 22M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | / |
TinyYoloV4COCO | 22M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | / |
YoloR | 39M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
ScaledYoloV4 | 270M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | / |
EfficientDet | 15M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | / |
EfficientDetD7 | 220M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | / |
EfficientDetD8 | 322M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | / |
YOLOP | 30M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
NanoDet | 1.1M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
NanoDetPlus | 4.5M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
NanoDetEffi... | 12M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
YoloX_V_0_1_1 | 3.5M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
YoloV5_V_6_0 | 7.5M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
GlintArcFace | 92M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
GlintCosFace | 92M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
GlintPartialFC | 170M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
FaceNet | 89M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
FocalArcFace | 166M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
FocalAsiaArcFace | 166M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
TencentCurricularFace | 249M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
TencentCifpFace | 130M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
CenterLossFace | 280M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
SphereFace | 80M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
PoseRobustFace | 92M | faceid | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | / |
NaivePoseRobustFace | 43M | faceid | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | / |
MobileFaceNet | 3.8M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
CavaGhostArcFace | 15M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
CavaCombinedFace | 250M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
MobileSEFocalFace | 4.5M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
RobustVideoMatting | 14M | matting | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | ❔ |
MGMatting | 113M | matting | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | / |
MODNet | 24M | matting | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
MODNetDyn | 24M | matting | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | / |
BackgroundMattingV2 | 20M | matting | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | / |
BackgroundMattingV2Dyn | 20M | matting | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | / |
UltraFace | 1.1M | face::detect | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
RetinaFace | 1.6M | face::detect | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
FaceBoxes | 3.8M | face::detect | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
FaceBoxesV2 | 3.8M | face::detect | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
SCRFD | 2.5M | face::detect | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
YOLO5Face | 4.8M | face::detect | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
PFLD | 1.0M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
PFLD98 | 4.8M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
MobileNetV268 | 9.4M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
MobileNetV2SE68 | 11M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
PFLD68 | 2.8M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
FaceLandmark1000 | 2.0M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
PIPNet98 | 44.0M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
PIPNet68 | 44.0M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
PIPNet29 | 44.0M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
PIPNet19 | 44.0M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
FSANet | 1.2M | face::pose | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | ❔ |
AgeGoogleNet | 23M | face::attr | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
GenderGoogleNet | 23M | face::attr | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
EmotionFerPlus | 33M | face::attr | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
VGG16Age | 514M | face::attr | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
VGG16Gender | 512M | face::attr | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
SSRNet | 190K | face::attr | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | ❔ |
EfficientEmotion7 | 15M | face::attr | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
EfficientEmotion8 | 15M | face::attr | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
MobileEmotion7 | 13M | face::attr | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
ReXNetEmotion7 | 30M | face::attr | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | / |
EfficientNetLite4 | 49M | classification | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | / |
ShuffleNetV2 | 8.7M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
DenseNet121 | 30.7M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
GhostNet | 20M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
HdrDNet | 13M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
IBNNet | 97M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
MobileNetV2 | 13M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
ResNet | 44M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
ResNeXt | 95M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
DeepLabV3ResNet101 | 232M | segmentation | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
FCNResNet101 | 207M | segmentation | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | / |
FastStyleTransfer | 6.4M | style | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
Colorizer | 123M | colorization | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | / |
SubPixelCNN | 234K | resolution | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | ❔ |
InsectDet | 27M | detection | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | ❔ |
InsectID | 22M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ |
PlantID | 30M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ |
YOLOv5BlazeFace | 3.4M | face::detect | demo | ✅ | ✅ | / | / | ✅ | ✔️ | ✔️ | ❔ |
YoloV5_V_6_1 | 7.5M | detection | demo | ✅ | ✅ | / | / | ✅ | ✔️ | ✔️ | ❔ |
HeadSeg | 31M | segmentation | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | ❔ |
FemalePhoto2Cartoon | 15M | style | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | ❔ |
FastPortraitSeg | 400k | segmentation | demo | ✅ | ✅ | / | / | ✅ | ✔️ | ✔️ | ❔ |
PortraitSegSINet | 380k | segmentation | demo | ✅ | ✅ | / | / | ✅ | ✔️ | ✔️ | ❔ |
PortraitSegExtremeC3Net | 180k | segmentation | demo | ✅ | ✅ | / | / | ✅ | ✔️ | ✔️ | ❔ |
FaceHairSeg | 18M | segmentation | demo | ✅ | ✅ | / | / | ✅ | ✔️ | ✔️ | ❔ |
HairSeg | 18M | segmentation | demo | ✅ | ✅ | / | / | ✅ | ✔️ | ✔️ | ❔ |
MobileHumanMatting | 3M | matting | demo | ✅ | ✅ | / | / | ✅ | ✔️ | ✔️ | ❔ |
MobileHairSeg | 14M | segmentation | demo | ✅ | ✅ | / | / | ✅ | ✔️ | ✔️ | ❔ |
YOLOv6 | 17M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔ |
- MacOS: build the MacOS dynamic library from the Lite.Ai.ToolKit source. Note that Lite.Ai.ToolKit uses onnxruntime as the default backend, since onnxruntime supports most native onnx operators and is more user friendly. How to build the Linux and Windows versions? Click
▶️ to see.
git clone --depth=1 https://github.com/DefTruth/lite.ai.toolkit.git # latest source
cd lite.ai.toolkit && sh ./build.sh # on MacOS, you can use the OpenCV, ONNXRuntime, MNN, NCNN and TNN dependencies bundled with this project without rebuilding them
💡️ Linux and Windows: first, copy the headers of each dependency into the corresponding folders:
- lite.ai.toolkit/opencv2
cp -r you-path-to-downloaded-or-built-opencv/include/opencv4/opencv2 lite.ai.toolkit/opencv2
- lite.ai.toolkit/onnxruntime
cp -r you-path-to-downloaded-or-built-onnxruntime/include/onnxruntime lite.ai.toolkit/onnxruntime
- lite.ai.toolkit/MNN
cp -r you-path-to-downloaded-or-built-MNN/include/MNN lite.ai.toolkit/MNN
- lite.ai.toolkit/ncnn
cp -r you-path-to-downloaded-or-built-ncnn/include/ncnn lite.ai.toolkit/ncnn
- lite.ai.toolkit/tnn
cp -r you-path-to-downloaded-or-built-TNN/include/tnn lite.ai.toolkit/tnn
Then copy each dependency's libraries into the lite.ai.toolkit/lib/(linux|windows) folder. Please refer to each dependency's own build documentation.
- lite.ai.toolkit/lib/(linux|windows)
cp you-path-to-downloaded-or-built-opencv/lib/*opencv* lite.ai.toolkit/lib/(linux|windows)/
cp you-path-to-downloaded-or-built-onnxruntime/lib/*onnxruntime* lite.ai.toolkit/lib/(linux|windows)/
cp you-path-to-downloaded-or-built-MNN/lib/*MNN* lite.ai.toolkit/lib/(linux|windows)/
cp you-path-to-downloaded-or-built-ncnn/lib/*ncnn* lite.ai.toolkit/lib/(linux|windows)/
cp you-path-to-downloaded-or-built-TNN/lib/*TNN* lite.ai.toolkit/lib/(linux|windows)/
Note that you also need ffmpeg (<= 4.2.2), since OpenCV's videoio module relies on ffmpeg for mp4 encoding/decoding; see issue#203. On MacOS, ffmpeg 4.2.2 is already packed into lite.ai.toolkit as a custom dependency, so there is no need to install it from Homebrew as a system library; lite.ai.toolkit is therefore self-contained, and you can ship it inside an app without worrying whether the machine running the app has ffmpeg. The MacOS version of lite.ai.toolkit already includes ffmpeg. On Windows, the official OpenCV team provides a prebuilt ffmpeg for OpenCV. When building OpenCV on Linux, make sure -DWITH_FFMPEG=ON is set, and verify that it actually links ffmpeg.
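If you want to verify this linkage from code, here is a minimal hedged check (plain OpenCV API, independent of lite.ai.toolkit): open an mp4 and print which videoio backend was selected.

```cpp
#include <iostream>
#include <opencv2/opencv.hpp>

// Minimal sketch: prints the videoio backend chosen for an mp4 file.
// On a build with -DWITH_FFMPEG=ON this typically reports "FFMPEG".
// "test.mp4" is a placeholder path.
int main()
{
  cv::VideoCapture cap("test.mp4");
  if (!cap.isOpened())
  {
    std::cout << "failed to open test.mp4 (codec support missing?)" << std::endl;
    return -1;
  }
  std::cout << "videoio backend: " << cap.getBackendName() << std::endl;
  return 0;
}
```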
- First, build ffmpeg. Note that it must be a low version: versions above 4.4 are incompatible with OpenCV.
git clone --depth=1 https://git.ffmpeg.org/ffmpeg.git -b n4.2.2
cd ffmpeg
./configure --enable-shared --disable-x86asm --prefix=/usr/local/opt/ffmpeg --disable-static
make -j8
make install
- Then, build OpenCV with ffmpeg support, specifying -DWITH_FFMPEG=ON:
#!/bin/bash
mkdir build
cd build
cmake .. \
-D CMAKE_BUILD_TYPE=Release \
-D CMAKE_INSTALL_PREFIX=your-path-to-custom-dir \
-D BUILD_TESTS=OFF \
-D BUILD_PERF_TESTS=OFF \
-D BUILD_opencv_python3=OFF \
-D BUILD_opencv_python2=OFF \
-D BUILD_SHARED_LIBS=ON \
-D BUILD_opencv_apps=OFF \
-D WITH_FFMPEG=ON
make -j8
make install
cd ..
After OpenCV is built, you can continue to build lite.ai.toolkit following the steps above.
- Windows: see issue#6, which discusses common build problems.
- Linux: follow the MacOS build steps and substitute the Linux versions of the dependencies. A Linux release will be added soon ~ issue#2
- Good news!!! 🚀 You can directly download the latest dynamic libraries officially built by ONNXRuntime, for Windows, Linux, MacOS and Arm!!! Both CPU and GPU versions are available, so there is no need to build from source anymore, nice. Download the latest libraries from v1.8.1. I currently use 1.7.0 in Lite.Ai.ToolKit, which you can download from v1.7.0, but 1.8.1 should work as well. For OpenCV, try building from source (Linux) or download the official prebuilt libraries from OpenCV 4.5.3 (Windows). Then put the headers and libraries into the folders described above.
- Windows GPU compatibility: see issue#10.
- Linux GPU compatibility: see issue#97.
🔑️ How to link the Lite.Ai.ToolKit dynamic library?
- You can link the dynamic library with CMakeLists.txt settings like the following.
cmake_minimum_required(VERSION 3.17)
project(lite.ai.toolkit.demo)
set(CMAKE_CXX_STANDARD 11)
# setting up lite.ai.toolkit
set(LITE_AI_DIR ${CMAKE_SOURCE_DIR}/lite.ai.toolkit)
set(LITE_AI_INCLUDE_DIR ${LITE_AI_DIR}/include)
set(LITE_AI_LIBRARY_DIR ${LITE_AI_DIR}/lib)
include_directories(${LITE_AI_INCLUDE_DIR})
link_directories(${LITE_AI_LIBRARY_DIR})
set(OpenCV_LIBS
opencv_highgui
opencv_core
opencv_imgcodecs
opencv_imgproc
opencv_video
opencv_videoio
)
# add your executable
set(EXECUTABLE_OUTPUT_PATH ${CMAKE_SOURCE_DIR}/examples/build)
add_executable(lite_rvm examples/test_lite_rvm.cpp)
target_link_libraries(lite_rvm
lite.ai.toolkit
onnxruntime
MNN # need, if built lite.ai.toolkit with ENABLE_MNN=ON, default OFF
ncnn # need, if built lite.ai.toolkit with ENABLE_NCNN=ON, default OFF
TNN # need, if built lite.ai.toolkit with ENABLE_TNN=ON, default OFF
${OpenCV_LIBS}) # link lite.ai.toolkit & other libs.
cd ./build/lite.ai.toolkit/lib && otool -L liblite.ai.toolkit.0.0.1.dylib
liblite.ai.toolkit.0.0.1.dylib:
@rpath/liblite.ai.toolkit.0.0.1.dylib (compatibility version 0.0.1, current version 0.0.1)
@rpath/libopencv_highgui.4.5.dylib (compatibility version 4.5.0, current version 4.5.2)
@rpath/libonnxruntime.1.7.0.dylib (compatibility version 0.0.0, current version 1.7.0)
...
cd ../ && tree .
├── bin
├── include
│ ├── lite
│ │ ├── backend.h
│ │ ├── config.h
│ │ └── lite.h
│ └── ort
└── lib
└── liblite.ai.toolkit.0.0.1.dylib
- Run the prebuilt examples:
cd ./build/lite.ai.toolkit/bin && ls -lh | grep lite
-rwxr-xr-x 1 root staff 301K Jun 26 23:10 liblite.ai.toolkit.0.0.1.dylib
...
-rwxr-xr-x 1 root staff 196K Jun 26 23:10 lite_yolov4
-rwxr-xr-x 1 root staff 196K Jun 26 23:10 lite_yolov5
...
./lite_yolov5
LITEORT_DEBUG LogId: ../../../hub/onnx/cv/yolov5s.onnx
=============== Input-Dims ==============
...
detected num_anchors: 25200
generate_bboxes num: 66
Default Version Detected Boxes Num: 5
To link the lite.ai.toolkit dynamic library, you need to make sure that OpenCV and onnxruntime are also linked correctly. You can find a simple and complete application example of how to link the Lite.AI.ToolKit dynamic library correctly in CMakeLists.txt.
Lite.Ai.ToolKit currently covers 100+ popular open-source models and 500+ weight files, most of which I converted myself. You can use them through the lite::cv::Type::Class syntax, such as lite::cv::detection::YoloV5. See Examples for Lite.Ai.ToolKit for more details. Note that, due to the storage limit of Google Drive (15G), I cannot upload all the model files; users in China should use Baidu Drive instead.
File | Baidu Drive | Google Drive | Docker Hub | Hub (Docs) |
---|---|---|---|---|
ONNX | Baidu Drive code: 8gin | Google Drive | ONNX Docker v0.1.22.01.08 (28G), v0.1.22.02.02 (400M) | ONNX Hub |
MNN | Baidu Drive code: 9v63 | ❔ | MNN Docker v0.1.22.01.08 (11G), v0.1.22.02.02 (213M) | MNN Hub |
NCNN | Baidu Drive code: sc7f | ❔ | NCNN Docker v0.1.22.01.08 (9G), v0.1.22.02.02 (197M) | NCNN Hub |
TNN | Baidu Drive code: 6o6k | ❔ | TNN Docker v0.1.22.01.08 (11G), v0.1.22.02.02 (217M) | TNN Hub |
docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.01.08 # (28G)
docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08 # (11G)
docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.01.08 # (9G)
docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.01.08 # (11G)
docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.02.02 # (400M) + YOLO5Face
docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.02.02 # (213M) + YOLO5Face
docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.02.02 # (197M) + YOLO5Face
docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.02.02 # (217M) + YOLO5Face
❇️ Mapping between namespaces and Lite.Ai.ToolKit algorithm modules
Namespace | Details |
---|---|
lite::cv::detection | Object Detection. one-stage and anchor-free detectors, YoloV5, YoloV4, SSD, etc. ✅ |
lite::cv::classification | Image Classification. DenseNet, ShuffleNet, ResNet, IBNNet, GhostNet, etc. ✅ |
lite::cv::faceid | Face Recognition. ArcFace, CosFace, CurricularFace, etc. ❇️ |
lite::cv::face | Face Analysis. detect, align, pose, attr, etc. ❇️ |
lite::cv::face::detect | Face Detection. UltraFace, RetinaFace, FaceBoxes, PyramidBox, etc. ❇️ |
lite::cv::face::align | Face Alignment. PFLD(106), FaceLandmark1000(1000 landmarks), PRNet, etc. ❇️ |
lite::cv::face::align3d | 3D Face Alignment. FaceMesh(468 3D landmarks), IrisLandmark(71+5 3D landmarks), etc. ❇️ |
lite::cv::face::pose | Head Pose Estimation. FSANet, etc. ❇️ |
lite::cv::face::attr | Face Attributes. Emotion, Age, Gender. EmotionFerPlus, VGG16Age, etc. ❇️ |
lite::cv::segmentation | Object Segmentation. Such as FCN, DeepLabV3, etc. ❇️ ️ |
lite::cv::style | Style Transfer. Contains neural style transfer now, such as FastStyleTransfer. |
lite::cv::matting | Image Matting. Object and Human matting. ❇️ ️ |
lite::cv::colorization | Colorization. Make Gray image become RGB. |
lite::cv::resolution | Super Resolution. |
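To show how modules from different namespaces compose, here is a hedged sketch chaining a face detector (lite::cv::face::detect) with a landmark model (lite::cv::face::align). It assumes the per-class detect() signatures shown in the examples below and a rect() helper on lite::types::Boxf; treat both as assumptions rather than documented API.

```cpp
#include "lite/lite.h"

// Hedged sketch: detect faces with SCRFD (face::detect), then run
// FaceLandmark1000 (face::align) on each cropped face region.
// The detect() signatures and Boxf::rect() are assumptions based on the
// per-class examples in this README.
static void test_detect_then_align()
{
  auto *detector = new lite::cv::face::detect::SCRFD(
      "../../../hub/onnx/cv/scrfd_2.5g_bnkps_shape640x640.onnx");
  auto *aligner = new lite::cv::face::align::FaceLandmark1000(
      "../../../hub/onnx/cv/FaceLandmark1000.onnx");

  cv::Mat img_bgr = cv::imread("test.jpg"); // placeholder image path
  std::vector<lite::types::BoxfWithLandmarks> faces;
  detector->detect(img_bgr, faces);

  for (const auto &face : faces)
  {
    // clamp the face box to the image bounds and crop it
    cv::Rect roi = face.box.rect() & cv::Rect(0, 0, img_bgr.cols, img_bgr.rows);
    if (roi.area() <= 0) continue;
    cv::Mat face_bgr = img_bgr(roi).clone();
    lite::types::Landmarks landmarks;
    aligner->detect(face_bgr, landmarks); // landmarks in face-crop coordinates
  }
  delete detector;
  delete aligner;
}
```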
The mapping between Lite.AI.ToolKit classes and weight files can be found in lite.ai.toolkit.hub.onnx.md. For example, the weight files of lite::cv::detection::YoloV5 and lite::cv::detection::YoloX are:
Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size |
---|---|---|---|
lite::cv::detection::YoloV5 | yolov5l.onnx | yolov5 (🔥🔥💥↑) | 188Mb |
lite::cv::detection::YoloV5 | yolov5m.onnx | yolov5 (🔥🔥💥↑) | 85Mb |
lite::cv::detection::YoloV5 | yolov5s.onnx | yolov5 (🔥🔥💥↑) | 29Mb |
lite::cv::detection::YoloV5 | yolov5x.onnx | yolov5 (🔥🔥💥↑) | 351Mb |
lite::cv::detection::YoloX | yolox_x.onnx | YOLOX (🔥🔥!!↑) | 378Mb |
lite::cv::detection::YoloX | yolox_l.onnx | YOLOX (🔥🔥!!↑) | 207Mb |
lite::cv::detection::YoloX | yolox_m.onnx | YOLOX (🔥🔥!!↑) | 97Mb |
lite::cv::detection::YoloX | yolox_s.onnx | YOLOX (🔥🔥!!↑) | 34Mb |
lite::cv::detection::YoloX | yolox_tiny.onnx | YOLOX (🔥🔥!!↑) | 19Mb |
lite::cv::detection::YoloX | yolox_nano.onnx | YOLOX (🔥🔥!!↑) | 3.5Mb |
This means that, depending on your use case, you can load any of the yolov5*.onnx or yolox_*.onnx files through the same class, e.g. YoloV5, YoloX, etc.
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5x.onnx"); // for server
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5l.onnx");
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5m.onnx");
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx"); // for mobile device
auto *yolox = new lite::cv::detection::YoloX("yolox_x.onnx");
auto *yolox = new lite::cv::detection::YoloX("yolox_l.onnx");
auto *yolox = new lite::cv::detection::YoloX("yolox_m.onnx");
auto *yolox = new lite::cv::detection::YoloX("yolox_s.onnx");
auto *yolox = new lite::cv::detection::YoloX("yolox_tiny.onnx");
auto *yolox = new lite::cv::detection::YoloX("yolox_nano.onnx"); // 3.5Mb only !
🔑️ How to download the Model Zoo from Docker Hub?
- First, pull the image from Docker Hub.
docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08 # (11G)
docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.01.08 # (9G)
docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.01.08 # (11G)
docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.01.08 # (28G)
- Second, run the container with a local share dir using docker run -idt xxx. A minimal example follows.
  - make a share dir on your local device.
mkdir share # any name is ok.
  - write a run_mnn_docker_hub.sh script like:
#!/bin/bash
PORT1=6072
PORT2=6084
SERVICE_DIR=/Users/xxx/Desktop/your-path-to/share
CONTAINER_DIR=/home/hub/share
CONTAINER_NAME=mnn_docker_hub_d
docker run -idt -p ${PORT2}:${PORT1} -v ${SERVICE_DIR}:${CONTAINER_DIR} --shm-size=16gb --name ${CONTAINER_NAME} qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08
- Finally, copy the model weights from /home/hub/mnn/cv to your local share dir.
# activate the mnn docker container.
sh ./run_mnn_docker_hub.sh
docker exec -it mnn_docker_hub_d /bin/bash
# copy the models to the share dir.
cd /home/hub
cp -rf mnn/cv share/
lite.ai.toolkit provides ONNX files for a large number of pretrained models. For more model weight files, see Model Zoo and ONNX Hub, MNN Hub, TNN Hub, NCNN Hub.
More usage examples can be found in examples. For instance, object detection with YoloV5:
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
std::string save_img_path = "../../../logs/test_lite_yolov5_1.jpg";
auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
std::vector<lite::types::Boxf> detected_boxes;
cv::Mat img_bgr = cv::imread(test_img_path);
yolov5->detect(img_bgr, detected_boxes);
lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
cv::imwrite(save_img_path, img_bgr);
delete yolov5;
}
The output is:
Or you can use the newest 🔥🔥 YOLO-series detectors, YOLOX or YoloR, which achieve similar results.
More general object detectors available (80 classes, COCO):
auto *detector = new lite::cv::detection::YoloX(onnx_path); // Newest YOLO detector !!! 2021-07
auto *detector = new lite::cv::detection::YoloV4(onnx_path);
auto *detector = new lite::cv::detection::YoloV3(onnx_path);
auto *detector = new lite::cv::detection::TinyYoloV3(onnx_path);
auto *detector = new lite::cv::detection::SSD(onnx_path);
auto *detector = new lite::cv::detection::YoloV5(onnx_path);
auto *detector = new lite::cv::detection::YoloR(onnx_path); // Newest YOLO detector !!! 2021-05
auto *detector = new lite::cv::detection::TinyYoloV4VOC(onnx_path);
auto *detector = new lite::cv::detection::TinyYoloV4COCO(onnx_path);
auto *detector = new lite::cv::detection::ScaledYoloV4(onnx_path);
auto *detector = new lite::cv::detection::EfficientDet(onnx_path);
auto *detector = new lite::cv::detection::EfficientDetD7(onnx_path);
auto *detector = new lite::cv::detection::EfficientDetD8(onnx_path);
auto *detector = new lite::cv::detection::YOLOP(onnx_path);
auto *detector = new lite::cv::detection::NanoDet(onnx_path); // Super fast and tiny!
auto *detector = new lite::cv::detection::NanoDetPlus(onnx_path); // Super fast and tiny! 2021/12/25
auto *detector = new lite::cv::detection::NanoDetEfficientNetLite(onnx_path); // Super fast and tiny!
auto *detector = new lite::cv::detection::YoloV5_V_6_0(onnx_path);
auto *detector = new lite::cv::detection::YoloV5_V_6_1(onnx_path);
auto *detector = new lite::cv::detection::YoloX_V_0_1_1(onnx_path); // Newest YOLO detector !!! 2021-07
auto *detector = new lite::cv::detection::YOLOv6(onnx_path); // Newest 2022 YOLO detector !!!
Example 1: Video matting with RobustVideoMatting (2021) 🔥🔥🔥. Download the model files from Model Zoo.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";
std::string video_path = "../../../examples/lite/resources/test_lite_rvm_0.mp4";
std::string output_path = "../../../logs/test_lite_rvm_0.mp4";
std::string background_path = "../../../examples/lite/resources/test_lite_matting_bgr.jpg";
auto *rvm = new lite::cv::matting::RobustVideoMatting(onnx_path, 16); // 16 threads
std::vector<lite::types::MattingContent> contents;
// 1. video matting.
cv::Mat background = cv::imread(background_path);
rvm->detect_video(video_path, output_path, contents, false, 0.4f,
20, true, true, background);
delete rvm;
}
The output is:
More matting models available (image matting, video matting, trimap/mask-free, trimap/mask-based):
auto *matting = new lite::cv::matting::RobustVideoMatting(onnx_path); // WACV 2022.
auto *matting = new lite::cv::matting::MGMatting(onnx_path); // CVPR 2021
auto *matting = new lite::cv::matting::MODNet(onnx_path); // AAAI 2022
auto *matting = new lite::cv::matting::MODNetDyn(onnx_path); // AAAI 2022 Dynamic Shape Inference.
auto *matting = new lite::cv::matting::BackgroundMattingV2(onnx_path); // CVPR 2020
auto *matting = new lite::cv::matting::BackgroundMattingV2Dyn(onnx_path); // CVPR 2020 Dynamic Shape Inference.
auto *matting = new lite::cv::matting::MobileHumanMatting(onnx_path); // 3Mb only !!!
Example 2: 1000 facial landmark detection with FaceLandmark1000. Download the model files from Model Zoo.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/FaceLandmark1000.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
std::string save_img_path = "../../../logs/test_lite_face_landmarks_1000.jpg";
auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);
lite::types::Landmarks landmarks;
cv::Mat img_bgr = cv::imread(test_img_path);
face_landmarks_1000->detect(img_bgr, landmarks);
lite::utils::draw_landmarks_inplace(img_bgr, landmarks);
cv::imwrite(save_img_path, img_bgr);
delete face_landmarks_1000;
}
The output is:
More face landmark detectors available (68, 98, 106, 1000 landmarks):
auto *align = new lite::cv::face::align::PFLD(onnx_path); // 106 landmarks, 1.0Mb only!
auto *align = new lite::cv::face::align::PFLD98(onnx_path); // 98 landmarks, 4.8Mb only!
auto *align = new lite::cv::face::align::PFLD68(onnx_path); // 68 landmarks, 2.8Mb only!
auto *align = new lite::cv::face::align::MobileNetV268(onnx_path); // 68 landmarks, 9.4Mb only!
auto *align = new lite::cv::face::align::MobileNetV2SE68(onnx_path); // 68 landmarks, 11Mb only!
auto *align = new lite::cv::face::align::FaceLandmark1000(onnx_path); // 1000 landmarks, 2.0Mb only!
auto *align = new lite::cv::face::align::PIPNet98(onnx_path); // 98 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet68(onnx_path); // 68 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet29(onnx_path); // 29 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet19(onnx_path); // 19 landmarks, CVPR2021!
Example 3: Image colorization with colorization. Download the model files from Model Zoo.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/eccv16-colorizer.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
std::string save_img_path = "../../../logs/test_lite_eccv16_colorizer_1.jpg";
auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
cv::Mat img_bgr = cv::imread(test_img_path);
lite::types::ColorizeContent colorize_content;
colorizer->detect(img_bgr, colorize_content);
if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);
delete colorizer;
}
The output is:
More colorization models available (gray image to color):
auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
Example 4: Face recognition with GlintArcFace. Download the model files from Model Zoo.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx";
std::string test_img_path0 = "../../../examples/lite/resources/test_lite_faceid_0.png";
std::string test_img_path1 = "../../../examples/lite/resources/test_lite_faceid_1.png";
std::string test_img_path2 = "../../../examples/lite/resources/test_lite_faceid_2.png";
auto *glint_arcface = new lite::cv::faceid::GlintArcFace(onnx_path);
lite::types::FaceContent face_content0, face_content1, face_content2;
cv::Mat img_bgr0 = cv::imread(test_img_path0);
cv::Mat img_bgr1 = cv::imread(test_img_path1);
cv::Mat img_bgr2 = cv::imread(test_img_path2);
glint_arcface->detect(img_bgr0, face_content0);
glint_arcface->detect(img_bgr1, face_content1);
glint_arcface->detect(img_bgr2, face_content2);
if (face_content0.flag && face_content1.flag && face_content2.flag)
{
float sim01 = lite::utils::math::cosine_similarity<float>(
face_content0.embedding, face_content1.embedding);
float sim02 = lite::utils::math::cosine_similarity<float>(
face_content0.embedding, face_content2.embedding);
std::cout << "Detected Sim01: " << sim << " Sim02: " << sim02 << std::endl;
}
delete glint_arcface;
}
The output is:
Detected Sim01: 0.721159 Sim02: -0.0626267
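For reference, the scores above are cosine similarities between the embedding vectors. For two embeddings $a, b \in \mathbb{R}^d$:

$$
\mathrm{sim}(a, b) \;=\; \frac{a \cdot b}{\lVert a \rVert\,\lVert b \rVert} \;=\; \frac{\sum_{i=1}^{d} a_i b_i}{\sqrt{\sum_{i=1}^{d} a_i^2}\,\sqrt{\sum_{i=1}^{d} b_i^2}} \;\in\; [-1, 1]
$$

so a pair of images of the same person (Sim01 ≈ 0.72 above) scores much higher than a pair of different people (Sim02 ≈ -0.06).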
More face recognition models available (face embedding extraction):
auto *recognition = new lite::cv::faceid::GlintCosFace(onnx_path); // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintArcFace(onnx_path); // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintPartialFC(onnx_path); // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::FaceNet(onnx_path);
auto *recognition = new lite::cv::faceid::FocalArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::FocalAsiaArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::TencentCurricularFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::TencentCifpFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::CenterLossFace(onnx_path);
auto *recognition = new lite::cv::faceid::SphereFace(onnx_path);
auto *recognition = new lite::cv::faceid::PoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::NaivePoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileFaceNet(onnx_path); // 3.8Mb only !
auto *recognition = new lite::cv::faceid::CavaGhostArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::CavaCombinedFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileSEFocalFace(onnx_path); // 4.5Mb only !
Example 5: Face detection with SCRFD (2021). Download the model files from Model Zoo.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/scrfd_2.5g_bnkps_shape640x640.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_face_detector.jpg";
std::string save_img_path = "../../../logs/test_lite_scrfd.jpg";
auto *scrfd = new lite::cv::face::detect::SCRFD(onnx_path);
std::vector<lite::types::BoxfWithLandmarks> detected_boxes;
cv::Mat img_bgr = cv::imread(test_img_path);
scrfd->detect(img_bgr, detected_boxes);
lite::utils::draw_boxes_with_landmarks_inplace(img_bgr, detected_boxes);
cv::imwrite(save_img_path, img_bgr);
std::cout << "Default Version Done! Detected Face Num: " << detected_boxes.size() << std::endl;
delete scrfd;
}
The output is:
More face detectors available (lightweight face detectors):
auto *detector = new lite::face::detect::UltraFace(onnx_path); // 1.1Mb only !
auto *detector = new lite::face::detect::FaceBoxes(onnx_path); // 3.8Mb only !
auto *detector = new lite::face::detect::FaceBoxesV2(onnx_path); // 4.0Mb only !
auto *detector = new lite::face::detect::RetinaFace(onnx_path); // 1.6Mb only ! CVPR2020
auto *detector = new lite::face::detect::SCRFD(onnx_path); // 2.5Mb only ! CVPR2021, Super fast and accurate!!
auto *detector = new lite::face::detect::YOLO5Face(onnx_path); // 2021, Super fast and accurate!!
auto *detector = new lite::face::detect::YOLOv5BlazeFace(onnx_path); // 2021, Super fast and accurate!!
Example 6: Semantic segmentation with DeepLabV3ResNet101. Download the model files from Model Zoo.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/deeplabv3_resnet101_coco.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_deeplabv3_resnet101.png";
std::string save_img_path = "../../../logs/test_lite_deeplabv3_resnet101.jpg";
auto *deeplabv3_resnet101 = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path, 16); // 16 threads
lite::types::SegmentContent content;
cv::Mat img_bgr = cv::imread(test_img_path);
deeplabv3_resnet101->detect(img_bgr, content);
if (content.flag)
{
cv::Mat out_img;
cv::addWeighted(img_bgr, 0.2, content.color_mat, 0.8, 0., out_img);
cv::imwrite(save_img_path, out_img);
if (!content.names_map.empty())
{
for (auto it = content.names_map.begin(); it != content.names_map.end(); ++it)
{
std::cout << it->first << " Name: " << it->second << std::endl;
}
}
}
delete deeplabv3_resnet101;
}
The output is:
More semantic segmentation models available (portrait segmentation, instance segmentation):
auto *segment = new lite::cv::segmentation::FCNResNet101(onnx_path);
auto *segment = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path);
Example 7: Age estimation with SSRNet. Download the model files from Model Zoo.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/ssrnet.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_ssrnet.jpg";
std::string save_img_path = "../../../logs/test_lite_ssrnet.jpg";
auto *ssrnet = new lite::cv::face::attr::SSRNet(onnx_path);
lite::types::Age age;
cv::Mat img_bgr = cv::imread(test_img_path);
ssrnet->detect(img_bgr, age);
lite::utils::draw_age_inplace(img_bgr, age);
cv::imwrite(save_img_path, img_bgr);
std::cout << "Default Version Done! Detected SSRNet Age: " << age.age << std::endl;
delete ssrnet;
}
The output is:
More face attribute models available (gender, age, emotion):
auto *attribute = new lite::cv::face::attr::AgeGoogleNet(onnx_path);
auto *attribute = new lite::cv::face::attr::GenderGoogleNet(onnx_path);
auto *attribute = new lite::cv::face::attr::EmotionFerPlus(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Age(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Gender(onnx_path);
auto *attribute = new lite::cv::face::attr::EfficientEmotion7(onnx_path); // 7 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::EfficientEmotion8(onnx_path); // 8 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::MobileEmotion7(onnx_path); // 7 emotions, 13Mb only!
auto *attribute = new lite::cv::face::attr::ReXNetEmotion7(onnx_path); // 7 emotions
auto *attribute = new lite::cv::face::attr::SSRNet(onnx_path); // age estimation, 190kb only!!!
Example 8: Image classification with DenseNet. Download the model files from Model Zoo.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/densenet121.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_densenet.jpg";
auto *densenet = new lite::cv::classification::DenseNet(onnx_path);
lite::types::ImageNetContent content;
cv::Mat img_bgr = cv::imread(test_img_path);
densenet->detect(img_bgr, content);
if (content.flag)
{
const unsigned int top_k = content.scores.size();
if (top_k > 0)
{
for (unsigned int i = 0; i < top_k; ++i)
std::cout << i + 1
<< ": " << content.labels.at(i)
<< ": " << content.texts.at(i)
<< ": " << content.scores.at(i)
<< std::endl;
}
}
delete densenet;
}
The output is:
More image classification models available (1000 classes):
auto *classifier = new lite::cv::classification::EfficientNetLite4(onnx_path);
auto *classifier = new lite::cv::classification::ShuffleNetV2(onnx_path); // 8.7Mb only!
auto *classifier = new lite::cv::classification::GhostNet(onnx_path);
auto *classifier = new lite::cv::classification::HdrDNet(onnx_path);
auto *classifier = new lite::cv::classification::IBNNet(onnx_path);
auto *classifier = new lite::cv::classification::MobileNetV2(onnx_path); // 13Mb only!
auto *classifier = new lite::cv::classification::ResNet(onnx_path);
auto *classifier = new lite::cv::classification::ResNeXt(onnx_path);
Example 9: Head pose estimation with FSANet. Download the model files from Model Zoo.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/fsanet-var.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_fsanet.jpg";
std::string save_img_path = "../../../logs/test_lite_fsanet.jpg";
auto *fsanet = new lite::cv::face::pose::FSANet(onnx_path);
cv::Mat img_bgr = cv::imread(test_img_path);
lite::types::EulerAngles euler_angles;
fsanet->detect(img_bgr, euler_angles);
if (euler_angles.flag)
{
lite::utils::draw_axis_inplace(img_bgr, euler_angles);
cv::imwrite(save_img_path, img_bgr);
std::cout << "yaw:" << euler_angles.yaw << " pitch:" << euler_angles.pitch << " row:" << euler_angles.roll << std::endl;
}
delete fsanet;
}
The output is:
More head pose estimation models available (Euler angles: yaw, pitch, roll):
auto *pose = new lite::cv::face::pose::FSANet(onnx_path); // 1.2Mb only!
Example 10: Style transfer with FastStyleTransfer. Download the model files from Model Zoo.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/style-candy-8.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_fast_style_transfer.jpg";
std::string save_img_path = "../../../logs/test_lite_fast_style_transfer_candy.jpg";
auto *fast_style_transfer = new lite::cv::style::FastStyleTransfer(onnx_path);
lite::types::StyleContent style_content;
cv::Mat img_bgr = cv::imread(test_img_path);
fast_style_transfer->detect(img_bgr, style_content);
if (style_content.flag) cv::imwrite(save_img_path, style_content.mat);
delete fast_style_transfer;
}
The output is:
More style transfer models available (neural style transfer, others):
auto *transfer = new lite::cv::style::FastStyleTransfer(onnx_path); // 6.4Mb only
Example 11: Head segmentation with HeadSeg. Download the model files from Model Zoo.
#include "lite/lite.h"
static void test_default()
{
std::string onnx_path = "../../../hub/onnx/cv/minivision_head_seg.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_head_seg.png";
std::string save_img_path = "../../../logs/test_lite_head_seg.jpg";
auto *head_seg = new lite::cv::segmentation::HeadSeg(onnx_path, 4); // 4 threads
lite::types::HeadSegContent content;
cv::Mat img_bgr = cv::imread(test_img_path);
head_seg->detect(img_bgr, content);
if (content.flag) cv::imwrite(save_img_path, content.mask * 255.f);
delete head_seg;
}
The output is:
More portrait segmentation models available (head, portrait and hair segmentation):
auto *segment = new lite::cv::segmentation::HeadSeg(onnx_path); // 31Mb
auto *segment = new lite::cv::segmentation::FastPortraitSeg(onnx_path); // <= 400Kb !!!
auto *segment = new lite::cv::segmentation::PortraitSegSINet(onnx_path); // <= 380Kb !!!
auto *segment = new lite::cv::segmentation::PortraitSegExtremeC3Net(onnx_path); // <= 180Kb !!! Extreme Tiny !!!
auto *segment = new lite::cv::segmentation::FaceHairSeg(onnx_path); // 18M
auto *segment = new lite::cv::segmentation::HairSeg(onnx_path); // 18M
auto *segment = new lite::cv::segmentation::MobileHairSeg(onnx_path); // 14M
Example 12: Portrait cartoonization with Photo2Cartoon. Download the model files from Model Zoo.
#include "lite/lite.h"
static void test_default()
{
std::string head_seg_onnx_path = "../../../hub/onnx/cv/minivision_head_seg.onnx";
std::string cartoon_onnx_path = "../../../hub/onnx/cv/minivision_female_photo2cartoon.onnx";
std::string test_img_path = "../../../examples/lite/resources/test_lite_female_photo2cartoon.jpg";
std::string save_mask_path = "../../../logs/test_lite_female_photo2cartoon_seg.jpg";
std::string save_cartoon_path = "../../../logs/test_lite_female_photo2cartoon_cartoon.jpg";
auto *head_seg = new lite::cv::segmentation::HeadSeg(head_seg_onnx_path, 4); // 4 threads
auto *female_photo2cartoon = new lite::cv::style::FemalePhoto2Cartoon(cartoon_onnx_path, 4); // 4 threads
lite::types::HeadSegContent head_seg_content;
cv::Mat img_bgr = cv::imread(test_img_path);
head_seg->detect(img_bgr, head_seg_content);
if (head_seg_content.flag && !head_seg_content.mask.empty())
{
cv::imwrite(save_mask_path, head_seg_content.mask * 255.f);
// Female Photo2Cartoon Style Transfer
lite::types::FemalePhoto2CartoonContent female_cartoon_content;
female_photo2cartoon->detect(img_bgr, head_seg_content.mask, female_cartoon_content);
if (female_cartoon_content.flag && !female_cartoon_content.cartoon.empty())
cv::imwrite(save_cartoon_path, female_cartoon_content.cartoon);
}
delete head_seg;
delete female_photo2cartoon;
}
The output is:
More portrait stylization models:
auto *transfer = new lite::cv::style::FemalePhoto2Cartoon(onnx_path);
Lite.Ai.ToolKit is licensed under GPL-3.0.
This project references the following open-source projects.
- RobustVideoMatting (🔥🔥🔥new!!↑)
- nanodet (🔥🔥🔥↑)
- YOLOX (🔥🔥🔥new!!↑)
- YOLOP (🔥🔥new!!↑)
- YOLOR (🔥🔥new!!↑)
- ScaledYOLOv4 (🔥🔥🔥↑)
- insightface (🔥🔥🔥↑)
- yolov5 (🔥🔥💥↑)
- TFace (🔥🔥↑)
- YOLOv4-pytorch (🔥🔥🔥↑)
- Ultra-Light-Fast-Generic-Face-Detector-1MB (🔥🔥🔥↑)
Expand for more references
- headpose-fsanet-pytorch (🔥↑)
- pfld_106_face_landmarks (🔥🔥↑)
- onnx-models (🔥🔥🔥↑)
- SSR_Net_Pytorch (🔥↑)
- colorization (🔥🔥🔥↑)
- SUB_PIXEL_CNN (🔥↑)
- torchvision (🔥🔥🔥↑)
- facenet-pytorch (🔥↑)
- face.evoLVe.PyTorch (🔥🔥🔥↑)
- center-loss.pytorch (🔥🔥↑)
- sphereface_pytorch (🔥🔥↑)
- DREAM (🔥🔥↑)
- MobileFaceNet_Pytorch (🔥🔥↑)
- cavaface.pytorch (🔥🔥↑)
- CurricularFace (🔥🔥↑)
- face-emotion-recognition (🔥↑)
- face_recognition.pytorch (🔥🔥↑)
- PFLD-pytorch (🔥🔥↑)
- pytorch_face_landmark (🔥🔥↑)
- FaceLandmark1000 (🔥🔥↑)
- Pytorch_Retinaface (🔥🔥🔥↑)
- FaceBoxes (🔥🔥↑)
MNN, NCNN and TNN support for some models will be added in the future, but due to operator compatibility and other issues, there is no guarantee that every model supported by the ONNXRuntime C++ version will also run under MNN, NCNN and TNN. So if you want to use all the models supported by this project and don't mind a 1~2ms performance gap, please use the ONNXRuntime implementations. ONNXRuntime is the default inference engine of this repository. If you do want to build the Lite.Ai.ToolKit dynamic library with MNN, NCNN or TNN support, you can follow the steps below.
- Add -DENABLE_MNN=ON, -DENABLE_NCNN=ON or -DENABLE_TNN=ON in build.sh, for example:
cd build && cmake \
-DCMAKE_BUILD_TYPE=MinSizeRel \
-DINCLUDE_OPENCV=ON \ # pack OpenCV into lite.ai.toolkit, default ON; otherwise, you need to set up OpenCV yourself
-DENABLE_MNN=ON \ # build the MNN versions of the models, default OFF, only some models are supported for now
-DENABLE_NCNN=OFF \ # build the NCNN versions of the models, default OFF, only some models are supported for now
-DENABLE_TNN=OFF \ # build the TNN versions of the models, default OFF, only some models are supported for now
.. && make -j8
- Use the MNN, NCNN or TNN interfaces, for example (see the demos for details):
auto *nanodet = new lite::mnn::cv::detection::NanoDet(mnn_path);
auto *nanodet = new lite::tnn::cv::detection::NanoDet(proto_path, model_path);
auto *nanodet = new lite::ncnn::cv::detection::NanoDet(param_path, bin_path);
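As a usage sketch (assuming the MNN classes expose the same detect() interface as the ONNXRuntime versions shown above, which the demos suggest; the model path below is hypothetical):

```cpp
#include <iostream>
#include "lite/lite.h"

// Hedged sketch: the lite::mnn classes are assumed to mirror the detect()
// interface of their ONNXRuntime counterparts shown above; the .mnn file
// name below is a placeholder, not a documented path.
static void test_mnn_nanodet()
{
  std::string mnn_path = "../../../hub/mnn/cv/nanodet_m.mnn"; // hypothetical
  auto *nanodet = new lite::mnn::cv::detection::NanoDet(mnn_path);
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread("test.jpg"); // placeholder image
  nanodet->detect(img_bgr, detected_boxes);
  std::cout << "MNN NanoDet boxes: " << detected_boxes.size() << std::endl;
  delete nanodet;
}
```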
How to add your own models and become a contributor? See CONTRIBUTING.zh.md for the detailed steps.
Many thanks to the following contributors: