diff --git a/docs/support_list/model_list_gcu.en.md b/docs/support_list/model_list_gcu.en.md
index f7f8e364b..f4d44c651 100644
--- a/docs/support_list/model_list_gcu.en.md
+++ b/docs/support_list/model_list_gcu.en.md
@@ -6,7 +6,7 @@ comments: true
 PaddleX incorporates multiple pipelines, each containing several modules, and each module encompasses various models. You can select the appropriate models based on the benchmark data below. If you prioritize model accuracy, choose models with higher accuracy. If you prioritize model size, select models with smaller storage requirements.

-## Image Classification Module
+## [Image Classification Module](../module_usage/tutorials/cv_modules/image_classification.en.md)
@@ -17,15 +17,360 @@
+| ConvNeXt_base_224 | 83.84 | 313.9 M | Inference Model/Trained Model |
+| ConvNeXt_base_384 | 84.90 | 313.9 M | Inference Model/Trained Model |
+| ConvNeXt_large_224 | 84.26 | 700.7 M | Inference Model/Trained Model |
+| ConvNeXt_large_384 | 85.27 | 700.7 M | Inference Model/Trained Model |
+| ConvNeXt_small | 83.13 | 178.0 M | Inference Model/Trained Model |
+| ConvNeXt_tiny | 82.03 | 101.4 M | Inference Model/Trained Model |
+| FasterNet-L | 83.5 | 357.1 M | Inference Model/Trained Model |
+| FasterNet-M | 82.9 | 204.6 M | Inference Model/Trained Model |
+| FasterNet-S | 81.3 | 119.3 M | Inference Model/Trained Model |
+| FasterNet-T0 | 71.8 | 15.1 M | Inference Model/Trained Model |
+| FasterNet-T1 | 76.2 | 29.2 M | Inference Model/Trained Model |
+| FasterNet-T2 | 78.8 | 57.4 M | Inference Model/Trained Model |
+| MobileNetV1_x0_25 | 51.4 | 1.8 M | Inference Model/Trained Model |
+| MobileNetV1_x0_5 | 63.5 | 4.8 M | Inference Model/Trained Model |
+| MobileNetV1_x0_75 | 68.8 | 9.3 M | Inference Model/Trained Model |
+| MobileNetV1_x1_0 | 71.0 | 15.2 M | Inference Model/Trained Model |
+| MobileNetV2_x0_25 | 53.2 | 5.5 M | Inference Model/Trained Model |
+| MobileNetV2_x0_5 | 65.0 | 7.1 M | Inference Model/Trained Model |
+| MobileNetV2_x1_0 | 72.2 | 12.6 M | Inference Model/Trained Model |
+| MobileNetV2_x1_5 | 74.1 | 25.0 M | Inference Model/Trained Model |
+| MobileNetV2_x2_0 | 75.2 | 41.2 M | Inference Model/Trained Model |
+| MobileNetV3_large_x0_35 | 64.3 | 7.5 M | Inference Model/Trained Model |
+| MobileNetV3_large_x0_5 | 69.2 | 9.6 M | Inference Model/Trained Model |
+| MobileNetV3_large_x0_75 | 73.1 | 14.0 M | Inference Model/Trained Model |
+| MobileNetV3_large_x1_0 | 75.3 | 19.5 M | Inference Model/Trained Model |
+| MobileNetV3_large_x1_25 | 76.4 | 26.5 M | Inference Model/Trained Model |
+| MobileNetV3_small_x0_35 | 53.0 | 6.0 M | Inference Model/Trained Model |
+| MobileNetV3_small_x0_5 | 59.2 | 6.8 M | Inference Model/Trained Model |
+| MobileNetV3_small_x0_75 | 66.0 | 8.5 M | Inference Model/Trained Model |
+| MobileNetV3_small_x1_0 | 68.2 | 10.5 M | Inference Model/Trained Model |
+| MobileNetV3_small_x1_25 | 70.7 | 13.0 M | Inference Model/Trained Model |
+| MobileNetV4_conv_large | 83.4 | 125.2 M | Inference Model/Trained Model |
+| MobileNetV4_conv_medium | 80.9 | 37.6 M | Inference Model/Trained Model |
+| MobileNetV4_conv_small | 74.4 | 14.7 M | Inference Model/Trained Model |
+| PP-HGNet_base | 85.0 | 249.4 M | Inference Model/Trained Model |
+| PP-HGNet_small | 81.51 | 86.5 M | Inference Model/Trained Model |
+| PP-HGNet_tiny | 79.83 | 52.4 M | Inference Model/Trained Model |
+| PP-HGNetV2-B0 | 77.77 | 21.4 M | Inference Model/Trained Model |
+| PP-HGNetV2-B1 | 78.90 | 22.6 M | Inference Model/Trained Model |
+| PP-HGNetV2-B2 | 81.57 | 39.9 M | Inference Model/Trained Model |
+| PP-HGNetV2-B3 | 82.92 | 57.9 M | Inference Model/Trained Model |
+| PP-HGNetV2-B4 | 83.68 | 70.4 M | Inference Model/Trained Model |
+| PP-HGNetV2-B5 | 84.75 | 140.8 M | Inference Model/Trained Model |
+| PP-HGNetV2-B6 | 86.20 | 268.4 M | Inference Model/Trained Model |
+| PP-LCNet_x0_25 | 51.86 | 5.5 M | Inference Model/Trained Model |
+| PP-LCNet_x0_35 | 58.10 | 5.9 M | Inference Model/Trained Model |
+| PP-LCNet_x0_5 | 63.14 | 6.7 M | Inference Model/Trained Model |
+| PP-LCNet_x0_75 | 68.18 | 8.4 M | Inference Model/Trained Model |
+| PP-LCNet_x1_0 | 71.32 | 10.5 M | Inference Model/Trained Model |
+| PP-LCNet_x1_5 | 73.71 | 16.0 M | Inference Model/Trained Model |
+| PP-LCNet_x2_0 | 75.18 | 23.2 M | Inference Model/Trained Model |
+| PP-LCNet_x2_5 | 76.60 | 32.1 M | Inference Model/Trained Model |
+| PP-LCNetV2_base | 77.04 | 23.7 M | Inference Model/Trained Model |
+| PP-LCNetV2_large | 78.51 | 37.3 M | Inference Model/Trained Model |
+| PP-LCNetV2_small | 73.96 | 14.6 M | Inference Model/Trained Model |
+| ResNet18_vd | 72.3 | 41.5 M | Inference Model/Trained Model |
+| ResNet18 | 71.0 | 41.5 M | Inference Model/Trained Model |
+| ResNet34_vd | 76.0 | 77.3 M | Inference Model/Trained Model |
+| ResNet34 | 74.6 | 77.3 M | Inference Model/Trained Model |
+| ResNet50_vd | 79.1 | 90.8 M | Inference Model/Trained Model |
-| ResNet50 | 76.96 | 90.8 M | Inference Model/Trained Model |
+| ResNet50 | 76.5 | 90.8 M | Inference Model/Trained Model |
+| ResNet101_vd | 80.2 | 158.4 M | Inference Model/Trained Model |
+| ResNet101 | 77.6 | 158.7 M | Inference Model/Trained Model |
+| ResNet152_vd | 80.6 | 214.3 M | Inference Model/Trained Model |
+| ResNet152 | 78.3 | 214.2 M | Inference Model/Trained Model |
+| ResNet200_vd | 80.7 | 266.0 M | Inference Model/Trained Model |
+| StarNet-S1 | 73.5 | 11.2 M | Inference Model/Trained Model |
+| StarNet-S2 | 74.7 | 14.3 M | Inference Model/Trained Model |
+| StarNet-S3 | 77.4 | 22.2 M | Inference Model/Trained Model |
+| StarNet-S4 | 78.8 | 28.9 M | Inference Model/Trained Model |

 Note: The above accuracy metrics refer to Top-1 Accuracy on the [ImageNet-1k](https://www.image-net.org/index.php) validation set.

-## Object Detection Module
+## [Object Detection Module](../module_usage/tutorials/cv_modules/object_detection.en.md)
@@ -36,6 +381,31 @@
+| FCOS-ResNet50 | 39.6 | 124.2 M | Inference Model/Trained Model |
+| PicoDet-L | 42.5 | 20.9 M | Inference Model/Trained Model |
+| PicoDet-M | 37.4 | 16.8 M | Inference Model/Trained Model |
+| PicoDet-S | 29.0 | 4.4 M | Inference Model/Trained Model |
+| PicoDet-XS | 26.2 | 5.7 M | Inference Model/Trained Model |
 | PP-YOLOE_plus-L | 52.8 | 185.3 M | Inference Model/Trained Model |
@@ -84,7 +454,31 @@
 Note: The above accuracy metrics are for [COCO2017](https://cocodataset.org/#home) validation set mAP(0.5:0.95).

-## Text Detection Module
+## [Pedestrian Detection Module](../module_usage/tutorials/cv_modules/human_detection.en.md)
+| Model Name | mAP(%) | Model Size (M) | Model Download Link |
+| --- | --- | --- | --- |
+| PP-YOLOE-L_human | 48.0 | 196.1 M | Inference Model/Trained Model |
+| PP-YOLOE-S_human | 42.5 | 28.8 M | Inference Model/Trained Model |
+Note: The above accuracy metrics are mAP(0.5:0.95) on the [CrowdHuman](https://bj.bcebos.com/v1/paddledet/data/crowdhuman.zip) validation set.
+
+## [Text Detection Module](../module_usage/tutorials/ocr_modules/text_detection.en.md)
@@ -108,7 +502,7 @@
 Note: The above accuracy metrics are evaluated on PaddleOCR's self-built Chinese dataset, covering street scenes, web images, documents, and handwritten scenarios, with 500 images for detection.

-## Text Recognition Module
+## [Text Recognition Module](../module_usage/tutorials/ocr_modules/text_recognition.en.md)

diff --git a/docs/support_list/model_list_gcu.md b/docs/support_list/model_list_gcu.md
index ca75a698d..fcb5548a6 100644
--- a/docs/support_list/model_list_gcu.md
+++ b/docs/support_list/model_list_gcu.md
@@ -6,36 +6,406 @@ comments: true
 PaddleX ships with multiple pipelines, each containing several modules, and each module containing several models. You can choose which models to use based on the benchmark data below: if model accuracy matters more to you, choose a model with higher accuracy; if model storage size matters more, choose a model with a smaller storage size.

-## Image Classification Module
+## [Image Classification Module](../module_usage/tutorials/cv_modules/image_classification.md)
 | Model Name | Top-1 Acc (%) | Model Size (M) | Model Download Link |
 | --- | --- | --- | --- |
+| ConvNeXt_base_224 | 83.84 | 313.9 M | Inference Model/Trained Model |
+| ConvNeXt_base_384 | 84.90 | 313.9 M | Inference Model/Trained Model |
+| ConvNeXt_large_224 | 84.26 | 700.7 M | Inference Model/Trained Model |
+| ConvNeXt_large_384 | 85.27 | 700.7 M | Inference Model/Trained Model |
+| ConvNeXt_small | 83.13 | 178.0 M | Inference Model/Trained Model |
+| ConvNeXt_tiny | 82.03 | 101.4 M | Inference Model/Trained Model |
+| FasterNet-L | 83.5 | 357.1 M | Inference Model/Trained Model |
+| FasterNet-M | 82.9 | 204.6 M | Inference Model/Trained Model |
+| FasterNet-S | 81.3 | 119.3 M | Inference Model/Trained Model |
+| FasterNet-T0 | 71.8 | 15.1 M | Inference Model/Trained Model |
+| FasterNet-T1 | 76.2 | 29.2 M | Inference Model/Trained Model |
+| FasterNet-T2 | 78.8 | 57.4 M | Inference Model/Trained Model |
+| MobileNetV1_x0_25 | 51.4 | 1.8 M | Inference Model/Trained Model |
+| MobileNetV1_x0_5 | 63.5 | 4.8 M | Inference Model/Trained Model |
+| MobileNetV1_x0_75 | 68.8 | 9.3 M | Inference Model/Trained Model |
+| MobileNetV1_x1_0 | 71.0 | 15.2 M | Inference Model/Trained Model |
+| MobileNetV2_x0_25 | 53.2 | 5.5 M | Inference Model/Trained Model |
+| MobileNetV2_x0_5 | 65.0 | 7.1 M | Inference Model/Trained Model |
+| MobileNetV2_x1_0 | 72.2 | 12.6 M | Inference Model/Trained Model |
+| MobileNetV2_x1_5 | 74.1 | 25.0 M | Inference Model/Trained Model |
+| MobileNetV2_x2_0 | 75.2 | 41.2 M | Inference Model/Trained Model |
+| MobileNetV3_large_x0_35 | 64.3 | 7.5 M | Inference Model/Trained Model |
+| MobileNetV3_large_x0_5 | 69.2 | 9.6 M | Inference Model/Trained Model |
+| MobileNetV3_large_x0_75 | 73.1 | 14.0 M | Inference Model/Trained Model |
+| MobileNetV3_large_x1_0 | 75.3 | 19.5 M | Inference Model/Trained Model |
+| MobileNetV3_large_x1_25 | 76.4 | 26.5 M | Inference Model/Trained Model |
+| MobileNetV3_small_x0_35 | 53.0 | 6.0 M | Inference Model/Trained Model |
+| MobileNetV3_small_x0_5 | 59.2 | 6.8 M | Inference Model/Trained Model |
+| MobileNetV3_small_x0_75 | 66.0 | 8.5 M | Inference Model/Trained Model |
+| MobileNetV3_small_x1_0 | 68.2 | 10.5 M | Inference Model/Trained Model |
+| MobileNetV3_small_x1_25 | 70.7 | 13.0 M | Inference Model/Trained Model |
+| MobileNetV4_conv_large | 83.4 | 125.2 M | Inference Model/Trained Model |
+| MobileNetV4_conv_medium | 80.9 | 37.6 M | Inference Model/Trained Model |
+| MobileNetV4_conv_small | 74.4 | 14.7 M | Inference Model/Trained Model |
+| PP-HGNet_base | 85.0 | 249.4 M | Inference Model/Trained Model |
+| PP-HGNet_small | 81.51 | 86.5 M | Inference Model/Trained Model |
+| PP-HGNet_tiny | 79.83 | 52.4 M | Inference Model/Trained Model |
+| PP-HGNetV2-B0 | 77.77 | 21.4 M | Inference Model/Trained Model |
+| PP-HGNetV2-B1 | 78.90 | 22.6 M | Inference Model/Trained Model |
+| PP-HGNetV2-B2 | 81.57 | 39.9 M | Inference Model/Trained Model |
+| PP-HGNetV2-B3 | 82.92 | 57.9 M | Inference Model/Trained Model |
+| PP-HGNetV2-B4 | 83.68 | 70.4 M | Inference Model/Trained Model |
+| PP-HGNetV2-B5 | 84.75 | 140.8 M | Inference Model/Trained Model |
+| PP-HGNetV2-B6 | 86.20 | 268.4 M | Inference Model/Trained Model |
+| PP-LCNet_x0_25 | 51.86 | 5.5 M | Inference Model/Trained Model |
+| PP-LCNet_x0_35 | 58.10 | 5.9 M | Inference Model/Trained Model |
+| PP-LCNet_x0_5 | 63.14 | 6.7 M | Inference Model/Trained Model |
+| PP-LCNet_x0_75 | 68.18 | 8.4 M | Inference Model/Trained Model |
+| PP-LCNet_x1_0 | 71.32 | 10.5 M | Inference Model/Trained Model |
+| PP-LCNet_x1_5 | 73.71 | 16.0 M | Inference Model/Trained Model |
+| PP-LCNet_x2_0 | 75.18 | 23.2 M | Inference Model/Trained Model |
+| PP-LCNet_x2_5 | 76.60 | 32.1 M | Inference Model/Trained Model |
+| PP-LCNetV2_base | 77.04 | 23.7 M | Inference Model/Trained Model |
+| PP-LCNetV2_large | 78.51 | 37.3 M | Inference Model/Trained Model |
+| PP-LCNetV2_small | 73.96 | 14.6 M | Inference Model/Trained Model |
+| ResNet18_vd | 72.3 | 41.5 M | Inference Model/Trained Model |
+| ResNet18 | 71.0 | 41.5 M | Inference Model/Trained Model |
+| ResNet34_vd | 76.0 | 77.3 M | Inference Model/Trained Model |
+| ResNet34 | 74.6 | 77.3 M | Inference Model/Trained Model |
+| ResNet50_vd | 79.1 | 90.8 M | Inference Model/Trained Model |
-| ResNet50 | 76.96 | 90.8 M | Inference Model/Trained Model |
+| ResNet50 | 76.5 | 90.8 M | Inference Model/Trained Model |
+| ResNet101_vd | 80.2 | 158.4 M | Inference Model/Trained Model |
+| ResNet101 | 77.6 | 158.7 M | Inference Model/Trained Model |
+| ResNet152_vd | 80.6 | 214.3 M | Inference Model/Trained Model |
+| ResNet152 | 78.3 | 214.2 M | Inference Model/Trained Model |
+| ResNet200_vd | 80.7 | 266.0 M | Inference Model/Trained Model |
+| StarNet-S1 | 73.5 | 11.2 M | Inference Model/Trained Model |
+| StarNet-S2 | 74.7 | 14.3 M | Inference Model/Trained Model |
+| StarNet-S3 | 77.4 | 22.2 M | Inference Model/Trained Model |
+| StarNet-S4 | 78.8 | 28.9 M | Inference Model/Trained Model |

 Note: The above accuracy metrics are Top-1 Acc on the [ImageNet-1k](https://www.image-net.org/index.php) validation set.

-## Object Detection Module
+## [Object Detection Module](../module_usage/tutorials/cv_modules/object_detection.md)
 | Model Name | mAP (%) | Model Size (M) | Model Download Link |
 | --- | --- | --- | --- |
+| FCOS-ResNet50 | 39.6 | 124.2 M | Inference Model/Trained Model |
+| PicoDet-L | 42.5 | 20.9 M | Inference Model/Trained Model |
+| PicoDet-M | 37.4 | 16.8 M | Inference Model/Trained Model |
+| PicoDet-S | 29.0 | 4.4 M | Inference Model/Trained Model |
+| PicoDet-XS | 26.2 | 5.7 M | Inference Model/Trained Model |
 | PP-YOLOE_plus-L | 52.8 | 185.3 M | Inference Model/Trained Model |
@@ -84,13 +454,37 @@
 Note: The above accuracy metrics are mAP(0.5:0.95) on the [COCO2017](https://cocodataset.org/#home) validation set.

-## Text Detection Module
+## [Pedestrian Detection Module](../module_usage/tutorials/cv_modules/human_detection.md)
+| Model Name | mAP (%) | Model Size | Model Download Link |
+| --- | --- | --- | --- |
+| PP-YOLOE-L_human | 48.0 | 196.1 M | Inference Model/Trained Model |
+| PP-YOLOE-S_human | 42.5 | 28.8 M | Inference Model/Trained Model |
+Note: The above accuracy metrics are mAP(0.5:0.95) on the [CrowdHuman](https://bj.bcebos.com/v1/paddledet/data/crowdhuman.zip) validation set.
+
+## [Text Detection Module](../module_usage/tutorials/ocr_modules/text_detection.md)
@@ -102,19 +496,19 @@
 | Model Name | Detection Hmean (%) | Model Size (M) | Model Download Link |
 | --- | --- | --- | --- |
-| PP-OCRv4_server_det | 82.69 | 100.1M | Inference Model/Trained Model |
+| PP-OCRv4_server_det | 82.69 | 100.1 M | Inference Model/Trained Model |

 Note: The evaluation set for the above accuracy metrics is PaddleOCR's self-built Chinese dataset, covering street scenes, web images, documents, and handwriting, with 500 images for detection.

-## Text Recognition Module
+## [Text Recognition Module](../module_usage/tutorials/ocr_modules/text_recognition.md)
 | Model Name | Recognition Avg Accuracy (%) | Model Size (M) | Model Download Link |
 | --- | --- | --- | --- |
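
For context, the model names in the tables above are the identifiers passed to PaddleX when running inference on Enflame GCU devices. The sketch below is a minimal, hypothetical illustration based on the `create_model` API described in the module tutorials linked throughout this change; the `device="gcu:0"` string, the `device` keyword argument, the chosen model name, and the sample image path are assumptions for illustration, not part of this diff.

```python
# Minimal sketch (illustrative, not part of this diff): run one of the listed
# models on an Enflame GCU through the PaddleX single-model API.
# Assumptions: PaddlePaddle/PaddleX are installed with GCU support and the
# device is addressed as "gcu:0"; "PP-LCNet_x1_0" and "demo.jpg" are placeholders.
from paddlex import create_model

model = create_model(model_name="PP-LCNet_x1_0", device="gcu:0")

# predict() returns an iterable of per-image results.
for res in model.predict("demo.jpg", batch_size=1):
    res.print()  # print the predicted label and score for the input image
```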