
Commit

fix: update model download instruction
Signed-off-by: badai-nguyen <[email protected]>
badai-nguyen committed Jul 16, 2024
1 parent de49241 commit e22939b
Showing 1 changed file with 8 additions and 106 deletions.
114 changes: 8 additions & 106 deletions docs/models/index.md
@@ -1,43 +1,18 @@
# Machine learning models

The Autoware perception stack uses models for inference. These models are automatically downloaded if using `ansible`, but they can also be downloaded manually.
The Autoware perception stack uses models for inference. These models are automatically downloaded as part of the `setup-dev-env.sh` script.
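For orientation, a minimal sketch of how that automatic download is usually triggered (assuming a fresh clone of the autowarefoundation/autoware repository; the prompts the script asks along the way are omitted):

```console
# Illustrative only: clone the Autoware meta-repository and run the setup script,
# which installs dependencies and downloads the model artifacts into the data directory.
$ git clone https://github.com/autowarefoundation/autoware.git
$ cd autoware
$ ./setup-dev-env.sh
```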

## ONNX model files
The models are hosted by Web.Auto.

### Download instructions
The default models directory (`data_dir`) is `~/autoware_data`.

The ONNX model files are stored in a common location, hosted by Web.Auto
## Download instructions

Any tool that can download files from the web (e.g. `wget` or `curl`) is the only requirement for downloading these files:
Please follow the steps in the [autoware download instructions](https://github.com/autowarefoundation/autoware/blob/main/ansible/roles/artifacts/README.md#L15) to download the latest models.
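For illustration, a hedged sketch of the kind of command those instructions describe; the `autoware.dev_env.download_artifacts` playbook name and the `data_dir` variable are assumptions based on the Autoware ansible layout, so defer to the linked README if they differ:

```console
# Assumed invocation (verify against the linked README): install the ansible
# collections shipped with autoware, then run the artifact download playbook.
$ cd autoware
$ ansible-galaxy collection install -f -r "ansible-galaxy-requirements.yaml"
$ ansible-playbook autoware.dev_env.download_artifacts -e "data_dir=$HOME/autoware_data"
```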

```console
# yabloc_pose_initializer
The models can also be downloaded manually using tools such as `wget` or `curl`. The latest URLs of the weight and parameter files for each model can be found in the [autoware main.yaml file](https://github.com/autowarefoundation/autoware/blob/main/ansible/roles/artifacts/tasks/main.yaml).

$ mkdir -p ~/autoware_data/yabloc_pose_initializer/
$ wget -P ~/autoware_data/yabloc_pose_initializer/ \
https://s3.ap-northeast-2.wasabisys.com/pinto-model-zoo/136_road-segmentation-adas-0001/resources.tar.gz


# image_projection_based_fusion

$ mkdir -p ~/autoware_data/image_projection_based_fusion/
$ wget -P ~/autoware_data/image_projection_based_fusion/ \
https://awf.ml.dev.web.auto/perception/models/pointpainting/v4/pts_voxel_encoder_pointpainting.onnx \
https://awf.ml.dev.web.auto/perception/models/pointpainting/v4/pts_backbone_neck_head_pointpainting.onnx \
https://awf.ml.dev.web.auto/perception/models/pointpainting/v4/detection_class_remapper.param.yaml \
https://awf.ml.dev.web.auto/perception/models/pointpainting/v4/pointpainting_ml_package.param.yaml


# lidar_apollo_instance_segmentation

$ mkdir -p ~/autoware_data/lidar_apollo_instance_segmentation/
$ wget -P ~/autoware_data/lidar_apollo_instance_segmentation/ \
https://awf.ml.dev.web.auto/perception/models/lidar_apollo_instance_segmentation/vlp-16.onnx \
https://awf.ml.dev.web.auto/perception/models/lidar_apollo_instance_segmentation/hdl-64.onnx \
https://awf.ml.dev.web.auto/perception/models/lidar_apollo_instance_segmentation/vls-128.onnx


# lidar_centerpoint
An example of downloading the `lidar_centerpoint` model:

$ mkdir -p ~/autoware_data/lidar_centerpoint/
$ wget -P ~/autoware_data/lidar_centerpoint/ \
@@ -49,77 +24,4 @@ $ wget -P ~/autoware_data/lidar_centerpoint/ \
https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/centerpoint_tiny_ml_package.param.yaml \
https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/centerpoint_sigma_ml_package.param.yaml \
https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/detection_class_remapper.param.yaml \
https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/deploy_metadata.yaml

# lidar_transfusion

$ mkdir -p ~/autoware_data/lidar_transfusion/
$ wget -P ~/autoware_data/lidar_transfusion/ \
https://awf.ml.dev.web.auto/perception/models/transfusion/t4xx1_90m/v2/transfusion.onnx \
https://awf.ml.dev.web.auto/perception/models/transfusion/t4xx1_90m/v2/transfusion.param.yaml \
https://awf.ml.dev.web.auto/perception/models/transfusion/t4xx1_90m/v2/detection_class_remapper.param.yaml

# tensorrt_yolo

$ mkdir -p ~/autoware_data/tensorrt_yolo/
$ wget -P ~/autoware_data/tensorrt_yolo/ \
https://awf.ml.dev.web.auto/perception/models/yolov3.onnx \
https://awf.ml.dev.web.auto/perception/models/yolov4.onnx \
https://awf.ml.dev.web.auto/perception/models/yolov4-tiny.onnx \
https://awf.ml.dev.web.auto/perception/models/yolov5s.onnx \
https://awf.ml.dev.web.auto/perception/models/yolov5m.onnx \
https://awf.ml.dev.web.auto/perception/models/yolov5l.onnx \
https://awf.ml.dev.web.auto/perception/models/yolov5x.onnx \
https://awf.ml.dev.web.auto/perception/models/coco.names


# tensorrt_yolox

$ mkdir -p ~/autoware_data/tensorrt_yolox/
$ wget -P ~/autoware_data/tensorrt_yolox/ \
https://awf.ml.dev.web.auto/perception/models/yolox-tiny.onnx \
https://awf.ml.dev.web.auto/perception/models/yolox-sPlus-opt.onnx \
https://awf.ml.dev.web.auto/perception/models/yolox-sPlus-opt.EntropyV2-calibration.table \
https://awf.ml.dev.web.auto/perception/models/object_detection_yolox_s/v1/yolox-sPlus-T4-960x960-pseudo-finetune.onnx \
https://awf.ml.dev.web.auto/perception/models/object_detection_yolox_s/v1/yolox-sPlus-T4-960x960-pseudo-finetune.EntropyV2-calibration.table \
https://awf.ml.dev.web.auto/perception/models/object_detection_semseg_yolox_s/v1/yolox-sPlus-opt-pseudoV2-T4-960x960-T4-seg16cls.onnx \
https://awf.ml.dev.web.auto/perception/models/object_detection_semseg_yolox_s/v1/yolox-sPlus-opt-pseudoV2-T4-960x960-T4-seg16cls.EntropyV2-calibration.table \
https://awf.ml.dev.web.auto/perception/models/object_detection_semseg_yolox_s/v1/semseg_color_map.csv \
https://awf.ml.dev.web.auto/perception/models/label.txt



# traffic_light_classifier

$ mkdir -p ~/autoware_data/traffic_light_classifier/
$ wget -P ~/autoware_data/traffic_light_classifier/ \
https://awf.ml.dev.web.auto/perception/models/traffic_light_classifier/v2/traffic_light_classifier_mobilenetv2_batch_1.onnx \
https://awf.ml.dev.web.auto/perception/models/traffic_light_classifier/v2/traffic_light_classifier_mobilenetv2_batch_4.onnx \
https://awf.ml.dev.web.auto/perception/models/traffic_light_classifier/v2/traffic_light_classifier_mobilenetv2_batch_6.onnx \
https://awf.ml.dev.web.auto/perception/models/traffic_light_classifier/v2/traffic_light_classifier_efficientNet_b1_batch_1.onnx \
https://awf.ml.dev.web.auto/perception/models/traffic_light_classifier/v2/traffic_light_classifier_efficientNet_b1_batch_4.onnx \
https://awf.ml.dev.web.auto/perception/models/traffic_light_classifier/v2/traffic_light_classifier_efficientNet_b1_batch_6.onnx \
https://awf.ml.dev.web.auto/perception/models/traffic_light_classifier/v3/ped_traffic_light_classifier_mobilenetv2_batch_1.onnx \
https://awf.ml.dev.web.auto/perception/models/traffic_light_classifier/v3/ped_traffic_light_classifier_mobilenetv2_batch_4.onnx \
https://awf.ml.dev.web.auto/perception/models/traffic_light_classifier/v3/ped_traffic_light_classifier_mobilenetv2_batch_6.onnx \
https://awf.ml.dev.web.auto/perception/models/traffic_light_classifier/v3/lamp_labels_ped.txt \
https://awf.ml.dev.web.auto/perception/models/traffic_light_classifier/v2/lamp_labels.txt


# traffic_light_fine_detector

$ mkdir -p ~/autoware_data/traffic_light_fine_detector/
$ wget -P ~/autoware_data/traffic_light_fine_detector/ \
https://awf.ml.dev.web.auto/perception/models/tlr_yolox_s/v3/tlr_car_ped_yolox_s_batch_1.onnx \
https://awf.ml.dev.web.auto/perception/models/tlr_yolox_s/v3/tlr_car_ped_yolox_s_batch_4.onnx \
https://awf.ml.dev.web.auto/perception/models/tlr_yolox_s/v3/tlr_car_ped_yolox_s_batch_6.onnx \
https://awf.ml.dev.web.auto/perception/models/tlr_yolox_s/v3/tlr_labels.txt

# tvm_utility

$ mkdir -p ~/autoware_data/tvm_utility/models/
$ wget -P ~/autoware_data/tvm_utility/ \
https://autoware-modelzoo.s3.us-east-2.amazonaws.com/models/3.0.0-20221221/yolo_v2_tiny-x86_64-llvm-3.0.0-20221221.tar.gz
$ tar -xvzf ~/autoware_data/tvm_utility/yolo_v2_tiny-x86_64-llvm-3.0.0-20221221.tar.gz -C ~/autoware_data/tvm_utility/models/

```
https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/deploy_metadata.yaml
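
As a small illustration of the `curl` alternative mentioned above (reusing one URL from the `lidar_centerpoint` example; any other URL listed in `main.yaml` works the same way):

```console
# Equivalent single-file download with curl: create the target directory,
# then fetch the file into it, following redirects.
$ mkdir -p ~/autoware_data/lidar_centerpoint/
$ curl -L -o ~/autoware_data/lidar_centerpoint/deploy_metadata.yaml \
  https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/deploy_metadata.yaml
```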
