
OMC tutorial

We assume the root path is $OMC, e.g. /home/chaoliang/MOT

Set up environment

$conda_path denotes your anaconda path, e.g. /home/chaoliang/anaconda3

conda create -n OMC python=3.8
source activate OMC
cd $OMC/lib/tutorial/
pip install -r requirements.txt
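Once the requirements are installed, a quick sanity check (an optional one-liner we add here; it assumes PyTorch is in requirements.txt, which a YOLOv5-based tracker needs):

```bash
# Verify that PyTorch imports and report whether CUDA is visible.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```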

Testing

Prepare data and models

  1. Download the testing model [[Google Drive]](https://drive.google.com/drive/folders/1lG-bwk22uJjUw5DBy92-h857qMXKJ1Ba?usp=sharing) [[Baidu NetDisk (omct)]](https://pan.baidu.com/s/1mqEFjZJ4Cz00Zy9erl7auw) to $OMC/model.
  2. Download testing data, e.g. MOT-16, and put it in $OMC/dataset. The datasets can be downloaded from their official webpages.
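After these two steps, the relevant part of the tree under $OMC should look roughly like this (a sketch assuming MOT-16 as the test set; the dataset folder name follows the official download):

```
$OMC
   |——————model
   |        └——————OMC_mot17.pt
   └——————dataset
            └——————MOT16
```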

Inference

In root path $OMC/tracking,

For MOT-16

python test_omc.py --weights ../model/OMC_mot17.pt \
                   --cfg ../experiments/model_set/CSTrack_l.yaml \
                   --name l-mot16-test \
                   --test_mot16 True \
                   --output_root runs/test_w_recheck

For MOT-17

python test_omc.py --weights ../model/OMC_mot17.pt \
                   --cfg ../experiments/model_set/CSTrack_l.yaml \
                   --name l-mot17-test \
                   --test_mot17 True \
                   --output_root runs/test_w_recheck

For MOT-20

python test_omc.py --weights ../model/OMC_mot20.pt \
                   --cfg ../experiments/model_set/CSTrack_l.yaml \
                   --name l-mot20-test \
                   --test_mot20 True \
                   --output_root runs/test_w_recheck
  • Note: To evaluate the model's performance on the test sets, please sign up at the MOT Challenge website and submit your results there.

☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️

Training

Prepare data and models

  1. Download the pretrained model (pretrained on the COCO dataset) [Google Drive] [Baidu NetDisk (omct)] to $OMC/weights.

  2. We provide several relevant datasets for training and evaluating CSTrack [2]. Annotations are provided in a unified format, and all the datasets have the following structure:

Caltech
   |——————images
   |        └——————00001.jpg
   |        |—————— ...
   |        └——————0000N.jpg
   └——————labels_with_ids
            └——————00001.txt
            |—————— ...
            └——————0000N.txt

Every image has a corresponding annotation text file. Given an image path, the annotation path can be generated by replacing the string images with labels_with_ids and replacing .jpg with .txt, as shown below.
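A minimal sketch of this mapping (the variable names are ours, not from the repo):

```python
# Derive the annotation path from an image path via two string replacements.
img_path = "Caltech/images/00001.jpg"
label_path = img_path.replace("images", "labels_with_ids").replace(".jpg", ".txt")
print(label_path)  # Caltech/labels_with_ids/00001.txt
```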

In the annotation file, each line describes one bounding box in the following format:

[class] [identity] [x_center] [y_center] [width] [height]

The field [class] should be 0. Only single-class multi-object tracking is supported in this version.

The field [identity] is an integer from 0 to num_identities - 1, or -1 if this box has no identity annotation.

Note that the values of [x_center] [y_center] [width] [height] are normalized by the image width/height, so they are floating-point numbers between 0 and 1.
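To make the format concrete, here is a minimal sketch of how one annotation line can be parsed and denormalized back to pixel coordinates (the helper name and the top-left box convention are ours, not part of the repo):

```python
def parse_label_line(line, img_w, img_h):
    """Parse one '[class] [identity] [x_center] [y_center] [width] [height]' line."""
    cls, identity, xc, yc, w, h = line.split()
    cls, identity = int(cls), int(identity)  # identity is -1 if unannotated
    # Denormalize: the last four fields are fractions of the image size.
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    # Convert the center-based box to a top-left (x1, y1, w, h) box for drawing.
    x1, y1 = xc - w / 2, yc - h / 2
    return cls, identity, (x1, y1, w, h)

print(parse_label_line("0 17 0.5 0.5 0.1 0.2", img_w=1920, img_h=1080))
# (0, 17, (864.0, 432.0, 192.0, 216.0))
```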

The datasets Caltech, CityPersons, CUHK-SYSU, PRW, ETHZ, and MOT-17 follow JDE [1].

Caltech Pedestrian

Baidu NetDisk: [0] [1] [2] [3] [4] [5] [6] [7]

Google Drive: [annotations]; please download all the image .tar files from this page and unzip them under Caltech/images.

You may need this tool to convert the original data format to JPEG images. Original dataset webpage: CaltechPedestrians

CityPersons

Baidu NetDisk: [0] [1] [2] [3]

Google Drive: [0] [1] [2] [3]

Original dataset webpage: Citypersons pedestrian detection dataset

CUHK-SYSU

Baidu NetDisk: [0]

Google Drive: [0]

Original dataset webpage: CUHK-SYSU Person Search Dataset

PRW

Baidu NetDisk: [0]

Google Drive: [0]

Original dataset webpage: Person Search in the Wild dataset

ETHZ (overlapping videos with MOT-16 removed):

Baidu NetDisk: [0]

Google Drive: [0]

Original dataset webpage: ETHZ pedestrian dataset

MOT-17

Baidu NetDisk: [0]

Google Drive: [0]

Original dataset webpage: MOT-17

MOT-16 (for evaluation)

Baidu NetDisk: [0]

Google Drive: [0]

Original dataset webpage: MOT-16

CrowdHuman

The CrowdHuman dataset can be downloaded from its official webpage. The annotation text files can be downloaded from the Baidu NetDisk and Google Drive links below.

Baidu NetDisk: [l77e]

Google Drive: [0]

Original dataset webpage: CrowdHuman

The CrowdHuman dataset has the following structure:

crowdhuman
   |——————images
            |——————train
            |        └——————00001.jpg
            |        |—————— ...
            |        └——————0000N.jpg
            |——————val
            |        └——————00001.jpg
            |        |—————— ...
            |        └——————0000N.jpg
   └——————labels_with_ids
            |——————train
            |        └——————00001.txt
            |        |—————— ...
            |        └——————0000N.txt
            |——————val
            |        └——————00001.txt
            |        |—————— ...
            |        └——————0000N.txt

Start training

  1. Modify scripts: set the dataset path in line 2 of $OMC/lib/dataset/mot/cfg/*.json (see the sketch after this list).

  2. cd $OMC/tracking/
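These cfg files follow a JDE-style layout, so the edit in step 1 is normally the root path on line 2. A hedged sketch of what such a file may look like (the field names are assumptions based on JDE, not copied from the repo):

```json
{
    "root": "/home/chaoliang/dataset",
    "train": {
        "mot17": "./data/mot17.train"
    },
    "test": {
        "mot17": "./data/mot17.val"
    }
}
```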

For MOT16/MOT17

1). First stage: CSTrack training

python train_omc.py --weights ../weights/yolov5l_coco.pt --data ../lib/dataset/mot/cfg/data_ch.json --name l-all --device 0

2). Second stage: train with the re-check network

python train_omc.py --weights ../runs/train/l-all/weights/best.pt --data ../lib/dataset/mot/cfg/mot17.json --project ../runs/train_w_recheck  --name l-mot17 --device 0 --recheck --noautoanchor

For MOT20

1). First stage: CSTrack training

python train_omc.py --weights ../runs/train/l-all/weights/best.pt  --data ../lib/dataset/mot/cfg/mot20.json --name l-mot20 --device 0

2). Second stage: train with the re-check network

python train_omc.py --weights ../runs/train/l-mot20/weights/best.pt --data ../lib/dataset/mot/cfg/mot20.json --project ../runs/train_w_recheck  --name l-mot20 --device 0 --recheck --noautoanchor

Training on your own dataset

1). First stage

# --weights: start from ../weights/yolov5l_coco.pt or ../model/OMC_mot17.pt
# --data: the cfg json that points to your own dataset
python train_omc.py --weights ../weights/yolov5l_coco.pt \
                    --data ../lib/dataset/mot/cfg/xx.json \
                    --device 0 \
                    --batch_size 8 \
                    --epochs 30 \
                    --name project_name

2). Second stage

python train_omc.py --recheck \
                    --noautoanchor \
                    --weights ../runs/train/project_name/weights/best.pt \
                    --data ../lib/dataset/mot/cfg/xx.json \
                    --device 0

References

[1] Z. Wang, L. Zheng, et al. Towards Real-Time Multi-Object Tracking. ECCV 2020.
[2] C. Liang, Z. Zhang, et al. Rethinking the Competition between Detection and ReID in Multi-Object Tracking. arXiv 2020.
[3] YOLOv5. https://github.com/ultralytics/yolov5.