We assume the root path is `$OMC`, e.g. /home/chaoliang/MOT, and `$conda_path` denotes your Anaconda path, e.g. /home/chaoliang/anaconda3.
```
conda create -n OMC python=3.8
source activate OMC
cd $OMC/lib/tutorial/
pip install -r requirements.txt
```
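Optionally, a quick sanity check of the new environment (a minimal sketch; it assumes PyTorch is installed by requirements.txt):

```python
# check_env.py - minimal sanity check for the OMC environment.
# Assumes requirements.txt installs PyTorch; adjust if your setup differs.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```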
- Download the testing model [[Google Drive]](https://drive.google.com/drive/folders/1lG-bwk22uJjUw5DBy92-h857qMXKJ1Ba?usp=sharing) [[Baidu NetDisk (omct)]](https://pan.baidu.com/s/1mqEFjZJ4Cz00Zy9erl7auw) to `$OMC/model`.
- Download the testing data, e.g. MOT-16, and put it in `$OMC/dataset`. The datasets can be downloaded from their official webpages.
In the root path `$OMC/tracking`, run:
```
python test_omc.py --weights ../model/OMC_mot17.pt \
                   --cfg ../experiments/model_set/CSTrack_l.yaml \
                   --name l-mot16-test \
                   --test_mot16 True \
                   --output_root runs/test_w_recheck

python test_omc.py --weights ../model/OMC_mot17.pt \
                   --cfg ../experiments/model_set/CSTrack_l.yaml \
                   --name l-mot17-test \
                   --test_mot17 True \
                   --output_root runs/test_w_recheck

python test_omc.py --weights ../model/OMC_mot20.pt \
                   --cfg ../experiments/model_set/CSTrack_l.yaml \
                   --name l-mot20-test \
                   --test_mot20 True \
                   --output_root runs/test_w_recheck
```
- Note: to evaluate the performance of the model, please sign up at the MOT Challenge website and submit the results there.
☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️
- Download the pretrained model (pretrained on the COCO dataset) [Google Drive] [Baidu NetDisk (omct)] to `$OMC/weights`.
- We provide several relevant datasets for training and evaluating CSTrack. Annotations are provided in a unified format and all the datasets have the following structure:
```
Caltech
   |——————images
   |        └——————00001.jpg
   |        |—————— ...
   |        └——————0000N.jpg
   └——————labels_with_ids
            └——————00001.txt
            |—————— ...
            └——————0000N.txt
```
Every image has a corresponding annotation text. Given an image path, the annotation text path can be generated by replacing the string `images` with `labels_with_ids` and replacing `.jpg` with `.txt`.
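As a concrete illustration (the helper name is ours, not part of the codebase), the mapping can be expressed as:

```python
# Map an image path to its annotation path, following the convention above.
def image_to_label_path(img_path: str) -> str:
    return img_path.replace("images", "labels_with_ids").replace(".jpg", ".txt")

print(image_to_label_path("Caltech/images/00001.jpg"))
# -> Caltech/labels_with_ids/00001.txt
```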
In the annotation text, each line describes one bounding box and has the following format: `[class] [identity] [x_center] [y_center] [width] [height]`.
The field `[class]` should be `0`; only single-class multi-object tracking is supported in this version.
The field `[identity]` is an integer from `0` to `num_identities - 1`, or `-1` if the box has no identity annotation.
Note that the values of `[x_center] [y_center] [width] [height]` are normalized by the width/height of the image, so they are floating point numbers ranging from 0 to 1.
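For illustration, a minimal sketch (the helper and example values below are ours, not part of the repository) that parses one annotation line and converts the normalized box back to pixel coordinates:

```python
# Parse one line of a labels_with_ids annotation file and convert the
# normalized box back to pixel coordinates for an image of size (img_w, img_h).
def parse_label_line(line: str, img_w: int, img_h: int):
    cls, identity, xc, yc, w, h = line.split()
    return {
        "class": int(cls),          # always 0 in this version
        "identity": int(identity),  # -1 means no identity annotation
        "x_center": float(xc) * img_w,
        "y_center": float(yc) * img_h,
        "width": float(w) * img_w,
        "height": float(h) * img_h,
    }

# Example: a 1920x1080 frame with one annotated pedestrian (made-up numbers).
print(parse_label_line("0 7 0.5 0.5 0.1 0.3", img_w=1920, img_h=1080))
```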
The Caltech, CityPersons, CUHK-SYSU, PRW, ETHZ and MOT-17 datasets follow the JDE format [1].
Baidu NetDisk: [0] [1] [2] [3] [4] [5] [6] [7]
Google Drive: [annotations]; please download all the image .tar files from this page and unzip the images under `Caltech/images`. You may need this tool to convert the original data format to jpeg images.
Original dataset webpage: CaltechPedestrians
Baidu NetDisk: [0] [1] [2] [3]
Original dataset webpage: Citypersons pedestrian detection dataset
Baidu NetDisk: [0]
Google Drive: [0]
Original dataset webpage: CUHK-SYSU Person Search Dataset
Baidu NetDisk: [0]
Google Drive: [0]
Original dataset webpage: Person Search in the Wild dataset
Baidu NetDisk: [0]
Google Drive: [0]
Original dataset webpage: ETHZ pedestrian dataset
Baidu NetDisk: [0]
Google Drive: [0]
Original dataset webpage: MOT-17
Baidu NetDisk: [0]
Google Drive: [0]
Original dataset webpage: MOT-16
The CrowdHuman dataset can be downloaded from its official webpage. The annotation text can be downloaded from the Baidu NetDisk and Google Drive links we provide below.
Baidu NetDisk: [l77e]
Google Drive: [0]
Original dataset webpage: CrowdHuman
The CrowdHuman dataset has the following structure:
```
crowdhuman
   |——————images
   |        |——————train
   |        |        └——————00001.jpg
   |        |        |—————— ...
   |        |        └——————0000N.jpg
   |        └——————val
   |                 └——————00001.jpg
   |                 |—————— ...
   |                 └——————0000N.jpg
   └——————labels_with_ids
            |——————train
            |        └——————00001.txt
            |        |—————— ...
            |        └——————0000N.txt
            └——————val
                     └——————00001.txt
                     |—————— ...
                     └——————0000N.txt
```
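Optionally, a small consistency check (ours, not part of the repository) can confirm that every image under a dataset root has a matching annotation file, following the `images` → `labels_with_ids` convention described above:

```python
# Verify that every .jpg under <root>/images has a matching .txt under
# <root>/labels_with_ids; missing label files are reported.
import os
import sys

def check_dataset(root: str) -> None:
    missing = []
    for dirpath, _, filenames in os.walk(os.path.join(root, "images")):
        for name in filenames:
            if not name.endswith(".jpg"):
                continue
            img_path = os.path.join(dirpath, name)
            label_path = img_path.replace("images", "labels_with_ids").replace(".jpg", ".txt")
            if not os.path.isfile(label_path):
                missing.append(label_path)
    print(f"{len(missing)} label files missing")
    for path in missing[:10]:
        print("  missing:", path)

if __name__ == "__main__":
    check_dataset(sys.argv[1])  # e.g. python check_dataset.py /path/to/crowdhuman
```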
- Modify scripts: set the dataset path in line 2 of `$OMC/lib/dataset/mot/cfg/*.json` (see the sketch after this list).
- cd `$OMC/tracking/`
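A minimal sketch of updating the dataset path programmatically, assuming each cfg json keeps the path under a top-level `"root"` key (this key name is an assumption based on the "line 2" hint above; check your own cfg files before relying on it):

```python
# update_cfg_root.py - run from $OMC.
# Assumption: the dataset path sits under a top-level "root" key in each cfg json.
import glob
import json

DATASET_ROOT = "/home/chaoliang/MOT/dataset"  # adjust to your machine

for cfg_path in glob.glob("lib/dataset/mot/cfg/*.json"):
    with open(cfg_path) as f:
        cfg = json.load(f)
    cfg["root"] = DATASET_ROOT
    with open(cfg_path, "w") as f:
        json.dump(cfg, f, indent=4)
    print("updated", cfg_path)
```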
For MOT-17:
1). First stage: CSTrack training
```
python train_omc.py --weights ../weights/yolov5l_coco.pt --data ../lib/dataset/mot/cfg/data_ch.json --name l-all --device 0
```
2). Second stage: train with the re-check network
```
python train_omc.py --weights ../runs/train/l-all/weights/best.pt --data ../lib/dataset/mot/cfg/mot17.json --project ../runs/train_w_recheck --name l-mot17 --device 0 --recheck --noautoanchor
```
For MOT-20:
1). First stage: CSTrack training
```
python train_omc.py --weights ../runs/train/l-all/weights/best.pt --data ../lib/dataset/mot/cfg/mot20.json --name l-mot20 --device 0
```
2). Second stage: train with the re-check network
```
python train_omc.py --weights ../runs/train/l-mot20/weights/best.pt --data ../lib/dataset/mot/cfg/mot20.json --project ../runs/train_w_recheck --name l-mot20 --device 0 --recheck --noautoanchor
```
Train on your own dataset:
1). First stage
```
# --weights can also be ../model/OMC_mot17.pt
# --data points to the cfg json describing your own dataset
python train_omc.py --weights ../weights/yolov5l_coco.pt \
                    --data ../lib/dataset/mot/cfg/xx.json \
                    --device 0 \
                    --batch_size 8 \
                    --epochs 30 \
                    --name project_name
```
2). Second stage
```
python train_omc.py --recheck \
                    --noautoanchor \
                    --weights ../runs/train/project_name/weights/best.pt \
                    --data ../lib/dataset/mot/cfg/xx.json \
                    --device 0
```
[1] Z. Wang, L. Zheng, et al. Towards Real-Time Multi-Object Tracking. ECCV 2020.
[2] C. Liang, Z. Zhang, et al. Rethinking the Competition between Detection and ReID in Multi-Object Tracking. arXiv 2020.
[3] YOLOv5. https://github.com/ultralytics/yolov5.