This repository implements object detection based on YOLOv3: An Incremental Improvement, in PyTorch v0.4. Thanks to ayooshkathuria/pytorch-yolo-v3 and ultralytics/yolov3; building on their work, I re-implemented YOLO v3 in PyTorch for better readability and re-usability.
The full update log can be found in issue #2.
- (2018/10/10) Support training on VOC dataset.
- Python 3.6
- PyTorch 0.4.1
- CUDA (CPU is not supported)
- pycocoapi
- Download the COCO detection dataset and annotations, and provide the full paths to your downloaded data in `config.py`, for example:

  ```python
  'coco': {
      'train_imgs': '/home/data/coco/2017/train2017',
      'train_anno': '/home/data/coco/2017/annotations/instances_train2017.json'
  }
  ```
- Download the official Darknet53 weights pre-trained on ImageNet here, and move the file to `checkpoint/darknet/darknet53.conv.74.weights`.
- Transform the weights into a PyTorch-readable checkpoint file `0.ckpt` by running the command below (a rough sketch of how such a conversion works follows this list):

  ```bash
  $ python transfer.py --dataset=coco --weights=darknet53.conv.74.weights
  ```
- Run:

  ```bash
  $ python train.py
  ```
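`transfer.py` handles the conversion step above. As background, here is a simplified, hypothetical sketch of how Darknet `.weights` files are commonly parsed in PyTorch re-implementations (this is not the repository's actual conversion code): the file begins with an integer header, followed by raw float32 values that are copied into batch-norm and convolution parameters in a fixed order.

```python
import numpy as np
import torch
import torch.nn as nn

def load_conv_bn(weights, ptr, conv: nn.Conv2d, bn: nn.BatchNorm2d):
    """Copy one conv + batch-norm block from a flat float32 array.

    Darknet stores parameters as: bn bias, bn weight, bn running mean,
    bn running var, then the conv kernel weights.
    """
    for param in (bn.bias, bn.weight, bn.running_mean, bn.running_var):
        n = param.numel()
        param.data.copy_(torch.from_numpy(weights[ptr:ptr + n]).view_as(param))
        ptr += n
    n = conv.weight.numel()
    conv.weight.data.copy_(torch.from_numpy(weights[ptr:ptr + n]).view_as(conv.weight))
    return ptr + n

with open('checkpoint/darknet/darknet53.conv.74.weights', 'rb') as f:
    header = np.fromfile(f, dtype=np.int32, count=5)  # version info and images seen
    weights = np.fromfile(f, dtype=np.float32)        # all remaining parameters

# Walking the backbone's conv/bn pairs in network order (pseudo-usage):
# ptr = 0
# for conv, bn in backbone_conv_bn_pairs:
#     ptr = load_conv_bn(weights, ptr, conv, bn)
```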
- Implement your own dataset loading function in `dataset.py`. Keep the interface similar to the existing loaders in `dataset.py`.
- Add your dataset to the `prepare_dataset` function in `dataset.py`.
- Details can be viewed in `dataset.py`. This part requires some coding and needs to be improved later; a rough sketch of a possible loader interface follows this list.
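As an illustration of the kind of interface a detection data loader usually exposes, below is a minimal, hypothetical sketch of a custom dataset class. The class name, annotation layout, and target format are assumptions for illustration only; check the existing loaders in `dataset.py` for the interface this repository actually expects.

```python
import torch
from PIL import Image
from torch.utils.data import Dataset

class MyDetectionDataset(Dataset):
    """Hypothetical custom detection dataset.

    Each item returns an image and a target tensor of shape (N, 5), where
    each row is (class_id, x_center, y_center, width, height) with
    coordinates normalized to [0, 1]. The exact format expected by
    dataset.py may differ.
    """

    def __init__(self, image_paths, annotations, transform=None):
        self.image_paths = image_paths    # list of image file paths
        self.annotations = annotations    # list of per-image (N, 5) label lists
        self.transform = transform

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        img = Image.open(self.image_paths[idx]).convert('RGB')
        target = torch.tensor(self.annotations[idx], dtype=torch.float32)
        if self.transform is not None:
            img = self.transform(img)
        return img, target
```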
The logging directory is printed when you run the training script. You can visualize the training process by running:

```bash
$ tensorboard --logdir path-to-your-logs
```
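Projects from this PyTorch 0.4 era typically write these logs with `tensorboardX` (or a similar wrapper); the snippet below is only an assumed illustration of how training scalars such as the loss might be logged, not this repository's logging code.

```python
from tensorboardX import SummaryWriter  # pip install tensorboardX

writer = SummaryWriter(log_dir='logs/coco')  # hypothetical log directory

# Inside a training loop, log the loss against the global step.
for step, loss_value in enumerate([2.3, 1.9, 1.5]):  # dummy values
    writer.add_scalar('train/loss', loss_value, global_step=step)

writer.close()
```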
- Download the COCO detection dataset and annotations, and provide the full paths to your downloaded data in `config.py`, for example:

  ```python
  'coco': {
      'val_imgs': '/home/data/coco/2017/val2017',
      'val_anno': '/home/data/coco/2017/annotations/instances_val2017.json'
  }
  ```
- Download the official pre-trained YOLO v3 weights here and move the file to `checkpoint/darknet/yolov3-coco.weights`.
- Transform the weights into a PyTorch-readable checkpoint file `checkpoint/coco/-1.-1.ckpt` by running:

  ```bash
  $ python transfer.py --dataset=coco --weights=yolov3-coco.weights
  ```
- Evaluate on the validation set specified in `config.py` and compute the mAP by running the command below. Some validation detection examples will be saved to `assets/results`.

  ```bash
  $ python evaluate.py
  ```
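For reference, COCO-style mAP is normally computed through the COCO API listed in the requirements. The sketch below shows the standard `COCOeval` workflow, assuming the detections have already been written to a results JSON file; the file names are placeholders, and this is not necessarily how `evaluate.py` is implemented.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth annotations plus a detections file in COCO results format:
# [{"image_id": ..., "category_id": ..., "bbox": [x, y, w, h], "score": ...}, ...]
coco_gt = COCO('/home/data/coco/2017/annotations/instances_val2017.json')
coco_dt = coco_gt.loadRes('detections.json')  # placeholder file name

coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP/AR metrics, including mAP@[.5:.95] and mAP@.5
```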
- Download the official pre-trained YOLO v3 weights here and move the file to `checkpoint/darknet/yolov3-coco.weights`.
- Transform the weights into a PyTorch-readable checkpoint file `checkpoint/coco/-1.-1.ckpt` by running:

  ```bash
  $ python transfer.py --dataset=coco --weights=yolov3-coco.weights
  ```
- Specify the images folder in `config.py`:

  ```python
  demo = {
      'images_dir': opj(ROOT, 'assets/imgs'),
      'result_dir': opj(ROOT, 'assets/dets')
  }
  ```
- Detect your own images by running:

  ```bash
  $ python demo.py
  ```
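To give a sense of what producing the demo output involves, here is a minimal, hypothetical sketch of drawing detection boxes onto an image with Pillow. The detection tuple format and the function name are assumptions for illustration and do not reflect `demo.py` itself.

```python
from PIL import Image, ImageDraw

def draw_detections(image_path, detections, save_path):
    """Draw (x1, y1, x2, y2, label, confidence) boxes on an image.

    The detection format here is hypothetical; adapt it to whatever
    the detector in this repository actually returns.
    """
    img = Image.open(image_path).convert('RGB')
    draw = ImageDraw.Draw(img)
    for x1, y1, x2, y2, label, conf in detections:
        draw.rectangle([x1, y1, x2, y2], outline=(255, 0, 0))
        draw.text((x1, max(0, y1 - 10)), '%s %.2f' % (label, conf), fill=(255, 0, 0))
    img.save(save_path)

# Example usage with made-up coordinates:
# draw_detections('assets/imgs/dog.jpg',
#                 [(50, 60, 200, 300, 'dog', 0.92)],
#                 'assets/dets/dog.jpg')
```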
The mAP computation does not seem to be very accurate.
| Test datasets | Training datasets | Resolution | Notes | mAP | FPS |
|---|---|---|---|---|---|
| COCO 2017 | | 416 | official pre-trained YOLO v3 weights | 63.4 | |
| COCO 2017 | | 608 | paper results | 57.9 | |
- Evaluation
  - Draw right bounding box
  - mAP re-implemented
    - VOC mAP implemented
    - COCO mAP implemented
- Training
  - Loss function implementation
  - Visualize training process
  - Use pre-trained Darknet model to train on custom datasets
  - Validation
  - Train COCO from scratch
  - Train custom datasets from scratch
  - Learning rate scheduler
  - Data augmentation
- General
  - Generalize annotation format to VOC for every dataset
  - Multi-GPU support
  - Memory usage improvements
- Series: YOLO object detector in PyTorch: a very nice tutorial on YOLO v3
- ayooshkathuria/pytorch-yolo-v3: PyTorch implementation of YOLO v3, evaluation part only
- ultralytics/yolov3: PyTorch implementation of YOLO v3, with both training and evaluation parts
- utkuozbulak/pytorch-custom-dataset-examples: examples of PyTorch custom datasets