
Update readme for NTU
Signed-off-by: Diogo Luvizon <[email protected]>
dluvizon committed Dec 4, 2018
1 parent cb55c3b commit c539c56
Showing 2 changed files with 29 additions and 0 deletions.
19 changes: 19 additions & 0 deletions INSTALL.md
@@ -21,9 +21,13 @@ Install required python packages before you continue:
We do not provide public datasets within this software. We only provide
converted annotation files and some useful scripts for practical purposes.

### MPII

Images from MPII should be manually downloaded and placed
at `datasets/MPII/images`.
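As a quick sanity check before training (not part of the repository; the path is taken from the instruction above), one might verify the layout with a short stdlib snippet:

```python
from pathlib import Path

def count_mpii_images(root="datasets/MPII/images"):
    """Count JPEG files in the MPII image directory; raise if the folder is missing."""
    p = Path(root)
    if not p.is_dir():
        raise FileNotFoundError(f"expected MPII images at {p}")
    return sum(1 for _ in p.glob("*.jpg"))
```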

### Human3.6M

Videos from Human3.6M should be manually downloaded and placed
in `datasets/Human3.6M/S*`, e.g. S1, S2, S3, etc. for each subject.
After that, extract videos with:
@@ -33,7 +37,22 @@ After that, extract videos with:
```
Python 2 is used here due to the dependency on the cv2 package.
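The exact on-disk layout of the extracted frames depends on the extraction script; purely as an illustration, a hypothetical helper building zero-padded frame paths per subject could look like this (names and numbering are assumptions, not the script's actual output):

```python
def frame_path(subject, video, idx):
    """Build a hypothetical path for an extracted Human3.6M frame.
    The directory names and zero-padding are illustrative only."""
    return f"datasets/Human3.6M/S{subject}/{video}/frame_{idx:05d}.jpg"
```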

### PennAction

Video frames from PennAction should be manually downloaded and extracted
in `datasets/PennAction/frames`. The pose annotations and predicted bounding
boxes will be automatically downloaded by this software.
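The automatic download amounts to a fetch-if-missing step; a minimal stdlib sketch of that idea (the URL and destination below are placeholders, not the ones used by this software):

```python
import urllib.request
from pathlib import Path

def fetch_if_missing(url, dest):
    """Download url to dest unless the file already exists (placeholder logic)."""
    dest = Path(dest)
    if not dest.exists():
        dest.parent.mkdir(parents=True, exist_ok=True)
        urllib.request.urlretrieve(url, dest)
    return dest
```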

### NTU

Video frames from NTU should also be manually extracted.
A Python [script](datasets/NTU/extract-resize-videos.py) is provided to help with
this task. Python 2 is required.
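Since the script resizes frames while extracting them, the aspect-preserving size computation can be sketched as follows (the 480-pixel target height is an assumption, not necessarily the script's actual setting):

```python
def resized_dims(width, height, target_height=480):
    """Compute a (width, height) pair that preserves the aspect ratio for a
    fixed target height. The default of 480 px is an assumption."""
    scale = target_height / height
    return round(width * scale), target_height
```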

An additional pose annotation is provided for NTU, which is used to train the pose
estimation part of the model on this dataset. It differs from the original Kinect
poses: it is a composition of 2D coordinates in the RGB frames plus depth.
This additional annotation can be downloaded
[here](https://drive.google.com/open?id=1eTJPb8q2XCRK8NEC4h17p17JW2DDNwjG)
(2GB from Google Drive).
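Conceptually, each annotated joint pairs its 2D image coordinates with a depth value; a schematic composition (the joint layout and units are assumptions, not the format of the downloadable file) is:

```python
def compose_pose(uv_coords, depths):
    """Pair each joint's (u, v) image coordinates with its depth d, yielding
    (u, v, d) triplets. Schematic only, not the annotation file format."""
    if len(uv_coords) != len(depths):
        raise ValueError("one depth value per joint is required")
    return [(u, v, d) for (u, v), d in zip(uv_coords, depths)]
```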

10 changes: 10 additions & 0 deletions README.md
@@ -59,6 +59,16 @@ To reproduce our scores, do:
python3 exp/pennaction/eval_penn_ar_pe_merge.py output/eval-penn
```

### 3D action recognition on NTU

For 3D action recognition, the pose estimation model was trained on mixed
data from MPII, Human3.6M, and NTU, and the full action recognition model was
trained and fine-tuned on NTU only.
To reproduce our scores, do:
```
python3 exp/ntu/eval_ntu_ar_pe_merge.py
```
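The "merge" in the script name suggests combining outputs from the pose and action branches; one common combination rule (a guess at the general idea, not the repository's actual implementation) is a weighted average of per-class scores:

```python
def merge_scores(scores_a, scores_b, alpha=0.5):
    """Weighted average of two per-class score lists (illustrative merge rule,
    not the one implemented in eval_ntu_ar_pe_merge.py)."""
    return [alpha * a + (1.0 - alpha) * b for a, b in zip(scores_a, scores_b)]
```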


## Citing

