We assume the root path is `$SOTS`, e.g. `/home/zpzhang/SOTS`.

```bash
cd $SOTS/lib/tutorial
bash install.sh $conda_path SOTS
cd $SOTS
conda activate SOTS
python setup.py develop
```

`$conda_path` denotes your Anaconda path, e.g. `/home/zpzhang/anaconda3`.
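Before running the block above, it can be convenient to define the two placeholder paths as shell variables. This is only a convenience sketch; the paths shown are the examples used in this README and should be adjusted to your machine.

```bash
# Convenience sketch: define the placeholder paths used throughout this README.
# Replace both values with the paths on your own machine.
export SOTS=/home/zpzhang/SOTS
export conda_path=/home/zpzhang/anaconda3
```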
- [Optional] Install TensorRT according to the tutorial. Note: we perform TensorRT evaluation on an RTX 2080 Ti with CUDA 10.0. If the installation fails, please use the PyTorch version instead.
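If you do install TensorRT, a quick way to confirm that the Python bindings are visible from the `SOTS` environment is to import them. This is a minimal sanity check, not part of the official tutorial.

```bash
# Optional sanity check: confirm the TensorRT Python bindings import cleanly.
python -c "import tensorrt; print(tensorrt.__version__)"
```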
- Download the pretrained PyTorch model and TensorRT model to `$SOTS/snapshot`.
- Download the json files of the testing data and put them in `$SOTS/dataset`.
- Download the testing data, e.g. VOT2019, and put it in `$SOTS/dataset`. Please download each dataset from its official website; the directories should be named like `VOT2019`, `OTB2015`, `GOT10K`, `LASOT` (see the layout sketch below).
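The layout below is only an illustrative sketch of how the downloaded files are expected to sit under `$SOTS`. The checkpoint name `OceanV.pth` and the dataset directory names come from the commands and list above; the json filenames are an assumption.

```
$SOTS/
├── snapshot/
│   └── OceanV.pth          # pretrained PyTorch / TensorRT models
└── dataset/
    ├── VOT2019.json        # assumed json filename, one per benchmark
    ├── VOT2019/            # testing data from the official websites
    ├── OTB2015/
    ├── GOT10K/
    └── LASOT/
```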
In the root path `$SOTS`, run

```bash
python tracking/test_ocean.py --arch Ocean --resume snapshot/OceanV.pth --dataset VOT2019
python lib/eval_toolkit/bin/eval.py --dataset_dir dataset --dataset VOT2019 --tracker_result_dir result/VOT2019 --trackers Ocean
```
You may test other datasets with our code. Please match the provided pre-trained model (`--resume`) to the dataset (`--dataset`); see `ocean_model.txt` for their correspondence.
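As a sketch of the same pattern for another benchmark, the command below uses a placeholder checkpoint name for OTB2015; take the actual filename from `ocean_model.txt`.

```bash
# Placeholder checkpoint name -- look up the model that corresponds to OTB2015 in ocean_model.txt.
python tracking/test_ocean.py --arch Ocean --resume snapshot/<otb_model>.pth --dataset OTB2015
```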
Testing video: `twinnings` in OTB2015 (472 frames). Testing GPU: RTX 2080 Ti.

- TensorRT (149 fps)

  ```bash
  python tracking/test_ocean.py --arch OceanTRT --resume snapshot/OceanV.pth --dataset OTB2015 --video twinnings
  ```

- PyTorch (68 fps)

  ```bash
  python tracking/test_ocean.py --arch Ocean --resume snapshot/OceanV.pth --dataset OTB2015 --video twinnings
  ```
Note:
- The TensorRT version of Ocean only supports an input size of 255.
- Current TensorRT does not support some operations well. We will keep updating our code as official TensorRT releases evolve. If you want to test on the benchmarks, please use the PyTorch version.
- If you want to use our code in a real product, our TensorRT code may help you.
☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️☁️
- Please download the training data from GoogleDrive or BaiduDrive (code: urxq), and then put it in `$SOTS/data`.
- You could also refer to the scripts in `$SOTS/lib/dataset/crop` to process your custom data.
- For the split files in BaiduDrive, please use `cat got10k.tar.* | tar -zxv` to merge and unzip them (see the sketch below).
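A minimal usage sketch of the merge command, assuming the split archive parts were downloaded into `$SOTS/data` (the destination directory named above):

```bash
# Merge the split GOT10K archive parts and extract them in place.
cd $SOTS/data
cat got10k.tar.* | tar -zxv
ls   # the extracted training data should now appear under $SOTS/data
```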
Please download the pretrained model on ImageNet here, and then put it in `$SOTS/pretrain`.
Please modify the training settings in `$SOTS/experiments/train/Ocean.yaml`. The default number of GPUs and batch size in the paper are 8 and 32, respectively.
In the root path `$SOTS`, run

```bash
python tracking/onekey.py
```

This script integrates training, epoch testing, and tuning. If you are not yet familiar with our whole framework, it is suggested to run these stages one by one (enable or disable each stage via the `ISTRUE` key in `$SOTS/experiments/train/Ocean.yaml`). Once you know the framework well, simply run this one-key script.
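As an illustration of the stage switches, here is a hedged sketch of the kind of `ISTRUE` toggles mentioned above. Only the key name `ISTRUE` comes from this README; the surrounding section names are assumptions, so follow the actual layout of `$SOTS/experiments/train/Ocean.yaml`.

```yaml
# Illustrative sketch only -- the section names below are assumptions;
# the real structure is defined in $SOTS/experiments/train/Ocean.yaml.
TRAIN:
  ISTRUE: True     # run the training stage
TEST:
  ISTRUE: False    # skip epoch testing for now
TUNE:
  ISTRUE: False    # skip hyper-parameter tuning for now
```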