Run the script
000_install_windows.cmd
which creates a virtual environment under ./venv
and installs the requirements from requirements.txt.
If you are not a Windows user, you have to create the environment on your own and install e.g. onnxruntime-gpu
instead of onnxruntime-directml.
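On Linux or macOS, the manual setup could look like the following sketch (the onnxruntime-gpu choice assumes an NVIDIA GPU; use plain onnxruntime for CPU-only inference):

```shell
# Create and activate the environment (equivalent of 000_install_windows.cmd).
python3 -m venv venv
source venv/bin/activate
# requirements.txt targets Windows (onnxruntime-directml); skip that line
# and install a runtime that matches your hardware -- an assumption, adjust as needed.
grep -v onnxruntime-directml requirements.txt | pip install -r /dev/stdin
pip install onnxruntime-gpu
```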
Run the script
001_download_testdata.cmd
which downloads the dataset from Kaggle and stores it as ./images/input_raw/blur-dataset.zip.
You can also download it yourself and store it there.
Before running the scripts, you have to activate the environment by calling .\venv\scripts\activate
on the command line.
Run
python 010_prepare_testdata.py
which unzips the dataset to ./images/input_raw/
and creates three directories with image files from the dataset:
./images/input/0 with blurry images (quality "0")
./images/input/100 with sharp images (quality "100")
./images/test with test images (both sharp and blurry)
If you want to train on your own data, you only need to copy your images into these directories. You can also use a finer classification by creating additional directories (e.g. ./images/input/50)
with images that are only a little bit blurry.
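The expected layout (including an optional intermediate class such as 50) can be created with a small helper; this is a sketch, and any labels beyond "0" and "100" are your own choice:

```python
import os

def make_layout(root, labels=("0", "100")):
    """Create one subdirectory of <root>/input per quality label,
    plus <root>/test for the mixed test images."""
    for label in labels:
        os.makedirs(os.path.join(root, "input", label), exist_ok=True)
    os.makedirs(os.path.join(root, "test"), exist_ok=True)
```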
Run
python 015_squeeze.py
which squeezes the images to a size of 480x270 px. The squeeze function copies part of the image at (almost) the original resolution and shrinks the other parts by resizing them. The squeezed images are stored under ./images/prepared.
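One way such a squeeze could work is sketched below using Pillow: the central strip keeps its original horizontal resolution while the outer strips are shrunk by plain resizing. This is a hypothetical reconstruction; 015_squeeze.py may split the image differently.

```python
from PIL import Image

def squeeze(img, size=(480, 270), keep_frac=0.5):
    """Squeeze `img` to `size`, keeping a central strip at (almost)
    the original horizontal resolution (assumed behaviour)."""
    tw, th = size
    keep_w = int(tw * keep_frac)   # width of the detail strip in the output
    side_w = (tw - keep_w) // 2    # width of the left squeezed strip
    w, h = img.size
    cx = w // 2
    # Central crop: keep_w source pixels map 1:1 onto keep_w output pixels.
    left = img.crop((0, 0, cx - keep_w // 2, h)).resize((side_w, th))
    center = img.crop((cx - keep_w // 2, 0, cx - keep_w // 2 + keep_w, h)).resize((keep_w, th))
    right = img.crop((cx - keep_w // 2 + keep_w, 0, w, h)).resize((tw - side_w - keep_w, th))
    out = Image.new(img.mode, (tw, th))
    out.paste(left, (0, 0))
    out.paste(center, (side_w, 0))
    out.paste(right, (side_w + keep_w, 0))
    return out
```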
Run
python 020_train.py
to train the classifier. You can stop the training when the loss no longer improves. The best model is saved in ./model.
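The "best model is saved" behaviour amounts to checkpointing only when the loss improves. A generic sketch of that logic (the exact criterion used by 020_train.py is an assumption):

```python
class BestCheckpoint:
    """Track the lowest loss seen so far and signal when to save."""

    def __init__(self):
        self.best = float("inf")

    def update(self, loss):
        # Save only on improvement; otherwise keep the previous checkpoint.
        if loss < self.best:
            self.best = loss
            return True   # caller should write the model to ./model now
        return False
```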
Run
python 030_predict_torch.py
which runs the model on the test dataset stored in ./images/test
and saves the images to ./images/predicted_torch.
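Per image, the prediction step boils down to a forward pass and an argmax over the quality classes. A hedged sketch (the model loading and label set in 030_predict_torch.py are assumptions):

```python
import torch

def classify(model, x, labels=("0", "100")):
    """Return the predicted quality label for a preprocessed NCHW tensor."""
    model.eval()
    with torch.no_grad():          # inference only, no gradients needed
        logits = model(x)
    return labels[int(logits.argmax(dim=1))]
```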
Run
python 025_export_onnx.py
to export the model to ONNX. The exported model is saved in ./model.
Run
python 031_predict_onnx.py
which runs the model on the test dataset stored in ./images/test
and saves the images to ./images/predicted_onnx.