Previously, our preprocessing script saved all training and validation images as JPGs with a high quality factor of Q=95, downscaled by a factor of 0.75. It turns out that the resulting images have a distinctive enough distribution that the neural network picks up on it, and they are also easier to compress for the non-learned codecs.
For correctness, we have thus re-created the training and validation sets. The new preprocessing script is available in the repo. The important differences are:
- All images are saved as PNGs.
- We do not rescale the validation sets in any way; instead, we divide the images into crops such that everything fits into memory. Note that this is a bias against our method, since more context can only help. We only crop images too big to fit into our GPU (a TITAN X Pascal). Please see the updated README.
- For the training set, we use a random downscaling factor instead of a fixed 0.75x: this provides a wider variety of downscaling artefacts.
- Additionally, we use the Lanczos filter, as we found that bicubic filtering also introduces specific artefacts (see the sketch after this list).
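For concreteness, here is a minimal sketch of the downscaling and cropping steps using Pillow. The downscaling-factor range, the maximum crop side, and the function names are illustrative assumptions, not what the repo necessarily uses; the preprocessing script in the repo is authoritative.

```python
import random
from pathlib import Path
from PIL import Image

def preprocess_train_image(in_path, out_dir, min_f=0.5, max_f=0.95):
    """Downscale a training image by a random factor with the Lanczos
    filter and save it losslessly as PNG. The factor range here is an
    assumption; see the preprocessing script in the repo."""
    img = Image.open(in_path).convert('RGB')
    f = random.uniform(min_f, max_f)
    w, h = img.size
    img = img.resize((max(1, round(w * f)), max(1, round(h * f))),
                     Image.LANCZOS)
    img.save(Path(out_dir) / (Path(in_path).stem + '.png'))

def crop_to_fit(img, max_side=2048):
    """Split a validation image into a grid of crops so that each crop
    fits into GPU memory. max_side is an assumed memory budget."""
    w, h = img.size
    nx = -(-w // max_side)   # ceil(w / max_side)
    ny = -(-h // max_side)
    return [img.crop((ix * w // nx, iy * h // ny,
                      (ix + 1) * w // nx, (iy + 1) * h // ny))
            for iy in range(ny) for ix in range(nx)]
```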
This causes all results to shift. However, as before, we still outperform WebP, JPEG-2000, and PNG, i.e., the ordering of the methods according to bpp remains unchanged.
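For reference, bpp is the usual bits-per-pixel measure: the compressed size in bits divided by the number of pixels of the original image. A minimal sketch:

```python
import os
from PIL import Image

def bits_per_pixel(compressed_path, image_path):
    """Compressed file size in bits divided by the pixel count."""
    w, h = Image.open(image_path).size
    return 8 * os.path.getsize(compressed_path) / (w * h)
```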
We evaluated our model on 500 images randomly selected from the Open Images validation set, and preprocessed like the training data. To compare, please download the Open Images evaluation set here.
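If you want to re-draw such a subset yourself rather than use the linked set, a deterministic sample could look like the sketch below. The seed and glob pattern are assumptions, so the exact images will differ; for exact comparisons, use the evaluation set linked above.

```python
import random
from pathlib import Path

def sample_eval_set(val_dir, n=500, seed=0):
    """Deterministically pick n images from a validation directory.
    Sorting before sampling makes the result reproducible across runs."""
    paths = sorted(Path(val_dir).glob('*.png'))
    rng = random.Random(seed)
    return rng.sample(paths, n)
```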
Updated ArXiv
Available at https://arxiv.org/abs/1811.12817v3.
New Results
Status
Merged into master.