Group project component for Applied Computer Vision, completed at Columbia University in Spring 2023.
Work done by Jit Soon Foo, Jiang Guan, Xinwen Liu, Jiang Xian
This project applies Super Resolution to three different datasets: the Aerial Image Dataset (AID), a fluorescence microscopy dataset, and the CelebA dataset.
This repository covers the application of Enhanced Deep Residual Networks (EDSR) for Super Resolution on the Aerial Image Dataset (AID).
- Python, Google Colab
- Linking GCP, Google Colab, and Google Drive: https://medium.com/@uditsaini/access-google-drive-and-mount-google-drive-to-colab-notebook-google-ccbca1691d31
- MATLAB R2022b: used for evaluating the models and the handcrafted control test.
- Codes: ./Codebase/AID
- Data: Download instructions at https://captain-whu.github.io/AID/. Place the files in ./Data/AID/AIDoriginal.
We studied several degradations of the images using bicubic interpolation (1x, 2x, 4x, 8x) on a subset of the AID dataset; a downscaling sketch follows the class list below.
Classes chosen:
(Train) ['MediumResidential','Park','Parking','School','Square'],
(Test) ['DenseResidential','SparseResidential']
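As an illustration of the degradation step (the actual implementation lives in the preprocessing notebooks below), a minimal Pillow sketch might look like the following; the source folder, output layout, and scale factors are assumptions, not the repo's exact paths.

```python
from pathlib import Path
from PIL import Image

# Illustrative paths; the real layout is handled in AID-DataPreprocessing.ipynb.
SRC_DIR = Path("./Data/AID/AIDoriginal/Park")
DST_ROOT = Path("./Data/AID/AIDpng")
for scale in (2, 4, 8):
    (DST_ROOT / f"x{scale}").mkdir(parents=True, exist_ok=True)

for jpg_path in SRC_DIR.glob("*.jpg"):
    img = Image.open(jpg_path).convert("RGB")
    img.save(DST_ROOT / f"{jpg_path.stem}.png")  # 1x (original-resolution) reference as png
    for scale in (2, 4, 8):
        # Bicubic downsampling produces the degraded low-resolution inputs.
        lr = img.resize((img.width // scale, img.height // scale), Image.BICUBIC)
        lr.save(DST_ROOT / f"x{scale}" / f"{jpg_path.stem}.png")
```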
- Preprocessing (AID)
a) ./Codebase/AID/AID-DataPreprocessing.ipynb
Converts jpg to png and constructs lower-resolution images at 2x and 4x.
b) ./Codebase/AID/AID-TrainTestSplit.ipynb
Splits the data into Train, Valid, and Test sets. Note that only a subset of the data is selected.
c) ./Codebase/AID/AID-checkfiles.ipynb
Checks the related folders to ensure all files are processed correctly (there can be issues with Google Drive due to lag).
d) ./Codebase/AID/SRGAN-TrainTestSplit.ipynb and AID-TrainTestSplitx8.ipynb
Prepares the data for SRGAN based on the AID Train-Test split.
e) ./Codebase/AID/check_mean.m
(MATLAB code) Used to check the mean and standard deviation of the training data.
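For reference, a Python equivalent of the mean/standard-deviation check performed by check_mean.m might look like the sketch below; the training directory is an assumption, and the actual check is done in MATLAB.

```python
from pathlib import Path

import numpy as np
from PIL import Image

# Illustrative location of the high-resolution training images.
train_dir = Path("./Data/AID/train")

pixel_sum = np.zeros(3)
pixel_sq_sum = np.zeros(3)
n_pixels = 0

for png_path in train_dir.glob("*.png"):
    arr = np.asarray(Image.open(png_path).convert("RGB"), dtype=np.float64) / 255.0
    flat = arr.reshape(-1, 3)
    pixel_sum += flat.sum(axis=0)
    pixel_sq_sum += (flat ** 2).sum(axis=0)
    n_pixels += flat.shape[0]

mean = pixel_sum / n_pixels
std = np.sqrt(pixel_sq_sum / n_pixels - mean ** 2)  # Var[X] = E[X^2] - (E[X])^2
print("per-channel mean:", mean)
print("per-channel std:", std)
```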
- Training (EDSR), updated with the latest links (a minimal architecture sketch follows this list)
a) 4x to 1x: ./Codebase/AID/AID-EDSR-04162023-4a.ipynb
b) 8x to 1x: ./Codebase/AID/AID-EDSR-04162023-4c.ipynb
c) 8x to 4x: ./Codebase/AID/AID-EDSR-04162023-4d.ipynb
d) 4x to 2x: ./Codebase/AID/AID-EDSR-04162023-4e.ipynb
e) 2x to 1x: ./Codebase/AID/AID-EDSR-04162023-4f.ipynb
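The notebooks above contain the full training pipelines. As a rough reference only, below is a minimal PyTorch sketch of the EDSR building blocks (residual blocks without batch normalization, a scaled residual branch, and a PixelShuffle upsampler); the layer counts and feature widths are illustrative and not necessarily the values used in the notebooks.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """EDSR-style residual block: two 3x3 convs, no batch norm, scaled residual."""
    def __init__(self, n_feats=64, res_scale=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n_feats, n_feats, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(n_feats, n_feats, 3, padding=1),
        )
        self.res_scale = res_scale

    def forward(self, x):
        return x + self.res_scale * self.body(x)

class EDSR(nn.Module):
    """Minimal EDSR trunk with a PixelShuffle upsampler for a single scale factor."""
    def __init__(self, scale=2, n_feats=64, n_resblocks=16):
        super().__init__()
        self.head = nn.Conv2d(3, n_feats, 3, padding=1)
        self.body = nn.Sequential(
            *[ResBlock(n_feats) for _ in range(n_resblocks)],
            nn.Conv2d(n_feats, n_feats, 3, padding=1),
        )
        self.tail = nn.Sequential(
            nn.Conv2d(n_feats, n_feats * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(n_feats, 3, 3, padding=1),
        )

    def forward(self, x):
        x = self.head(x)
        x = x + self.body(x)  # long skip connection over the residual trunk
        return self.tail(x)
```

The main departure from SRResNet-style generators is the removal of batch normalization in the residual blocks, with a small residual scaling factor used to stabilize training [6].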
- Results generation in multiple stages (a chaining sketch follows this list)
a) 4x -> 2x -> 1x: ./Codebase/AID/(Eval4b)AID-EDSR-04162023-4.ipynb
b) 8x -> 4x -> 2x -> 1x: ./Codebase/AID/(Eval4)AID-EDSR-04162023-4.ipynb
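Conceptually, the multi-stage results chain the per-stage models: the output of the 8x-to-4x model feeds the 4x-to-2x model, and so on down to 1x. A hedged sketch is below; the checkpoint filenames are hypothetical, and the EDSR class is the one from the earlier sketch.

```python
import torch

# Hypothetical checkpoint names; the real weights are produced by the training notebooks above.
STAGE_CHECKPOINTS = ["edsr_8x_to_4x.pth", "edsr_4x_to_2x.pth", "edsr_2x_to_1x.pth"]

def super_resolve_multistage(lr_image, stage_paths=STAGE_CHECKPOINTS, device="cpu"):
    """Apply each 2x stage in sequence (8x -> 4x -> 2x -> 1x).

    lr_image: float tensor of shape (1, 3, H, W) with values in [0, 1].
    """
    x = lr_image.to(device)
    for path in stage_paths:
        model = EDSR(scale=2).to(device)  # EDSR class from the sketch above
        model.load_state_dict(torch.load(path, map_location=device))
        model.eval()
        with torch.no_grad():
            x = model(x).clamp(0.0, 1.0)
    return x
```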
- Evaluation
a) ./Codebase/AID/(Quantitative4b)AID_EDSR_04162023_4.ipynb
Runs through each image set (super-resolved, low-resolution interpolated, high-resolution) and compares them to extract PSNR, LPIPS, and SSIM; a sketch of the metric computation appears after this list.
b) ./Codebase/AID/QuantitativeResults_Consolidated.ipynb
Computes the mean and standard deviation of all extracted metrics.
c) ./Codebase/AID/compare_results.m
Visually compares the results in MATLAB via a GUI.
d) Segment Anything Model (SAM) demo at https://segment-anything.com/
This produced the result shown below.
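For reference, the per-image metric extraction described in item a) could be sketched in Python as below; the exact implementation in the notebook may differ, and the AlexNet LPIPS backbone is an assumption.

```python
import numpy as np
import torch
import lpips                                   # pip install lpips
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="alex")             # backbone choice is an assumption

def to_lpips_tensor(arr):
    # HWC uint8 -> NCHW float in [-1, 1], the range expected by the lpips package.
    t = torch.from_numpy(arr).float().permute(2, 0, 1).unsqueeze(0)
    return t / 127.5 - 1.0

def evaluate_pair(sr_path, hr_path):
    """Compare one super-resolved image against its high-resolution reference."""
    sr = np.array(Image.open(sr_path).convert("RGB"))
    hr = np.array(Image.open(hr_path).convert("RGB"))
    psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
    ssim = structural_similarity(hr, sr, channel_axis=-1, data_range=255)
    with torch.no_grad():
        lp = lpips_fn(to_lpips_tensor(sr), to_lpips_tensor(hr)).item()
    return psnr, ssim, lp
```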
- Codes: ./Codebase/microscopy_code
- Data: Download the data from https://doi.org/10.1038/s41592-018-0239-0
Preprocessing, EDSR/SRGAN training, and evaluation are all in ./Codebase/microscopy_code.
SR3 training and evaluation are in ./Codebase/diffusion/diffusion_fluorescence_imaging.ipynb.
- Codes: ./Codebase/sragan/srgan_face_data.ipynb and ./Codebase/diffusion/diffusion_celebA.ipynb
- Data: Download the data from https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
Preprocessing, SRGAN/SR3 training, and evaluation are all in ./Codebase/sragan/srgan_face_data.ipynb and ./Codebase/diffusion/diffusion_celebA.ipynb.
Sample results for SRGAN:
Sample results for SR3:
Thanks for your interest in this topic!
[1] C. Wang, Awesome-Super-Resolution, github link https://github.com/ChaofWang/Awesome-Super-Resolution, updated 22 Apr 2023
[2] Z. Wang, J. Chen, S.C.H. Hoi, Deep Learning for Image Super-resolution: A Survey, arXiv:1902.06068v2, 8 Feb 2020.
[3] K. Karwowska and D. Wierzbicki, Using Super-Resolution Algorithms for Small Satellite Imagery: A Systematic Review, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 15, 2022
[4] C. Wang, Awesome-Super-Resolution datasets, github link https://github.com/ChaofWang/Awesome-Super-Resolution/blob/master/dataset.md, updated 12 Feb 2022.
[5] G-S. Xia, J. Hu, F. Hu, B. Shi, X. Bai, Y. Zhong, L. Zhang, AID: A Benchmark Dataset for Performance Evaluation of Aerial Scene Classification, Aug 2016
[6] B. Lim, S. Son, H. Kim, S. Nah, K.M. Lee, Enhanced Deep Residual Networks for Single Image Super-Resolution, CVPR 2017 Workshops, arXiv:1707.02921, July 2017.
[7] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson et al., Segment Anything, Meta AI Research, FAIR, April 2023.
[8] Wang, H. et al. Deep learning enables cross-modality super-resolution in fluorescence microscopy. Nat. Methods 16, 103–110 (2019).
[9] Zhang, R., Isola, P., Efros, A. A., Shechtman, E. & Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2018).
[10] Liu, Z., Luo, P., Wang, X., & Tang, X. (2015, December). Deep Learning Face Attributes in the Wild. Proceedings of International Conference on Computer Vision (ICCV).
[11] Saharia, C., Ho, J., Chan, W., Salimans, T., Fleet, D. J., & Norouzi, M. (2021). Image Super-Resolution via Iterative Refinement. arXiv:2104.07636.
[12] Ho, J., Jain, A., & Abbeel, P. (2020). Denoising Diffusion Probabilistic Models. arXiv:2006.11239v2, Dec 2020.
[13] C. Ledig, L. Theis, F. Huszar, et al (2016). Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. arXiv:1609.04802, Sep 2016.
[14] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio (2014). Generative Adversarial Networks. arXiv:1406.2661, 2014.