This is the repository for the paper "Learning Kernel-Modulated Neural Representation for Efficient Light Field Compression" (arXiv:2307.06143),
by Jinglei Shi, Yihong Xu and Christine Guillemot.
The code was tested with the following package versions:

```
python=3.8.18
pytorch=1.11.0
torchvision=0.12.0
pytorch-lightning=2.0.9
numpy=1.24.3
scipy=1.10.1
omegaconf=2.3.0
hydra-core=1.3.2
matplotlib=3.7.2
```
The code for training, extraction of network weights, quantization and Huffman coding will be released upon paper acceptance.
We provide pretrained models for four synthetic HCI scenes (hci.zip), four real-world scenes captured with a Lytro camera (lytro.zip), and three challenging scenes (challenging.zip). Users can download them via Baidu Pan or Google Drive, then create a new folder named 'results' and put the unzipped files into it.
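A minimal shell sketch of this setup step. The archive names come from above; the assumption that each archive unzips directly into per-scene checkpoint folders is ours, so adjust the destination if the archives contain a top-level directory:

```shell
# Create the 'results' folder expected by the inference script
mkdir -p results

# Unzip each downloaded archive into it (archives not yet downloaded are skipped)
for f in hci.zip lytro.zip challenging.zip; do
    if [ -f "$f" ]; then
        unzip -o "$f" -d results
    fi
done
```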
We offer seven models for each scene, each corresponding to a different network architecture (e.g. 'checkpoints_c50_a1').
To launch the generation of the light fields (decoding), users should first configure the file 'batch_infer.py' as follows:
- Lines 41-43: Specify the target light fields to be generated along with their spatial resolution. For example: scene_list = [['boxes', 512, 512]].
- Lines 45-47: Define the target network architecture. For example, architecture_list = [[[50]*5, [1]*5]] corresponds to the checkpoint 'checkpoints_c50_a1'.
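The two settings above can be sketched together as they would appear in 'batch_infer.py'. The variable names and values come from the examples; the comments on what each field means are our reading of them:

```python
# Target light fields to decode: [scene name, spatial height, spatial width]
scene_list = [['boxes', 512, 512]]

# Target network architectures; [[50]*5, [1]*5] selects the model saved
# under 'checkpoints_c50_a1' (50 channels and an 'a' value of 1, per layer)
architecture_list = [[[50] * 5, [1] * 5]]
```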
After configuring the above settings, users can launch decoding by executing:

```
python batch_infer.py
```
The generated light fields will be accessible in the folder 'results/lf_name/test_cxx_axx', where 'lf_name' is the scene name and 'cxx_axx' matches the chosen architecture.
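A hypothetical helper (not part of the repository) illustrating the naming convention suggested by 'checkpoints_c50_a1' and 'test_cxx_axx':

```python
def output_dir(lf_name: str, channels: int, a_param: int) -> str:
    """Build the expected output path for a decoded light field.

    The 'cxx_axx' suffix mirrors the checkpoint naming, e.g. a model
    with 50 channels and a=1 writes to 'results/<scene>/test_c50_a1'.
    """
    return f"results/{lf_name}/test_c{channels}_a{a_param}"
```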
Two other projects related to this work will be released soon. Feel free to use and cite them!
Please consider citing our work if you find it useful:

```
@article{shi2023learning,
  title={Learning Kernel-Modulated Neural Representation for Efficient Light Field Compression},
  author={Jinglei Shi and Yihong Xu and Christine Guillemot},
  journal={arXiv preprint arXiv:2307.06143},
  year={2023}
}
```
For any further questions, please contact the author at [email protected].