Hello @fab-jul, first of all, thank you very much for your great work!

May I kindly ask you to release the raw rate-distortion data needed to reproduce Fig. 4, A10, and A11 (HiFiC, Baseline (no GAN), M&S Hyperprior, BPG)? This would be very helpful for enabling comparisons.

Concerning model evaluation, I have another question: how did you identify the optimal checkpoint? With GANs, the final checkpoint is not necessarily the optimal one, I suppose. What is your experience here? It would be quite interesting if you could share some details about the checkpoint and evaluation policy used in your work.

Kind regards,
Nikolai
Hi Nikolai,
I can dig up all the raw numbers; how urgent is it? Are you submitting to NeurIPS?
re. eval: we actually just pick the last checkpoint. We took some care to make training semi-stable (see Table 1 in the paper, where we explored reducing across-run variation). The main problem we had was that FID is not always a reliable predictor of what a good model is, so it's hard to identify the optimal checkpoint, I would say.
Fabian
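
For anyone who does want to compare checkpoints rather than simply take the last one, a minimal sketch of such a sweep might look like the following. This is not the authors' code: `load_model`, `compute_fid`, and `sweep_checkpoints` are hypothetical placeholders, and Fabian's caveat above applies, since FID alone may not identify the best model.

```python
# A minimal sketch (not the authors' code) of sweeping saved checkpoints
# and scoring each by FID. `load_model` and `compute_fid` are hypothetical
# placeholders for your own reconstruction and metric code.
from pathlib import Path

def load_model(ckpt_path):
    """Hypothetical: restore a trained model from a checkpoint file."""
    raise NotImplementedError

def compute_fid(model, val_images):
    """Hypothetical: reconstruct val_images with `model` and return the FID."""
    raise NotImplementedError

def sweep_checkpoints(ckpt_dir, val_images):
    """Score every checkpoint in `ckpt_dir` by FID, best (lowest) first."""
    scores = {}
    for ckpt in sorted(Path(ckpt_dir).glob("*.ckpt")):
        model = load_model(ckpt)
        scores[ckpt.name] = compute_fid(model, val_images)
    # Lower FID is better, but per the caveat above, treat this as one
    # signal among several (also inspect reconstructions and bitrates).
    return dict(sorted(scores.items(), key=lambda kv: kv[1]))
```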