Cannot reproduce the result in Table 2. #16
Comments
I observed that the FID of EDM fluctuates a lot across checkpoints. Did you report the lowest FID among checkpoints?
I found the same thing in some papers that try to retrain EDM. I got the lowest FID (2.04053) at around the 160k checkpoint.
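If the run saves snapshots periodically, one way to see how much the FID fluctuates is to evaluate every snapshot and track the minimum. Below is a minimal sketch, assuming snapshots are saved as .pkl files in the run directory and a user-supplied compute_fid callable (hypothetical here) that generates samples from a checkpoint and returns its FID against the reference statistics:

```python
from pathlib import Path
from typing import Callable, Dict, Tuple

def best_fid(ckpt_dir: str, compute_fid: Callable[[Path], float]) -> Tuple[str, float]:
    """Evaluate FID for every snapshot in ckpt_dir and return the best one.

    compute_fid is a user-supplied callable (hypothetical) that samples from
    a checkpoint and returns its FID against the reference statistics.
    """
    results: Dict[str, float] = {}
    for ckpt in sorted(Path(ckpt_dir).glob("*.pkl")):
        results[ckpt.name] = compute_fid(ckpt)
        print(f"{ckpt.name}: FID = {results[ckpt.name]:.5f}")
    best = min(results, key=results.get)
    print(f"Best snapshot: {best} (FID = {results[best]:.5f})")
    return best, results[best]
```

Reporting both the best-snapshot FID and the final-snapshot FID makes it easier to compare against papers that only report one of the two.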
@Zyriix To calculate FID we first have to generate, say, 50000 samples. The random seed can influence the generation of these samples and hence the FID value, I guess.
Most works use the same sampling seeds, just like this repo:
--arch=ncsnpp performs much better than the VP architecture.
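Fixing the sampling seeds means the same set of initial latents is drawn on every FID evaluation, so the only thing that changes between evaluations is the network. Here is a minimal sketch of per-seed latent generation, one latent per image index; this is a simplified stand-in for the per-seed generators used for FID sampling, not the repo's exact code:

```python
import torch

def fixed_latents(seeds, shape=(3, 32, 32), device="cpu"):
    # One torch.Generator per seed, so seed i always yields the same initial
    # noise image regardless of what was sampled before it.
    latents = []
    for seed in seeds:
        g = torch.Generator(device=device).manual_seed(int(seed))
        latents.append(torch.randn(shape, generator=g, device=device))
    return torch.stack(latents)

# Example: the first 64 of the fixed seeds 0..49999 used on every evaluation run.
latents = fixed_latents(range(64))
```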
I ran the following training command on 8 V100 GPUs:

```
torchrun --standalone --nproc_per_node=8 train.py --outdir=training-runs \
    --data=datasets/cifar10-32x32.zip --cond=0 --arch=ddpmpp
```
This gives me an FID of 2.08169, which is still far from the FID in the paper (1.97).
I think this may be caused by the random seeds (random init in the code).
Is it possible to share the seed for reproducing the result in Table 2?
Any suggestions?
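This is not the repo's own seeding logic, but as a generic check on the "random init" hypothesis one can pin all the global RNGs before building the network and dataloader, so two runs at least start from identical weights and shuffling. A minimal PyTorch sketch:

```python
import random
import numpy as np
import torch

def set_global_seed(seed: int = 0) -> None:
    # Pin every RNG that can affect weight init and data shuffling.
    # This removes init-seed variation between runs, but does not by itself
    # guarantee bit-identical multi-GPU training.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_global_seed(0)
```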