Commit

Add new configuration files for StyleGAN2

xiaomile committed Oct 27, 2023
1 parent 588a979 commit a043b22
Showing 14 changed files with 39 additions and 39 deletions.
2 changes: 1 addition & 1 deletion .dev_scripts/test_benchmark.yml
@@ -49,7 +49,7 @@ cases:
params:
checkpoint: stylegan2_c2_ffhq_256_b4x8_20210407_160709-7890ae1f.pth
checkpoint_url: https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_ffhq_256_b4x8_20210407_160709-7890ae1f.pth
config: configs/styleganv2/stylegan2_c2-PL_8xb4-fp16-partial-GD-no-scaler-800kiters_ffhq-256x256.py
config: configs/styleganv2/stylegan2_c2_PL_8xb4_fp16_partial_GD_no_scaler_800kiters_ffhq_256x256.py
cpus_per_node: 4
gpus: 8
gpus_per_node: 8
2 changes: 1 addition & 1 deletion .dev_scripts/train_benchmark.yml
@@ -49,7 +49,7 @@ cases:
params:
checkpoint: stylegan2_c2_ffhq_256_b4x8_20210407_160709-7890ae1f.pth
checkpoint_url: https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_ffhq_256_b4x8_20210407_160709-7890ae1f.pth
config: configs/styleganv2/stylegan2_c2-PL_8xb4-fp16-partial-GD-no-scaler-800kiters_ffhq-256x256.py
config: configs/styleganv2/stylegan2_c2_PL_8xb4_fp16_partial_GD_no_scaler_800kiters_ffhq_256x256.py
cpus_per_node: 4
gpus: 8
gpus_per_node: 8
30 changes: 15 additions & 15 deletions configs/styleganv2/README.md
@@ -28,14 +28,14 @@ The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling.

| Model | Dataset | Comment | FID50k | Precision50k | Recall50k | Download |
| :----------------------------------------------------------------------: | :---------: | :-------------: | :----: | :----------: | :-------: | :-------------------------------------------------------------------------: |
| [stylegan2_c2_8xb4_ffhq-1024x1024](./stylegan2_c2_8xb4_ffhq-1024x1024.py) | FFHQ | official weight | 2.8134 | 62.856 | 49.400 | [model](https://download.openmmlab.com/mmediting/stylegan2/official_weights/stylegan2-ffhq-config-f-official_20210327_171224-bce9310c.pth) |
| [stylegan2_c2_8xb4_lsun-car-384x512](./stylegan2_c2_8xb4_lsun-car-384x512.py) | LSUN_CAR | official weight | 5.4316 | 65.986 | 48.190 | [model](https://download.openmmlab.com/mmediting/stylegan2/official_weights/stylegan2-car-config-f-official_20210327_172340-8cfe053c.pth) |
| [stylegan2_c2_8xb4-800kiters_lsun-horse-256x256](./stylegan2_c2_8xb4-800kiters_lsun-horse-256x256.py) | LSUN_HORSE | official weight | - | - | - | [model](https://download.openmmlab.com/mmediting/stylegan2/official_weights/stylegan2-horse-config-f-official_20210327_173203-ef3e69ca.pth) |
| [stylegan2_c2_8xb4-800kiters_lsun-church-256x256](./stylegan2_c2_8xb4-800kiters_lsun-church-256x256.py) | LSUN_CHURCH | official weight | - | - | - | [model](https://download.openmmlab.com/mmediting/stylegan2/official_weights/stylegan2-church-config-f-official_20210327_172657-1d42b7d1.pth) |
| [stylegan2_c2_8xb4-800kiters_lsun-cat-256x256](./stylegan2_c2_8xb4-800kiters_lsun-cat-256x256.py) | LSUN_CAT | official weight | - | - | - | [model](https://download.openmmlab.com/mmediting/stylegan2/official_weights/stylegan2-cat-config-f-official_20210327_172444-15bc485b.pth) |
| [stylegan2_c2_8xb4-800kiters_ffhq-256x256](./stylegan2_c2_8xb4-800kiters_ffhq-256x256.py) | FFHQ | our training | 3.992 | 69.012 | 40.417 | [model](https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_ffhq_256_b4x8_20210407_160709-7890ae1f.pth) |
| [stylegan2_c2_8xb4_ffhq-1024x1024](./stylegan2_c2_8xb4_ffhq-1024x1024.py) | FFHQ | our training | 2.8185 | 68.236 | 49.583 | [model](https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_ffhq_1024_b4x8_20210407_150045-618c9024.pth) |
| [stylegan2_c2_8xb4_lsun-car-384x512](./stylegan2_c2_8xb4_lsun-car-384x512.py) | LSUN_CAR | our training | 2.4116 | 66.760 | 50.576 | [model](https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_lsun-car_384x512_b4x8_1800k_20210424_160929-fc9072ca.pth) |
| [stylegan2_c2_8xb4_ffhq-1024x1024](./stylegan2_c2_8xb4_ffhq_1024x1024.py) | FFHQ | official weight | 2.8134 | 62.856 | 49.400 | [model](https://download.openmmlab.com/mmediting/stylegan2/official_weights/stylegan2-ffhq-config-f-official_20210327_171224-bce9310c.pth) |
| [stylegan2_c2_8xb4_lsun-car-384x512](./stylegan2_c2_8xb4_lsun_car_384x512.py) | LSUN_CAR | official weight | 5.4316 | 65.986 | 48.190 | [model](https://download.openmmlab.com/mmediting/stylegan2/official_weights/stylegan2-car-config-f-official_20210327_172340-8cfe053c.pth) |
| [stylegan2_c2_8xb4-800kiters_lsun-horse-256x256](./stylegan2_c2_8xb4_800kiters_lsun_horse_256x256.py) | LSUN_HORSE | official weight | - | - | - | [model](https://download.openmmlab.com/mmediting/stylegan2/official_weights/stylegan2-horse-config-f-official_20210327_173203-ef3e69ca.pth) |
| [stylegan2_c2_8xb4-800kiters_lsun-church-256x256](./stylegan2_c2_8xb4_800kiters_lsun_church_256x256.py) | LSUN_CHURCH | official weight | - | - | - | [model](https://download.openmmlab.com/mmediting/stylegan2/official_weights/stylegan2-church-config-f-official_20210327_172657-1d42b7d1.pth) |
| [stylegan2_c2_8xb4-800kiters_lsun-cat-256x256](./stylegan2_c2_8xb4_800kiters_lsun_cat_256x256.py) | LSUN_CAT | official weight | - | - | - | [model](https://download.openmmlab.com/mmediting/stylegan2/official_weights/stylegan2-cat-config-f-official_20210327_172444-15bc485b.pth) |
| [stylegan2_c2_8xb4-800kiters_ffhq-256x256](./stylegan2_c2_8xb4_800kiters_ffhq_256x256.py) | FFHQ | our training | 3.992 | 69.012 | 40.417 | [model](https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_ffhq_256_b4x8_20210407_160709-7890ae1f.pth) |
| [stylegan2_c2_8xb4_ffhq-1024x1024](./stylegan2_c2_8xb4_ffhq_1024x1024.py) | FFHQ | our training | 2.8185 | 68.236 | 49.583 | [model](https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_ffhq_1024_b4x8_20210407_150045-618c9024.pth) |
| [stylegan2_c2_8xb4_lsun-car-384x512](./stylegan2_c2_8xb4_lsun_car_384x512.py) | LSUN_CAR | our training | 2.4116 | 66.760 | 50.576 | [model](https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_lsun-car_384x512_b4x8_1800k_20210424_160929-fc9072ca.pth) |

## FP16 Support and Experiments

@@ -49,16 +49,16 @@ Currently, we have supported FP16 training for StyleGAN2, and here are the results.

As shown in the figure, we provide **3** ways to do mixed-precision training for `StyleGAN2`:

- [stylegan2_c2_fp16_PL-no-scaler](./stylegan2_c2-PL_8xb4-fp16-partial-GD-no-scaler-800kiters_ffhq-256x256.py): In this setting, we follow the official FP16 implementation in [StyleGAN2-ADA](https://github.com/NVlabs/stylegan2-ada) as closely as possible. As in the official version, we adopt FP16 training only for the higher-resolution feature maps (the last 4 stages in G and the first 4 stages in D). Note that we do not adopt the `clamp` trick used in the official implementation to avoid gradient overflow; instead, we use the `autocast` function from the `torch.cuda.amp` package.
- [stylegan2_c2_fp16-globalG-partialD_PL-R1-no-scaler](./stylegan2_c2-PL-R1_8xb4-fp16-globalG-partialD-no-scaler-800kiters_ffhq-256x256.py): In this config, we adopt mixed-precision training for the whole generator, but only for part of the discriminator (the first 4 higher-resolution stages). Note that we do not apply the loss scaler to the path length loss or the gradient penalty loss, because training always diverges once the loss scaler is used to scale the gradients in these two losses.
- [stylegan2_c2_apex_fp16_PL-R1-no-scaler](./stylegan2_c2-PL-R1_8xb4-apex-fp16-no-scaler-800kiters_ffhq-256x256.py): In this setting, we adopt the [APEX](https://github.com/NVIDIA/apex) toolkit to implement mixed-precision training with multiple loss/gradient scalers; APEX lets you assign different loss scalers to the generator and the discriminator. Note that we still skip the gradient scaler for the path length loss and the gradient penalty loss.
- [stylegan2_c2_fp16_PL-no-scaler](./stylegan2_c2_PL_8xb4_fp16_partial_GD_no_scaler_800kiters_ffhq_256x256.py): In this setting, we follow the official FP16 implementation in [StyleGAN2-ADA](https://github.com/NVlabs/stylegan2-ada) as closely as possible. As in the official version, we adopt FP16 training only for the higher-resolution feature maps (the last 4 stages in G and the first 4 stages in D). Note that we do not adopt the `clamp` trick used in the official implementation to avoid gradient overflow; instead, we use the `autocast` function from the `torch.cuda.amp` package.
- [stylegan2_c2_fp16-globalG-partialD_PL-R1-no-scaler](./stylegan2_c2_PL_R1_8xb4_fp16_globalG_partialD_no_scaler_800kiters_ffhq_256x256.py): In this config, we adopt mixed-precision training for the whole generator, but only for part of the discriminator (the first 4 higher-resolution stages). Note that we do not apply the loss scaler to the path length loss or the gradient penalty loss, because training always diverges once the loss scaler is used to scale the gradients in these two losses.
- [stylegan2_c2_apex_fp16_PL-R1-no-scaler](./stylegan2_c2_PL_R1_8xb4_apex_fp16_no_scaler_800kiters_ffhq_256x256.py): In this setting, we adopt the [APEX](https://github.com/NVIDIA/apex) toolkit to implement mixed-precision training with multiple loss/gradient scalers; APEX lets you assign different loss scalers to the generator and the discriminator. Note that we still skip the gradient scaler for the path length loss and the gradient penalty loss.
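The "no scaler" caveats above boil down to how a dynamic loss scaler behaves. Below is a minimal, dependency-free sketch of that behavior (the class and method names are hypothetical; the real implementations are `torch.cuda.amp.GradScaler` and APEX's amp scalers):

```python
import math

class SimpleLossScaler:
    """Toy dynamic loss scaler: scale the loss up before backprop,
    unscale the gradients, and skip the update on overflow."""

    def __init__(self, init_scale=2.0 ** 16, growth_interval=2000):
        self.scale = init_scale
        self.growth_interval = growth_interval
        self._good_steps = 0

    def scale_loss(self, loss):
        # The scaled loss is what backprop would be run on.
        return loss * self.scale

    def step(self, grads, apply_fn):
        # Gradients arrive scaled; bring them back to their true magnitude.
        unscaled = [g / self.scale for g in grads]
        if any(math.isinf(g) or math.isnan(g) for g in unscaled):
            # Overflow detected: halve the scale and skip this optimizer step.
            self.scale /= 2.0
            self._good_steps = 0
            return False
        apply_fn(unscaled)
        self._good_steps += 1
        if self._good_steps >= self.growth_interval:
            # A long run of clean steps: try a larger scale again.
            self.scale *= 2.0
            self._good_steps = 0
        return True

scaler = SimpleLossScaler(init_scale=4.0)
applied = []
print(scaler.step([8.0, float("inf")], applied.extend))  # False: overflow, step skipped
print(scaler.step([8.0, 4.0], applied.extend), applied)  # True [4.0, 2.0]
```

Applying such a scaler to the path length and R1 gradient penalty terms is exactly the step the configs above skip, since scaling those second-order gradients was observed to cause divergence.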

| Model | Comment | Dataset | FID50k | Download |
| :----------------------------------------------------------------------: | :-------------------------------------: | :-----: | :----: | :--------------------------------------------------------------------------: |
| [stylegan2_c2_8xb4-800kiters_ffhq-256x256](./stylegan2_c2_8xb4-800kiters_ffhq-256x256.py) | baseline | FFHQ256 | 3.992 | [ckpt](https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_ffhq_256_b4x8_20210407_160709-7890ae1f.pth) |
| [stylegan2_c2-PL_8xb4-fp16-partial-GD-no-scaler-800kiters_ffhq-256x256](./stylegan2_c2-PL_8xb4-fp16-partial-GD-no-scaler-800kiters_ffhq-256x256.py) | partial layers in fp16 | FFHQ256 | 4.331 | [ckpt](https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_fp16_partial-GD_PL-no-scaler_ffhq_256_b4x8_800k_20210508_114854-dacbe4c9.pth) |
| [stylegan2_c2-PL-R1_8xb4-fp16-globalG-partialD-no-scaler-800kiters_ffhq-256x256](./stylegan2_c2-PL-R1_8xb4-fp16-globalG-partialD-no-scaler-800kiters_ffhq-256x256.py) | the whole G in fp16 | FFHQ256 | 4.362 | [ckpt](https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_fp16-globalG-partialD_PL-R1-no-scaler_ffhq_256_b4x8_800k_20210508_114930-ef8270d4.pth) |
| [stylegan2_c2-PL-R1_8xb4-apex-fp16-no-scaler-800kiters_ffhq-256x256](./stylegan2_c2-PL-R1_8xb4-apex-fp16-no-scaler-800kiters_ffhq-256x256.py) | the whole G&D in fp16 + two loss scaler | FFHQ256 | 4.614 | [ckpt](https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_apex_fp16_PL-R1-no-scaler_ffhq_256_b4x8_800k_20210508_114701-c2bb8afd.pth) |
| [stylegan2_c2_8xb4-800kiters_ffhq-256x256](./stylegan2_c2_8xb4_800kiters_ffhq_256x256.py) | baseline | FFHQ256 | 3.992 | [ckpt](https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_ffhq_256_b4x8_20210407_160709-7890ae1f.pth) |
| [stylegan2_c2-PL_8xb4-fp16-partial-GD-no-scaler-800kiters_ffhq-256x256](./stylegan2_c2_PL_8xb4_fp16_partial_GD_no_scaler_800kiters_ffhq_256x256.py) | partial layers in fp16 | FFHQ256 | 4.331 | [ckpt](https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_fp16_partial-GD_PL-no-scaler_ffhq_256_b4x8_800k_20210508_114854-dacbe4c9.pth) |
| [stylegan2_c2-PL-R1_8xb4-fp16-globalG-partialD-no-scaler-800kiters_ffhq-256x256](./stylegan2_c2_PL_R1_8xb4_fp16_globalG_partialD_no_scaler_800kiters_ffhq_256x256.py) | the whole G in fp16 | FFHQ256 | 4.362 | [ckpt](https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_fp16-globalG-partialD_PL-R1-no-scaler_ffhq_256_b4x8_800k_20210508_114930-ef8270d4.pth) |
| [stylegan2_c2-PL-R1_8xb4-apex-fp16-no-scaler-800kiters_ffhq-256x256](./stylegan2_c2_PL_R1_8xb4_apex_fp16_no_scaler_800kiters_ffhq_256x256.py) | the whole G&D in fp16 + two loss scaler | FFHQ256 | 4.614 | [ckpt](https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_apex_fp16_PL-R1-no-scaler_ffhq_256_b4x8_800k_20210508_114701-c2bb8afd.pth) |

In the precision/recall results above, `P&R50k_full` is the metric used in StyleGANv1 and StyleGANv2; `full` indicates that the whole dataset is used to extract the real distribution, e.g., all 70000 images in the FFHQ dataset. Note that adopting the VGG16 provided by [Tero](https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt) requires PyTorch `>=1.6.0`. Be careful about using PyTorch's own VGG16 to extract features, as it yields higher precision and recall values.
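Precision and recall of this kind are computed by testing whether features from one set fall inside a k-NN manifold estimated from the other set. A simplified, dependency-free sketch of that estimator (toy 2-D features and hypothetical helper names, not MMEditing's actual implementation):

```python
import math

def knn_radii(feats, k=3):
    """Distance from each feature to its k-th nearest neighbor
    within the same set (excluding itself)."""
    radii = []
    for i, f in enumerate(feats):
        dists = sorted(math.dist(f, g) for j, g in enumerate(feats) if j != i)
        radii.append(dists[k - 1])
    return radii

def manifold_coverage(probe_feats, ref_feats, ref_radii):
    """Fraction of probe features inside the k-NN manifold of the
    reference set: precision when probe=fake/ref=real,
    recall when probe=real/ref=fake."""
    hits = 0
    for p in probe_feats:
        if any(math.dist(p, r) <= rad for r, rad in zip(ref_feats, ref_radii)):
            hits += 1
    return hits / len(probe_feats)

# Toy features: real samples on a unit square, one fake inside, one far away.
real = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
fake = [(0.5, 0.5), (5.0, 5.0)]
precision = manifold_coverage(fake, real, knn_radii(real, k=2))
recall = manifold_coverage(real, fake, knn_radii(fake, k=1))
print(precision, recall)  # 0.5 1.0
```

In the real metric the features come from a fixed VGG16, which is why the choice of VGG16 weights changes the reported numbers.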

36 changes: 18 additions & 18 deletions configs/styleganv2/metafile.yml
@@ -8,9 +8,9 @@ Collections:
- unconditional gans
Year: 2020
Models:
- Config: configs/styleganv2/stylegan2_c2_8xb4_ffhq-1024x1024.py
- Config: configs/styleganv2/stylegan2_c2_8xb4_ffhq_1024x1024.py
In Collection: StyleGANv2
Name: stylegan2_c2_8xb4_ffhq-1024x1024
Name: stylegan2_c2_8xb4_ffhq_1024x1024
Results:
- Dataset: FFHQ
Metrics:
@@ -25,9 +25,9 @@ Models:
Recall50k: 49.583
Task: Unconditional GANs
Weights: https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_ffhq_1024_b4x8_20210407_150045-618c9024.pth
- Config: configs/styleganv2/stylegan2_c2_8xb4_lsun-car-384x512.py
- Config: configs/styleganv2/stylegan2_c2_8xb4_lsun_car_384x512.py
In Collection: StyleGANv2
Name: stylegan2_c2_8xb4_lsun-car-384x512
Name: stylegan2_c2_8xb4_lsun_car_384x512
Results:
- Dataset: LSUN_CAR
Metrics:
@@ -42,33 +42,33 @@ Models:
Recall50k: 50.576
Task: Unconditional GANs
Weights: https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_lsun-car_384x512_b4x8_1800k_20210424_160929-fc9072ca.pth
- Config: configs/styleganv2/stylegan2_c2_8xb4-800kiters_lsun-horse-256x256.py
- Config: configs/styleganv2/stylegan2_c2_8xb4_800kiters_lsun_horse_256x256.py
In Collection: StyleGANv2
Name: stylegan2_c2_8xb4-800kiters_lsun-horse-256x256
Name: stylegan2_c2_8xb4_800kiters_lsun_horse_256x256
Results:
- Dataset: LSUN_HORSE
Metrics: {}
Task: Unconditional GANs
Weights: https://download.openmmlab.com/mmediting/stylegan2/official_weights/stylegan2-horse-config-f-official_20210327_173203-ef3e69ca.pth
- Config: configs/styleganv2/stylegan2_c2_8xb4-800kiters_lsun-church-256x256.py
- Config: configs/styleganv2/stylegan2_c2_8xb4_800kiters_lsun_church_256x256.py
In Collection: StyleGANv2
Name: stylegan2_c2_8xb4-800kiters_lsun-church-256x256
Name: stylegan2_c2_8xb4_800kiters_lsun_church_256x256
Results:
- Dataset: LSUN_CHURCH
Metrics: {}
Task: Unconditional GANs
Weights: https://download.openmmlab.com/mmediting/stylegan2/official_weights/stylegan2-church-config-f-official_20210327_172657-1d42b7d1.pth
- Config: configs/styleganv2/stylegan2_c2_8xb4-800kiters_lsun-cat-256x256.py
- Config: configs/styleganv2/stylegan2_c2_8xb4_800kiters_lsun_cat_256x256.py
In Collection: StyleGANv2
Name: stylegan2_c2_8xb4-800kiters_lsun-cat-256x256
Name: stylegan2_c2_8xb4_800kiters_lsun_cat_256x256
Results:
- Dataset: LSUN_CAT
Metrics: {}
Task: Unconditional GANs
Weights: https://download.openmmlab.com/mmediting/stylegan2/official_weights/stylegan2-cat-config-f-official_20210327_172444-15bc485b.pth
- Config: configs/styleganv2/stylegan2_c2_8xb4-800kiters_ffhq-256x256.py
- Config: configs/styleganv2/stylegan2_c2_8xb4_800kiters_ffhq_256x256.py
In Collection: StyleGANv2
Name: stylegan2_c2_8xb4-800kiters_ffhq-256x256
Name: stylegan2_c2_8xb4_800kiters_ffhq_256x256
Results:
- Dataset: FFHQ
Metrics:
@@ -81,27 +81,27 @@ Models:
FID50k: 3.992
Task: Unconditional GANs
Weights: https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_ffhq_256_b4x8_20210407_160709-7890ae1f.pth
- Config: configs/styleganv2/stylegan2_c2-PL_8xb4-fp16-partial-GD-no-scaler-800kiters_ffhq-256x256.py
- Config: configs/styleganv2/stylegan2_c2_PL_8xb4_fp16_partial_GD_no_scaler_800kiters_ffhq_256x256.py
In Collection: StyleGANv2
Name: stylegan2_c2-PL_8xb4-fp16-partial-GD-no-scaler-800kiters_ffhq-256x256
Name: stylegan2_c2_PL_8xb4_fp16_partial_GD_no_scaler_800kiters_ffhq_256x256
Results:
- Dataset: FFHQ256
Metrics:
FID50k: 4.331
Task: Unconditional GANs
Weights: https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_fp16_partial-GD_PL-no-scaler_ffhq_256_b4x8_800k_20210508_114854-dacbe4c9.pth
- Config: configs/styleganv2/stylegan2_c2-PL-R1_8xb4-fp16-globalG-partialD-no-scaler-800kiters_ffhq-256x256.py
- Config: configs/styleganv2/stylegan2_c2_PL_R1_8xb4_fp16_globalG_partialD_no_scaler_800kiters_ffhq_256x256.py
In Collection: StyleGANv2
Name: stylegan2_c2-PL-R1_8xb4-fp16-globalG-partialD-no-scaler-800kiters_ffhq-256x256
Name: stylegan2_c2_PL_R1_8xb4_fp16_globalG_partialD_no_scaler_800kiters_ffhq_256x256
Results:
- Dataset: FFHQ256
Metrics:
FID50k: 4.362
Task: Unconditional GANs
Weights: https://download.openmmlab.com/mmediting/stylegan2/stylegan2_c2_fp16-globalG-partialD_PL-R1-no-scaler_ffhq_256_b4x8_800k_20210508_114930-ef8270d4.pth
- Config: configs/styleganv2/stylegan2_c2-PL-R1_8xb4-apex-fp16-no-scaler-800kiters_ffhq-256x256.py
- Config: configs/styleganv2/stylegan2_c2_PL_R1_8xb4_apex_fp16_no_scaler_800kiters_ffhq_256x256.py
In Collection: StyleGANv2
Name: stylegan2_c2-PL-R1_8xb4-apex-fp16-no-scaler-800kiters_ffhq-256x256
Name: stylegan2_c2_PL_R1_8xb4_apex_fp16_no_scaler_800kiters_ffhq_256x256
Results:
- Dataset: FFHQ256
Metrics:
@@ -1,6 +1,6 @@
"""Config for the `config-f` setting in StyleGAN2."""

_base_ = ['./stylegan2_c2_8xb4-800kiters_ffhq-256x256.py']
_base_ = ['./stylegan2_c2_8xb4_800kiters_ffhq_256x256.py']

model = dict(
generator=dict(out_size=256, num_fp16_scales=4),
@@ -1,6 +1,6 @@
"""Config for the `config-f` setting in StyleGAN2."""

_base_ = ['./stylegan2_c2_8xb4-800kiters_ffhq-256x256.py']
_base_ = ['./stylegan2_c2_8xb4_800kiters_ffhq_256x256.py']

model = dict(loss_config=dict(r1_use_apex_amp=False, g_reg_use_apex_amp=False))

@@ -1,6 +1,6 @@
"""Config for the `config-f` setting in StyleGAN2."""

_base_ = ['./stylegan2_c2_8xb4-800kiters_ffhq-256x256.py']
_base_ = ['./stylegan2_c2_8xb4_800kiters_ffhq_256x256.py']

model = dict(
generator=dict(out_size=256, fp16_enabled=True),
2 changes: 1 addition & 1 deletion docs/en/user_guides/useful_tools.md
@@ -67,5 +67,5 @@ MMGeneration incorporates config mechanism to set parameters used for training and testing.
An Example:

```shell
python tools/misc/print_config.py configs/styleganv2/stylegan2_c2-PL_8xb4-fp16-partial-GD-no-scaler-800kiters_ffhq-256x256.py
python tools/misc/print_config.py configs/styleganv2/stylegan2_c2_PL_8xb4_fp16_partial_GD_no_scaler_800kiters_ffhq_256x256.py
```
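`print_config.py` prints the config after `_base_` inheritance has been resolved, as used by the config files in this commit (`_base_ = ['./stylegan2_c2_8xb4_800kiters_ffhq_256x256.py']`). The merge semantics — child keys override, nested dicts merge recursively — can be sketched as follows (a simplified stand-in, not MMEngine's actual implementation):

```python
def merge_config(base: dict, child: dict) -> dict:
    """Recursively merge a child config onto a base config,
    mimicking (simplified) _base_ inheritance: child keys win,
    and nested dicts are merged rather than replaced."""
    merged = dict(base)
    for key, value in child.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)
        else:
            merged[key] = value
    return merged

# Mirrors the fp16 config above: the child only touches the generator,
# so the discriminator settings are inherited untouched from the base.
base = {"model": {"generator": {"out_size": 256},
                  "discriminator": {"in_size": 256}}}
child = {"model": {"generator": {"out_size": 256, "num_fp16_scales": 4}}}
print(merge_config(base, child))
```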
