[MMSIG] Add new configuration files for StyleGAN2 #2057

Merged · 106 commits · Dec 11, 2023

Commits
feac9ba
1st
xiaomile Jun 2, 2023
b100239
debug
xiaomile Jun 21, 2023
b080671
20230710 adjustments
xiaomile Jul 10, 2023
bda8007
Refactor code and consolidate models to avoid importing too many classes in editors
xiaomile Jul 17, 2023
e990639
Refactor code and consolidate models to avoid importing too many classes in editors
xiaomile Jul 17, 2023
5f055e9
Merge branch 'main' of https://github.com/xiaomile/mmagic
xiaomile Jul 24, 2023
02a0619
Support DeblurGANv2 inference
xiaomile Jul 24, 2023
cbbea41
Support DeblurGANv2 inference
xiaomile Jul 24, 2023
e836c69
Support DeblurGANv2 inference
xiaomile Jul 24, 2023
0be4fde
Support DeblurGANv2 inference
xiaomile Jul 24, 2023
cafc451
Support DeblurGANv2 inference
xiaomile Jul 24, 2023
0a9e274
Support DeblurGANv2 inference
xiaomile Jul 24, 2023
61aadee
Support DeblurGANv2 inference
xiaomile Jul 24, 2023
80bf698
Support DeblurGANv2 inference
xiaomile Jul 24, 2023
33c7751
Support DeblurGANv2 inference
xiaomile Jul 25, 2023
2b0bbf2
Support DeblurGANv2 inference
xiaomile Jul 25, 2023
ee28acf
Support DeblurGANv2 inference
xiaomile Jul 25, 2023
eab4f43
Support DeblurGANv2 inference
xiaomile Jul 25, 2023
6623669
Support DeblurGANv2 inference
xiaomile Jul 26, 2023
1e91d3f
Support DeblurGANv2 inference
xiaomile Jul 26, 2023
25d133d
Support DeblurGANv2 inference
xiaomile Jul 26, 2023
b4ae7b8
Support DeblurGANv2 inference
xiaomile Jul 26, 2023
f828049
Support DeblurGANv2 inference
xiaomile Jul 26, 2023
320ca05
Support DeblurGANv2 inference
xiaomile Jul 26, 2023
b933ba9
Support DeblurGANv2 inference
xiaomile Jul 27, 2023
b1ac1df
Support DeblurGANv2 inference
xiaomile Jul 27, 2023
d333d56
Support DeblurGANv2 inference
xiaomile Jul 27, 2023
dcb5987
Merge branch 'main' into main
xiaomile Jul 27, 2023
76187c3
Support DeblurGANv2 inference
xiaomile Jul 27, 2023
1d0ced9
Merge branch 'main' into main
zengyh1900 Jul 28, 2023
8a1dadc
Update .gitignore
xiaomile Jul 30, 2023
49a45cc
Update .gitignore
xiaomile Jul 30, 2023
2bdfedf
Update .gitignore
xiaomile Jul 30, 2023
8dbf240
Update .gitignore
xiaomile Jul 30, 2023
420de7e
Update .gitignore
xiaomile Jul 30, 2023
8809347
Update .gitignore
xiaomile Jul 30, 2023
b9fe117
Update .gitignore
xiaomile Jul 30, 2023
3bd19e5
Update configs/deblurganv2/README.md
xiaomile Jul 30, 2023
84f9592
Support DeblurGANv2 inference
xiaomile Aug 2, 2023
64800e5
Merge branch 'main' into main
xiaomile Aug 2, 2023
b262ca6
Support DeblurGANv2 inference
xiaomile Aug 2, 2023
d4fb484
Merge branch 'main' of https://github.com/xiaomile/mmagic
xiaomile Aug 2, 2023
52fdc15
Support DeblurGANv2 inference
xiaomile Aug 2, 2023
6856137
Update configs/deblurganv2/deblurganv2_fpn-inception_1xb1_gopro.py
xiaomile Aug 7, 2023
cb281d6
Update configs/deblurganv2/deblurganv2_fpn-inception_1xb1_gopro.py
xiaomile Aug 7, 2023
1211530
Update configs/deblurganv2/deblurganv2_fpn-inception_1xb1_gopro.py
xiaomile Aug 7, 2023
5de913e
Update configs/deblurganv2/deblurganv2_fpn-inception_1xb1_gopro.py
xiaomile Aug 7, 2023
8bd0803
Update configs/deblurganv2/deblurganv2_fpn-mobilenet_1xb1_gopro.py
xiaomile Aug 7, 2023
f18bb29
Update configs/deblurganv2/deblurganv2_fpn-mobilenet_1xb1_gopro.py
xiaomile Aug 7, 2023
8c97de1
Update configs/deblurganv2/deblurganv2_fpn-mobilenet_1xb1_gopro.py
xiaomile Aug 7, 2023
f21e10f
Update configs/deblurganv2/deblurganv2_fpn-mobilenet_1xb1_gopro.py
xiaomile Aug 7, 2023
98741ee
Support DeblurGANv2 inference
xiaomile Aug 8, 2023
ce8a9b2
Merge branch 'open-mmlab:main' into main
xiaomile Aug 9, 2023
c2b4666
Merge branch 'open-mmlab:main' into main
xiaomile Aug 30, 2023
0fb88f2
Adding support for FastComposer
xiaomile Aug 30, 2023
4014991
Adding support for FastComposer
xiaomile Aug 31, 2023
21c0ce3
Adding support for FastComposer
xiaomile Aug 31, 2023
67d3bf3
Adding support for FastComposer
xiaomile Aug 31, 2023
7f51930
Adding support for FastComposer
xiaomile Aug 31, 2023
306cc83
Adding support for FastComposer
xiaomile Aug 31, 2023
1b47eae
Adding support for FastComposer
xiaomile Aug 31, 2023
b74e551
Adding support for FastComposer
xiaomile Aug 31, 2023
1ed33d2
Adding support for FastComposer
xiaomile Aug 31, 2023
6d0b8f9
Merge branch 'main' into main
xiaomile Sep 1, 2023
a987beb
Adding support for FastComposer
xiaomile Sep 1, 2023
69499a0
Merge branch 'main' into main
xiaomile Sep 1, 2023
254a71f
Merge branch 'main' of https://github.com/xiaomile/mmagic
xiaomile Sep 1, 2023
12f16e0
Adding support for FastComposer
xiaomile Sep 1, 2023
38b7efa
Merge branch 'main' into main
xiaomile Sep 4, 2023
7c5b905
Merge branch 'main' into main
xiaomile Sep 5, 2023
8901eb1
Adding support for FastComposer
xiaomile Sep 5, 2023
8f9647e
Adding support for FastComposer
xiaomile Sep 5, 2023
f74ef3d
Adding support for FastComposer
xiaomile Sep 6, 2023
c2111da
Adding support for FastComposer
xiaomile Sep 6, 2023
9e3781e
Adding support for FastComposer
xiaomile Sep 6, 2023
a4b3491
Merge branch 'main' into main
xiaomile Sep 7, 2023
f845fdf
Adding support for FastComposer
xiaomile Sep 7, 2023
312c5f0
Adding support for FastComposer
xiaomile Sep 7, 2023
4bbacbb
Adding support for FastComposer
xiaomile Sep 8, 2023
672ad69
Merge branch 'main' of https://github.com/xiaomile/mmagic
xiaomile Sep 25, 2023
b820313
Adding support for Consistency Models
xiaomile Oct 3, 2023
31b3298
Adding support for Consistency Models
xiaomile Oct 3, 2023
f34020e
Update README.md
xiaomile Oct 3, 2023
df005b4
Adding support for Consistency Models
xiaomile Oct 3, 2023
ce9a74d
Adding support for Consistency Models
xiaomile Oct 3, 2023
b372918
Adding support for Consistency Models
xiaomile Oct 3, 2023
2c4c010
Adding support for Consistency Models
xiaomile Oct 3, 2023
864b3aa
Adding support for Consistency Models
xiaomile Oct 7, 2023
a49cb59
Adding support for Consistency Models
xiaomile Oct 7, 2023
d6bfcfc
Merge branch 'main' into main
xiaomile Oct 11, 2023
3260c13
[FIX] Check circle ci memory
xiaomile Oct 13, 2023
55d5003
Merge branch 'main' into main
liuwenran Oct 18, 2023
83b6dfb
Merge branch 'main' into main
liuwenran Oct 19, 2023
2ddab89
Merge branch 'main' into main
xiaomile Oct 20, 2023
588a979
Adding support for Consistency Models
xiaomile Oct 20, 2023
a043b22
Add new configuration files for StyleGAN2
xiaomile Oct 27, 2023
bb082df
Revert "Add new configuration files for StyleGAN2"
xiaomile Oct 27, 2023
e07e710
Add new configuration files for StyleGAN2
xiaomile Oct 27, 2023
b581c2f
fix config-validate error
xiaomile Oct 27, 2023
d764531
fix a bug
xiaomile Oct 29, 2023
b62c71f
delete code of consistency model
xiaomile Oct 29, 2023
6af0ac3
delete code which in another pr
xiaomile Oct 29, 2023
aaa07c4
delete code which in another pr
xiaomile Oct 29, 2023
ea734f9
Add new configuration files for StyleGAN2
xiaomile Oct 30, 2023
9e675b4
ci check memory
xiaomile Oct 30, 2023
30a4254
Merge branch 'main' into main2
xiaomile Nov 27, 2023
Changes from all commits
@@ -0,0 +1,89 @@
# Copyright (c) OpenMMLab. All rights reserved.
from mmengine.config import read_base
from torch.optim import Adam

from mmagic.engine import VisualizationHook
from mmagic.evaluation import (FrechetInceptionDistance, PerceptualPathLength,
                               PrecisionAndRecall)
from mmagic.models import BaseGAN

with read_base():
    from .._base_.datasets.ffhq_flip import *  # noqa: F403,F405
    from .._base_.gen_default_runtime import *  # noqa: F403,F405
    from .._base_.models.base_styleganv2 import *  # noqa: F403,F405

# reg params
d_reg_interval = 16
g_reg_interval = 4

g_reg_ratio = g_reg_interval / (g_reg_interval + 1)
d_reg_ratio = d_reg_interval / (d_reg_interval + 1)

ema_half_life = 10.  # G_smoothing_kimg

model.update(
    generator=dict(out_size=256),
    discriminator=dict(in_size=256),
    ema_config=dict(
        type=ExponentialMovingAverage,
        interval=1,
        momentum=1. - (0.5**(32. / (ema_half_life * 1000.)))),
    loss_config=dict(
        r1_loss_weight=10. / 2. * d_reg_interval,
        r1_interval=d_reg_interval,
        norm_mode='HWC',
        g_reg_interval=g_reg_interval,
        g_reg_weight=2. * g_reg_interval,
        pl_batch_shrink=2))

train_cfg.update(max_iters=800002)

optim_wrapper.update(
    generator=dict(
        optimizer=dict(
            type=Adam, lr=0.002 * g_reg_ratio, betas=(0, 0.99**g_reg_ratio))),
    discriminator=dict(
        optimizer=dict(
            type=Adam, lr=0.002 * d_reg_ratio, betas=(0, 0.99**d_reg_ratio))))

batch_size = 4
data_root = './data/ffhq/ffhq_imgs/ffhq_256'

train_dataloader.update(
    batch_size=batch_size, dataset=dict(data_root=data_root))

val_dataloader.update(batch_size=batch_size, dataset=dict(data_root=data_root))

test_dataloader.update(
    batch_size=batch_size, dataset=dict(data_root=data_root))

# VIS_HOOK
custom_hooks = [
    dict(
        type=VisualizationHook,
        interval=5000,
        fixed_input=True,
        vis_kwargs_list=dict(type=BaseGAN, name='fake_img'))
]

# METRICS
metrics = [
    dict(
        type=FrechetInceptionDistance,
        prefix='FID-50k',
        fake_nums=50000,
        real_nums=50000,
        inception_style='StyleGAN',
        sample_model='ema'),
    dict(type=PrecisionAndRecall, fake_nums=50000, prefix='PR-50K'),
    dict(type=PerceptualPathLength, fake_nums=50000, prefix='ppl-w')
]
# NOTE: config for save multi best checkpoints
# default_hooks.update(
#     checkpoint=dict(
#         save_best=['FID-Full-50k/fid', 'IS-50k/is'],
#         rule=['less', 'greater']))
default_hooks.update(checkpoint=dict(save_best='FID-50k/fid'))

val_evaluator.update(metrics=metrics)
test_evaluator.update(metrics=metrics)
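
The arithmetic folded into the config above follows the StyleGAN2 lazy-regularization recipe: because R1 and path-length regularization only run every d_reg_interval / g_reg_interval steps, the Adam learning rate and betas are rescaled by reg_interval / (reg_interval + 1). A standalone sanity check of the resulting numbers (plain Python, nothing imported from mmagic; the constant 32 in the EMA term follows the reference StyleGAN2 implementation, presumably the global images per step):

g_reg_interval, d_reg_interval = 4, 16
ema_half_life = 10.  # kimg, as in the config

g_reg_ratio = g_reg_interval / (g_reg_interval + 1)  # 0.8
d_reg_ratio = d_reg_interval / (d_reg_interval + 1)  # ~0.9412

print(0.002 * g_reg_ratio, 0.99**g_reg_ratio)  # generator: lr ~0.0016, beta2 ~0.9920
print(0.002 * d_reg_ratio, 0.99**d_reg_ratio)  # discriminator: lr ~0.00188, beta2 ~0.9906
print(10. / 2. * d_reg_interval, 2. * g_reg_interval)  # r1 weight 80.0, path-length weight 8.0
print(1. - 0.5**(32. / (ema_half_life * 1000.)))  # EMA momentum ~0.00222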
@@ -0,0 +1,88 @@
# Copyright (c) OpenMMLab. All rights reserved.
from mmengine.config import read_base
from torch.optim import Adam

from mmagic.engine import VisualizationHook
from mmagic.evaluation import (FrechetInceptionDistance, PerceptualPathLength,
                               PrecisionAndRecall)
from mmagic.models import BaseGAN

with read_base():
    from .._base_.datasets.lsun_stylegan import *  # noqa: F403,F405
    from .._base_.gen_default_runtime import *  # noqa: F403,F405
    from .._base_.models.base_styleganv2 import *  # noqa: F403,F405

# reg params
d_reg_interval = 16
g_reg_interval = 4

g_reg_ratio = g_reg_interval / (g_reg_interval + 1)
d_reg_ratio = d_reg_interval / (d_reg_interval + 1)

ema_half_life = 10.  # G_smoothing_kimg

model.update(
    generator=dict(out_size=256),
    discriminator=dict(in_size=256),
    ema_config=dict(
        type=ExponentialMovingAverage,
        interval=1,
        momentum=1. - (0.5**(32. / (ema_half_life * 1000.)))),
    loss_config=dict(
        r1_loss_weight=10. / 2. * d_reg_interval,
        r1_interval=d_reg_interval,
        norm_mode='HWC',
        g_reg_interval=g_reg_interval,
        g_reg_weight=2. * g_reg_interval,
        pl_batch_shrink=2))

train_cfg.update(max_iters=800002)

optim_wrapper.update(
    generator=dict(
        optimizer=dict(
            type=Adam, lr=0.002 * g_reg_ratio, betas=(0, 0.99**g_reg_ratio))),
    discriminator=dict(
        optimizer=dict(
            type=Adam, lr=0.002 * d_reg_ratio, betas=(0, 0.99**d_reg_ratio))))

batch_size = 4
data_root = './data/lsun-cat'

train_dataloader.update(
    batch_size=batch_size, dataset=dict(data_root=data_root))

val_dataloader.update(batch_size=batch_size, dataset=dict(data_root=data_root))

test_dataloader.update(
    batch_size=batch_size, dataset=dict(data_root=data_root))

# VIS_HOOK
custom_hooks = [
    dict(
        type=VisualizationHook,
        interval=5000,
        fixed_input=True,
        vis_kwargs_list=dict(type=BaseGAN, name='fake_img'))
]

# METRICS
metrics = [
    dict(
        type=FrechetInceptionDistance,
        prefix='FID-Full-50k',
        fake_nums=50000,
        inception_style='StyleGAN',
        sample_model='ema'),
    dict(type=PrecisionAndRecall, fake_nums=50000, prefix='PR-50K'),
    dict(type=PerceptualPathLength, fake_nums=50000, prefix='ppl-w')
]
# NOTE: config for save multi best checkpoints
# default_hooks.update(
#     checkpoint=dict(
#         save_best=['FID-Full-50k/fid', 'IS-50k/is'],
#         rule=['less', 'greater']))
default_hooks.update(checkpoint=dict(save_best='FID-Full-50k/fid'))

val_evaluator.update(metrics=metrics)
test_evaluator.update(metrics=metrics)
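
These are mmengine's pure-Python ("new style") configs, so they are launched the same way as the dict-based ones. A minimal sketch, assuming mmengine >= 0.8 with pure-Python config support and mmagic installed; the config path and work_dir below are hypothetical placeholders, not names introduced by this PR:

from mmengine.config import Config
from mmengine.runner import Runner

# Hypothetical path to one of the new StyleGAN2 configs added here.
cfg = Config.fromfile('configs/styleganv2/stylegan2_lsun_cat_256.py')
cfg.work_dir = './work_dirs/stylegan2_lsun_cat_256'  # hypothetical output dir

runner = Runner.from_cfg(cfg)
runner.train()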
@@ -0,0 +1,88 @@
# Copyright (c) OpenMMLab. All rights reserved.
from mmengine.config import read_base
from torch.optim import Adam

from mmagic.engine import VisualizationHook
from mmagic.evaluation import (FrechetInceptionDistance, PerceptualPathLength,
                               PrecisionAndRecall)
from mmagic.models import BaseGAN

with read_base():
    from .._base_.datasets.lsun_stylegan import *  # noqa: F403,F405
    from .._base_.gen_default_runtime import *  # noqa: F403,F405
    from .._base_.models.base_styleganv2 import *  # noqa: F403,F405

# reg params
d_reg_interval = 16
g_reg_interval = 4

g_reg_ratio = g_reg_interval / (g_reg_interval + 1)
d_reg_ratio = d_reg_interval / (d_reg_interval + 1)

ema_half_life = 10.  # G_smoothing_kimg

model.update(
    generator=dict(out_size=256),
    discriminator=dict(in_size=256),
    ema_config=dict(
        type=ExponentialMovingAverage,
        interval=1,
        momentum=1. - (0.5**(32. / (ema_half_life * 1000.)))),
    loss_config=dict(
        r1_loss_weight=10. / 2. * d_reg_interval,
        r1_interval=d_reg_interval,
        norm_mode='HWC',
        g_reg_interval=g_reg_interval,
        g_reg_weight=2. * g_reg_interval,
        pl_batch_shrink=2))

train_cfg.update(max_iters=800002)

optim_wrapper.update(
    generator=dict(
        optimizer=dict(
            type=Adam, lr=0.002 * g_reg_ratio, betas=(0, 0.99**g_reg_ratio))),
    discriminator=dict(
        optimizer=dict(
            type=Adam, lr=0.002 * d_reg_ratio, betas=(0, 0.99**d_reg_ratio))))

batch_size = 4
data_root = './data/lsun-church'

train_dataloader.update(
    batch_size=batch_size, dataset=dict(data_root=data_root))

val_dataloader.update(batch_size=batch_size, dataset=dict(data_root=data_root))

test_dataloader.update(
    batch_size=batch_size, dataset=dict(data_root=data_root))

# VIS_HOOK
custom_hooks = [
    dict(
        type=VisualizationHook,
        interval=5000,
        fixed_input=True,
        vis_kwargs_list=dict(type=BaseGAN, name='fake_img'))
]

# METRICS
metrics = [
    dict(
        type=FrechetInceptionDistance,
        prefix='FID-Full-50k',
        fake_nums=50000,
        inception_style='StyleGAN',
        sample_model='ema'),
    dict(type=PrecisionAndRecall, fake_nums=50000, prefix='PR-50K'),
    dict(type=PerceptualPathLength, fake_nums=50000, prefix='ppl-w')
]
# NOTE: config for save multi best checkpoints
# default_hooks.update(
#     checkpoint=dict(
#         save_best=['FID-Full-50k/fid', 'IS-50k/is'],
#         rule=['less', 'greater']))
default_hooks.update(checkpoint=dict(save_best='FID-Full-50k/fid'))

val_evaluator.update(metrics=metrics)
test_evaluator.update(metrics=metrics)
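
Note that the commented-out multi-best block above references an 'IS-50k/is' key, but the metrics list only defines FID, precision/recall, and PPL, so enabling it verbatim would track a metric that is never computed. A hedged sketch of what would additionally be needed, assuming mmagic's InceptionScore metric is available and using an illustrative 'IS-50k' prefix:

from mmagic.evaluation import InceptionScore  # assumption: exported alongside the metrics above

# Add an Inception Score entry so the 'IS-50k/is' key actually exists,
# then track the best checkpoint for both metrics.
metrics.append(
    dict(type=InceptionScore, fake_nums=50000, prefix='IS-50k',
         sample_model='ema'))
default_hooks.update(
    checkpoint=dict(
        save_best=['FID-Full-50k/fid', 'IS-50k/is'],
        rule=['less', 'greater']))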
@@ -0,0 +1,88 @@
# Copyright (c) OpenMMLab. All rights reserved.
from mmengine.config import read_base
from torch.optim import Adam

from mmagic.engine import VisualizationHook
from mmagic.evaluation import (FrechetInceptionDistance, PerceptualPathLength,
                               PrecisionAndRecall)
from mmagic.models import BaseGAN

with read_base():
    from .._base_.datasets.lsun_stylegan import *  # noqa: F403,F405
    from .._base_.gen_default_runtime import *  # noqa: F403,F405
    from .._base_.models.base_styleganv2 import *  # noqa: F403,F405

# reg params
d_reg_interval = 16
g_reg_interval = 4

g_reg_ratio = g_reg_interval / (g_reg_interval + 1)
d_reg_ratio = d_reg_interval / (d_reg_interval + 1)

ema_half_life = 10.  # G_smoothing_kimg

model.update(
    generator=dict(out_size=256),
    discriminator=dict(in_size=256),
    ema_config=dict(
        type=ExponentialMovingAverage,
        interval=1,
        momentum=1. - (0.5**(32. / (ema_half_life * 1000.)))),
    loss_config=dict(
        r1_loss_weight=10. / 2. * d_reg_interval,
        r1_interval=d_reg_interval,
        norm_mode='HWC',
        g_reg_interval=g_reg_interval,
        g_reg_weight=2. * g_reg_interval,
        pl_batch_shrink=2))

train_cfg.update(max_iters=800002)

optim_wrapper.update(
    generator=dict(
        optimizer=dict(
            type=Adam, lr=0.002 * g_reg_ratio, betas=(0, 0.99**g_reg_ratio))),
    discriminator=dict(
        optimizer=dict(
            type=Adam, lr=0.002 * d_reg_ratio, betas=(0, 0.99**d_reg_ratio))))

batch_size = 4
data_root = './data/lsun-horse'

train_dataloader.update(
    batch_size=batch_size, dataset=dict(data_root=data_root))

val_dataloader.update(batch_size=batch_size, dataset=dict(data_root=data_root))

test_dataloader.update(
    batch_size=batch_size, dataset=dict(data_root=data_root))

# VIS_HOOK
custom_hooks = [
    dict(
        type=VisualizationHook,
        interval=5000,
        fixed_input=True,
        vis_kwargs_list=dict(type=BaseGAN, name='fake_img'))
]

# METRICS
metrics = [
    dict(
        type=FrechetInceptionDistance,
        prefix='FID-Full-50k',
        fake_nums=50000,
        inception_style='StyleGAN',
        sample_model='ema'),
    dict(type=PrecisionAndRecall, fake_nums=50000, prefix='PR-50K'),
    dict(type=PerceptualPathLength, fake_nums=50000, prefix='ppl-w')
]
# NOTE: config for save multi best checkpoints
# default_hooks.update(
#     checkpoint=dict(
#         save_best=['FID-Full-50k/fid', 'IS-50k/is'],
#         rule=['less', 'greater']))
default_hooks.update(checkpoint=dict(save_best='FID-Full-50k/fid'))

val_evaluator.update(metrics=metrics)
test_evaluator.update(metrics=metrics)
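
The four configs differ only in data_root and the FID entry: the FFHQ file compares against a fixed 50k real subset (prefix 'FID-50k', real_nums=50000), while the LSUN files compute FID against the full real set ('FID-Full-50k'). For rough planning, the schedule can be read in kimg; the per-dataloader batch size is 4 and the config does not fix the number of GPUs, so total throughput scales with it. A small worked example:

max_iters = 800002
imgs_per_iter_per_gpu = 4
for n_gpus in (1, 4, 8):
    kimg = max_iters * imgs_per_iter_per_gpu * n_gpus / 1000
    print(n_gpus, round(kimg))  # ~3200, ~12800, ~25600 kimg respectively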