[MMSIG] Add new configuration files for StyleGAN2 (#2057)
* 1st

* debug

* 20230710 adjustments

* Refactor the code and consolidate the models to avoid importing too many classes in editors

* Refactor the code and consolidate the models to avoid importing too many classes in editors

* Support DeblurGANv2 inference

Demo example (a Python equivalent is sketched just before the diffs below):
python mmagic/demo/mmagic_inference_demo.py --model-name deblurganv2 --model-config ../configs/deblurganv2/deblurganv2_fpn_inception.py --model-ckpt <path-to-weights> --img <path-to-test-image> --device cpu --result-out-dir ./out.png

* Support DeblurGANv2 inference

Fix CI test

* Support DeblurGANv2 inference

Fix CI test

* Support DeblurGANv2 inference

Fix CI test

* Support DeblurGANv2 inference

Update model-index

* Support DeblurGANv2 inference

Fix CI test

* Support DeblurGANv2 inference

Fix CI test and update README.md

* Support DeblurGANv2 inference

Fix CI test and update README.md

* Support DeblurGANv2 inference

Fix CI test

* Support DeblurGANv2 inference

yapf fix

* Support DeblurGANv2 inference

Code adjustments to keep parameter names consistent

* Support DeblurGANv2 inference

Docstring coverage

* Support DeblurGANv2 inference

Add some docstrings

* Support DeblurGANv2 inference

Add some docstrings

* Support DeblurGANv2 inference

* Support DeblurGANv2 inference

* Support DeblurGANv2 inference

* Support DeblurGANv2 inference

* Support DeblurGANv2 inference

Add unit test

* Support DeblurGANv2 inference

Add unit test

* Support DeblurGANv2 inference

Add unit test

* Support DeblurGANv2 inference

Fix unit test

* Update .gitignore

Co-authored-by: Yanhong Zeng <[email protected]>

* Update .gitignore

Co-authored-by: Yanhong Zeng <[email protected]>

* Update .gitignore

Co-authored-by: Yanhong Zeng <[email protected]>

* Update .gitignore

Co-authored-by: Yanhong Zeng <[email protected]>

* Update .gitignore

Co-authored-by: Yanhong Zeng <[email protected]>

* Update .gitignore

Co-authored-by: Yanhong Zeng <[email protected]>

* Update .gitignore

Co-authored-by: Yanhong Zeng <[email protected]>

* Update configs/deblurganv2/README.md

Co-authored-by: Yanhong Zeng <[email protected]>

* Support DeblurGANv2 inference

Move the loss function implementation to mmagic/models/losses
Add a quick start to the README

* Support DeblurGANv2 inference

Fix unit test

* Support DeblurGANv2 inference

Re-run unit test

* Update configs/deblurganv2/deblurganv2_fpn-inception_1xb1_gopro.py

Co-authored-by: Yanhong Zeng <[email protected]>

* Update configs/deblurganv2/deblurganv2_fpn-inception_1xb1_gopro.py

Co-authored-by: Yanhong Zeng <[email protected]>

* Update configs/deblurganv2/deblurganv2_fpn-inception_1xb1_gopro.py

Co-authored-by: Yanhong Zeng <[email protected]>

* Update configs/deblurganv2/deblurganv2_fpn-inception_1xb1_gopro.py

Co-authored-by: Yanhong Zeng <[email protected]>

* Update configs/deblurganv2/deblurganv2_fpn-mobilenet_1xb1_gopro.py

Co-authored-by: Yanhong Zeng <[email protected]>

* Update configs/deblurganv2/deblurganv2_fpn-mobilenet_1xb1_gopro.py

Co-authored-by: Yanhong Zeng <[email protected]>

* Update configs/deblurganv2/deblurganv2_fpn-mobilenet_1xb1_gopro.py

Co-authored-by: Yanhong Zeng <[email protected]>

* Update configs/deblurganv2/deblurganv2_fpn-mobilenet_1xb1_gopro.py

Co-authored-by: Yanhong Zeng <[email protected]>

* Support DeblurGANv2 inference

Fix some URLs and paths
Add README_zh-CN
Add the Deblurring task to mmagic/apis/inferencers/__init__.py

* Adding support for FastComposer

Support FastComposer

* Adding support for FastComposer

Add some doc string and fix bugs

* Adding support for FastComposer

Fixed a bug

* Adding support for FastComposer

fix a bug

* Adding support for FastComposer

fix some bugs

* Adding support for FastComposer

fix some bugs

* Adding support for FastComposer

fix a bug

* Adding support for FastComposer

fix a bug

* Adding support for FastComposer

change for minimum version cpu check

* Adding support for FastComposer

Avoid a Windows CI failure that complains about insufficient memory.

* Adding support for FastComposer

rerun circleci check

* Adding support for FastComposer

Add example code that runs without Gradio to the README
Add a CLIP config for running unit tests without using "clip_vit_url = 'openai/clip-vit-large-patch14'"

* Adding support for FastComposer

rerun checks of build cu102

* Adding support for FastComposer

a small change

* Adding support for FastComposer

some small changes

* Adding support for FastComposer

add some simple instructions to demo/README.md

* Adding support for FastComposer

resolve conflicts

* Adding support for FastComposer

rerun checks

* Adding support for FastComposer

Add device for running with CUDA by default

* Adding support for Consistency Models

* Adding support for Consistency Models

* Update README.md

* Adding support for Consistency Models

* Adding support for Consistency Models

mdformat debug

* Adding support for Consistency Models

* Adding support for Consistency Models

Add some docstrings

* Adding support for Consistency Models

Re-run CircleCI check

* Adding support for Consistency Models

Re-run CircleCI check

* [FIX] Check CircleCI memory

Add function teardown_module to test_fastcomposer

* Adding support for Consistency Models

rerun ci check

* Add new configuration files for StyleGAN2

* Revert "Add new configuration files for StyleGAN2"

This reverts commit a043b22.

* Add new configuration files for StyleGAN2

* Fix config-validate error

* fix a bug

* Delete the Consistency Models code

* Delete code that is in another PR

* Delete code that is in another PR

* Add new configuration files for StyleGAN2

rerun ci check

* ci check memory

---------

Co-authored-by: Yanhong Zeng <[email protected]>
Co-authored-by: rangoliu <[email protected]>
3 people authored Dec 11, 2023
1 parent 73d7b01 commit cd183d9
Showing 10 changed files with 619 additions and 0 deletions.
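
As a companion to the DeblurGANv2 demo command quoted in the commit log above, the following is a minimal Python sketch of the same inference call through MMagicInferencer. The keyword arguments are assumed to mirror the demo script's CLI flags, and the config, checkpoint, and image paths are placeholders, not files verified in this commit.

from mmagic.apis import MMagicInferencer

# Assumed mapping from the demo script's flags (--model-name, --model-config,
# --model-ckpt, --device) to constructor arguments; all paths are placeholders.
editor = MMagicInferencer(
    model_name='deblurganv2',
    model_config='configs/deblurganv2/deblurganv2_fpn_inception.py',
    model_ckpt='path/to/deblurganv2_weights.pth',  # hypothetical checkpoint
    device='cpu')

# Deblur a single image and write the result to the same ./out.png used by the demo.
editor.infer(img='path/to/blurry_input.png', result_out_dir='./out.png')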
@@ -0,0 +1,89 @@
# Copyright (c) OpenMMLab. All rights reserved.
from mmengine.config import read_base
from torch.optim import Adam

from mmagic.engine import VisualizationHook
from mmagic.evaluation import (FrechetInceptionDistance, PerceptualPathLength,
                               PrecisionAndRecall)
from mmagic.models import BaseGAN

with read_base():
    from .._base_.datasets.ffhq_flip import *  # noqa: F403,F405
    from .._base_.gen_default_runtime import *  # noqa: F403,F405
    from .._base_.models.base_styleganv2 import *  # noqa: F403,F405

# reg params
d_reg_interval = 16
g_reg_interval = 4

g_reg_ratio = g_reg_interval / (g_reg_interval + 1)
d_reg_ratio = d_reg_interval / (d_reg_interval + 1)

ema_half_life = 10.  # G_smoothing_kimg

model.update(
    generator=dict(out_size=256),
    discriminator=dict(in_size=256),
    ema_config=dict(
        type=ExponentialMovingAverage,
        interval=1,
        momentum=1. - (0.5**(32. / (ema_half_life * 1000.)))),
    loss_config=dict(
        r1_loss_weight=10. / 2. * d_reg_interval,
        r1_interval=d_reg_interval,
        norm_mode='HWC',
        g_reg_interval=g_reg_interval,
        g_reg_weight=2. * g_reg_interval,
        pl_batch_shrink=2))

train_cfg.update(max_iters=800002)

optim_wrapper.update(
    generator=dict(
        optimizer=dict(
            type=Adam, lr=0.002 * g_reg_ratio, betas=(0, 0.99**g_reg_ratio))),
    discriminator=dict(
        optimizer=dict(
            type=Adam, lr=0.002 * d_reg_ratio, betas=(0, 0.99**d_reg_ratio))))

batch_size = 4
data_root = './data/ffhq/ffhq_imgs/ffhq_256'

train_dataloader.update(
    batch_size=batch_size, dataset=dict(data_root=data_root))

val_dataloader.update(batch_size=batch_size, dataset=dict(data_root=data_root))

test_dataloader.update(
    batch_size=batch_size, dataset=dict(data_root=data_root))

# VIS_HOOK
custom_hooks = [
    dict(
        type=VisualizationHook,
        interval=5000,
        fixed_input=True,
        vis_kwargs_list=dict(type=BaseGAN, name='fake_img'))
]

# METRICS
metrics = [
    dict(
        type=FrechetInceptionDistance,
        prefix='FID-50k',
        fake_nums=50000,
        real_nums=50000,
        inception_style='StyleGAN',
        sample_model='ema'),
    dict(type=PrecisionAndRecall, fake_nums=50000, prefix='PR-50K'),
    dict(type=PerceptualPathLength, fake_nums=50000, prefix='ppl-w')
]
# NOTE: config for save multi best checkpoints
# default_hooks.update(
#     checkpoint=dict(
#         save_best=['FID-Full-50k/fid', 'IS-50k/is'],
#         rule=['less', 'greater']))
default_hooks.update(checkpoint=dict(save_best='FID-50k/fid'))

val_evaluator.update(metrics=metrics)
test_evaluator.update(metrics=metrics)
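
The optimizer and EMA numbers in this config follow StyleGAN2's lazy-regularization bookkeeping: when a regularizer runs only every N iterations, the Adam learning rate and beta2 are rescaled by N / (N + 1). A small worked sketch of the values implied by the formulas above (nothing here beyond the config's own arithmetic):

# Worked numbers for the lazy-regularization schedule used in the config above.
g_reg_interval, d_reg_interval = 4, 16

g_reg_ratio = g_reg_interval / (g_reg_interval + 1)  # 0.8
d_reg_ratio = d_reg_interval / (d_reg_interval + 1)  # ~0.941

print(0.002 * g_reg_ratio, 0.99**g_reg_ratio)  # generator lr ~0.0016, beta2 ~0.992
print(0.002 * d_reg_ratio, 0.99**d_reg_ratio)  # discriminator lr ~0.00188, beta2 ~0.9906

# EMA momentum for a 10 kimg half-life, assuming 32 images consumed per
# update (the constant hard-coded in the config).
ema_half_life = 10.
print(1. - 0.5**(32. / (ema_half_life * 1000.)))  # ~0.00222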
@@ -0,0 +1,88 @@
# Copyright (c) OpenMMLab. All rights reserved.
from mmengine.config import read_base
from torch.optim import Adam

from mmagic.engine import VisualizationHook
from mmagic.evaluation import (FrechetInceptionDistance, PerceptualPathLength,
                               PrecisionAndRecall)
from mmagic.models import BaseGAN

with read_base():
    from .._base_.datasets.lsun_stylegan import *  # noqa: F403,F405
    from .._base_.gen_default_runtime import *  # noqa: F403,F405
    from .._base_.models.base_styleganv2 import *  # noqa: F403,F405

# reg params
d_reg_interval = 16
g_reg_interval = 4

g_reg_ratio = g_reg_interval / (g_reg_interval + 1)
d_reg_ratio = d_reg_interval / (d_reg_interval + 1)

ema_half_life = 10.  # G_smoothing_kimg

model.update(
    generator=dict(out_size=256),
    discriminator=dict(in_size=256),
    ema_config=dict(
        type=ExponentialMovingAverage,
        interval=1,
        momentum=1. - (0.5**(32. / (ema_half_life * 1000.)))),
    loss_config=dict(
        r1_loss_weight=10. / 2. * d_reg_interval,
        r1_interval=d_reg_interval,
        norm_mode='HWC',
        g_reg_interval=g_reg_interval,
        g_reg_weight=2. * g_reg_interval,
        pl_batch_shrink=2))

train_cfg.update(max_iters=800002)

optim_wrapper.update(
    generator=dict(
        optimizer=dict(
            type=Adam, lr=0.002 * g_reg_ratio, betas=(0, 0.99**g_reg_ratio))),
    discriminator=dict(
        optimizer=dict(
            type=Adam, lr=0.002 * d_reg_ratio, betas=(0, 0.99**d_reg_ratio))))

batch_size = 4
data_root = './data/lsun-cat'

train_dataloader.update(
    batch_size=batch_size, dataset=dict(data_root=data_root))

val_dataloader.update(batch_size=batch_size, dataset=dict(data_root=data_root))

test_dataloader.update(
    batch_size=batch_size, dataset=dict(data_root=data_root))

# VIS_HOOK
custom_hooks = [
    dict(
        type=VisualizationHook,
        interval=5000,
        fixed_input=True,
        vis_kwargs_list=dict(type=BaseGAN, name='fake_img'))
]

# METRICS
metrics = [
    dict(
        type=FrechetInceptionDistance,
        prefix='FID-Full-50k',
        fake_nums=50000,
        inception_style='StyleGAN',
        sample_model='ema'),
    dict(type=PrecisionAndRecall, fake_nums=50000, prefix='PR-50K'),
    dict(type=PerceptualPathLength, fake_nums=50000, prefix='ppl-w')
]
# NOTE: config for save multi best checkpoints
# default_hooks.update(
#     checkpoint=dict(
#         save_best=['FID-Full-50k/fid', 'IS-50k/is'],
#         rule=['less', 'greater']))
default_hooks.update(checkpoint=dict(save_best='FID-Full-50k/fid'))

val_evaluator.update(metrics=metrics)
test_evaluator.update(metrics=metrics)
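
Because these are MMEngine "new-style" Python configs (note the read_base() block), they can be loaded and inspected like any other config. A rough sketch, assuming an mmengine version with new-config support; the file path is a placeholder rather than the actual filename added by this commit:

from mmengine.config import Config

# Hypothetical path; the real filenames are not visible in this diff view.
cfg = Config.fromfile('mmagic/configs/styleganv2/stylegan2_lsun_cat_256.py')

print(cfg.train_cfg.max_iters)                            # 800002
print(cfg.optim_wrapper['generator']['optimizer']['lr'])  # 0.002 * 4 / 5 = 0.0016
print(cfg.metrics[0]['prefix'])                           # 'FID-Full-50k'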
@@ -0,0 +1,88 @@
# Copyright (c) OpenMMLab. All rights reserved.
from mmengine.config import read_base
from torch.optim import Adam

from mmagic.engine import VisualizationHook
from mmagic.evaluation import (FrechetInceptionDistance, PerceptualPathLength,
                               PrecisionAndRecall)
from mmagic.models import BaseGAN

with read_base():
    from .._base_.datasets.lsun_stylegan import *  # noqa: F403,F405
    from .._base_.gen_default_runtime import *  # noqa: F403,F405
    from .._base_.models.base_styleganv2 import *  # noqa: F403,F405

# reg params
d_reg_interval = 16
g_reg_interval = 4

g_reg_ratio = g_reg_interval / (g_reg_interval + 1)
d_reg_ratio = d_reg_interval / (d_reg_interval + 1)

ema_half_life = 10.  # G_smoothing_kimg

model.update(
    generator=dict(out_size=256),
    discriminator=dict(in_size=256),
    ema_config=dict(
        type=ExponentialMovingAverage,
        interval=1,
        momentum=1. - (0.5**(32. / (ema_half_life * 1000.)))),
    loss_config=dict(
        r1_loss_weight=10. / 2. * d_reg_interval,
        r1_interval=d_reg_interval,
        norm_mode='HWC',
        g_reg_interval=g_reg_interval,
        g_reg_weight=2. * g_reg_interval,
        pl_batch_shrink=2))

train_cfg.update(max_iters=800002)

optim_wrapper.update(
    generator=dict(
        optimizer=dict(
            type=Adam, lr=0.002 * g_reg_ratio, betas=(0, 0.99**g_reg_ratio))),
    discriminator=dict(
        optimizer=dict(
            type=Adam, lr=0.002 * d_reg_ratio, betas=(0, 0.99**d_reg_ratio))))

batch_size = 4
data_root = './data/lsun-church'

train_dataloader.update(
    batch_size=batch_size, dataset=dict(data_root=data_root))

val_dataloader.update(batch_size=batch_size, dataset=dict(data_root=data_root))

test_dataloader.update(
    batch_size=batch_size, dataset=dict(data_root=data_root))

# VIS_HOOK
custom_hooks = [
    dict(
        type=VisualizationHook,
        interval=5000,
        fixed_input=True,
        vis_kwargs_list=dict(type=BaseGAN, name='fake_img'))
]

# METRICS
metrics = [
    dict(
        type=FrechetInceptionDistance,
        prefix='FID-Full-50k',
        fake_nums=50000,
        inception_style='StyleGAN',
        sample_model='ema'),
    dict(type=PrecisionAndRecall, fake_nums=50000, prefix='PR-50K'),
    dict(type=PerceptualPathLength, fake_nums=50000, prefix='ppl-w')
]
# NOTE: config for save multi best checkpoints
# default_hooks.update(
#     checkpoint=dict(
#         save_best=['FID-Full-50k/fid', 'IS-50k/is'],
#         rule=['less', 'greater']))
default_hooks.update(checkpoint=dict(save_best='FID-Full-50k/fid'))

val_evaluator.update(metrics=metrics)
test_evaluator.update(metrics=metrics)
@@ -0,0 +1,88 @@
# Copyright (c) OpenMMLab. All rights reserved.
from mmengine.config import read_base
from torch.optim import Adam

from mmagic.engine import VisualizationHook
from mmagic.evaluation import (FrechetInceptionDistance, PerceptualPathLength,
                               PrecisionAndRecall)
from mmagic.models import BaseGAN

with read_base():
    from .._base_.datasets.lsun_stylegan import *  # noqa: F403,F405
    from .._base_.gen_default_runtime import *  # noqa: F403,F405
    from .._base_.models.base_styleganv2 import *  # noqa: F403,F405

# reg params
d_reg_interval = 16
g_reg_interval = 4

g_reg_ratio = g_reg_interval / (g_reg_interval + 1)
d_reg_ratio = d_reg_interval / (d_reg_interval + 1)

ema_half_life = 10.  # G_smoothing_kimg

model.update(
    generator=dict(out_size=256),
    discriminator=dict(in_size=256),
    ema_config=dict(
        type=ExponentialMovingAverage,
        interval=1,
        momentum=1. - (0.5**(32. / (ema_half_life * 1000.)))),
    loss_config=dict(
        r1_loss_weight=10. / 2. * d_reg_interval,
        r1_interval=d_reg_interval,
        norm_mode='HWC',
        g_reg_interval=g_reg_interval,
        g_reg_weight=2. * g_reg_interval,
        pl_batch_shrink=2))

train_cfg.update(max_iters=800002)

optim_wrapper.update(
    generator=dict(
        optimizer=dict(
            type=Adam, lr=0.002 * g_reg_ratio, betas=(0, 0.99**g_reg_ratio))),
    discriminator=dict(
        optimizer=dict(
            type=Adam, lr=0.002 * d_reg_ratio, betas=(0, 0.99**d_reg_ratio))))

batch_size = 4
data_root = './data/lsun-horse'

train_dataloader.update(
    batch_size=batch_size, dataset=dict(data_root=data_root))

val_dataloader.update(batch_size=batch_size, dataset=dict(data_root=data_root))

test_dataloader.update(
    batch_size=batch_size, dataset=dict(data_root=data_root))

# VIS_HOOK
custom_hooks = [
    dict(
        type=VisualizationHook,
        interval=5000,
        fixed_input=True,
        vis_kwargs_list=dict(type=BaseGAN, name='fake_img'))
]

# METRICS
metrics = [
    dict(
        type=FrechetInceptionDistance,
        prefix='FID-Full-50k',
        fake_nums=50000,
        inception_style='StyleGAN',
        sample_model='ema'),
    dict(type=PrecisionAndRecall, fake_nums=50000, prefix='PR-50K'),
    dict(type=PerceptualPathLength, fake_nums=50000, prefix='ppl-w')
]
# NOTE: config for save multi best checkpoints
# default_hooks.update(
#     checkpoint=dict(
#         save_best=['FID-Full-50k/fid', 'IS-50k/is'],
#         rule=['less', 'greater']))
default_hooks.update(checkpoint=dict(save_best='FID-Full-50k/fid'))

val_evaluator.update(metrics=metrics)
test_evaluator.update(metrics=metrics)
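
For completeness, a rough sketch of how one of these configs could drive training or evaluation through the MMEngine Runner. The repository's supported entry points are its own tools/train.py and tools/test.py scripts, so this is only an illustration; the config and checkpoint paths are placeholders.

from mmengine.config import Config
from mmengine.runner import Runner

# Hypothetical config path for the LSUN-Horse file above.
cfg = Config.fromfile('mmagic/configs/styleganv2/stylegan2_lsun_horse_256.py')
cfg.work_dir = './work_dirs/stylegan2_lsun_horse_256'  # logs and checkpoints go here

runner = Runner.from_cfg(cfg)
runner.train()  # runs for train_cfg.max_iters iterations

# To only evaluate an existing checkpoint against the metrics defined above:
# cfg.load_from = 'path/to/checkpoint.pth'  # hypothetical checkpoint
# Runner.from_cfg(cfg).test()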