Optimization-related configuration is now all managed by `optim_wrapper`, which usually has three fields: `optimizer`, `paramwise_cfg` and `clip_grad`. Please refer to OptimWrapper for more details. See the example below, where `AdamW` is used as the optimizer, the learning rate of the backbone is reduced by a factor of 10, and gradient clipping is added.
```python
optim_wrapper = dict(
    type='OptimWrapper',
    # optimizer
    optimizer=dict(
        type='AdamW',
        lr=0.0001,
        weight_decay=0.05,
        eps=1e-8,
        betas=(0.9, 0.999)),
    # Parameter-level learning rate and weight decay settings
    paramwise_cfg=dict(
        custom_keys={
            'backbone': dict(lr_mult=0.1, decay_mult=1.0),
        },
        norm_decay_mult=0.0),
    # gradient clipping
    clip_grad=dict(max_norm=0.01, norm_type=2))
```
We already support all the optimizers implemented by PyTorch; the only modification needed is to change the `optimizer` field of `optim_wrapper` in the config files. For example, if you want to use `Adam` (note that the performance could drop a lot), the modification could be as follows:
```python
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='Adam', lr=0.0003, weight_decay=0.0001))
```
To modify the learning rate of the model, users only need to modify `lr` in `optimizer`. Users can directly set arguments following the API doc of PyTorch.
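For instance, any argument accepted by the corresponding PyTorch optimizer can be passed straight through the config. A small sketch using `torch.optim.RMSprop` (the values here are illustrative, not a recommended setting):

```python
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(
        type='RMSprop',
        lr=0.001,
        alpha=0.99,        # any keyword accepted by torch.optim.RMSprop
        momentum=0.9,
        weight_decay=0.0001))
```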
A customized optimizer could be defined as follows:

Assume you want to add an optimizer named `MyOptimizer`, which has arguments `a`, `b`, and `c`. You need to create a new directory named `mmdet3d/engine/optimizers`, and then implement the new optimizer in a file, e.g., in `mmdet3d/engine/optimizers/my_optimizer.py`:
```python
from torch.optim import Optimizer

from mmdet3d.registry import OPTIMIZERS


@OPTIMIZERS.register_module()
class MyOptimizer(Optimizer):

    def __init__(self, a, b, c):
        pass
```
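The skeleton above only registers the class. When the runner builds the optimizer it also passes the model parameters, so a real implementation should accept `params` and implement `step` following `torch.optim.Optimizer`'s interface. A minimal, purely illustrative sketch, assuming `a` acts as the step size while `b` and `c` are only stored:

```python
import torch
from torch.optim import Optimizer

from mmdet3d.registry import OPTIMIZERS


@OPTIMIZERS.register_module()
class MyOptimizer(Optimizer):

    def __init__(self, params, a, b, c):
        # `a`, `b` and `c` are the custom hyper-parameters from the example;
        # keeping them in `defaults` makes them available per param group.
        defaults = dict(a=a, b=b, c=c)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is not None:
                    # A plain gradient step scaled by `a` (illustration only).
                    p.add_(p.grad, alpha=-group['a'])
        return loss
```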
To use the module defined above, it should first be imported into the main namespace. There are two options to achieve this.
- Modify `mmdet3d/engine/optimizers/__init__.py` to import it.

  The newly defined module should be imported in `mmdet3d/engine/optimizers/__init__.py` so that the registry will find the new module and add it:

  ```python
  from .my_optimizer import MyOptimizer
  ```
- Use `custom_imports` in the config to manually import it.

  ```python
  custom_imports = dict(imports=['mmdet3d.engine.optimizers.my_optimizer'], allow_failed_imports=False)
  ```

  The module `mmdet3d.engine.optimizers.my_optimizer` will be imported at the beginning of the program and the class `MyOptimizer` is then automatically registered. Note that only the package containing the class `MyOptimizer` should be imported. `mmdet3d.engine.optimizers.my_optimizer.MyOptimizer` cannot be imported directly.

  Actually users can use a totally different file directory structure with this importing method, as long as the module root is located in `PYTHONPATH`.
Then you can use `MyOptimizer` in the `optimizer` field of `optim_wrapper` in config files. In the configs, the optimizers are defined by the field `optimizer` like the following:
```python
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001))
```
To use your own optimizer, the field can be changed to:
```python
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='MyOptimizer', a=a_value, b=b_value, c=c_value))
```
Some models may have parameter-specific settings for optimization, e.g., weight decay for BatchNorm layers. Users can do this fine-grained parameter tuning by customizing the optimizer wrapper constructor.
```python
from typing import Optional

import torch.nn as nn
from mmengine.optim import DefaultOptimWrapperConstructor, OptimWrapper

from mmdet3d.registry import OPTIM_WRAPPER_CONSTRUCTORS
from .my_optimizer import MyOptimizer


@OPTIM_WRAPPER_CONSTRUCTORS.register_module()
class MyOptimizerWrapperConstructor(DefaultOptimWrapperConstructor):

    def __init__(self,
                 optim_wrapper_cfg: dict,
                 paramwise_cfg: Optional[dict] = None):
        pass

    def __call__(self, model: nn.Module) -> OptimWrapper:
        # Build `optim_wrapper` from the model parameters here.
        return optim_wrapper
```
The default optimizer wrapper constructor is implemented here, which could also serve as a template for new optimizer wrapper constructors.
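To make the runner use the customized constructor, it can be referenced through the `constructor` field of `optim_wrapper`. A sketch, reusing the example class and placeholder argument values from above:

```python
optim_wrapper = dict(
    type='OptimWrapper',
    constructor='MyOptimizerWrapperConstructor',
    optimizer=dict(type='MyOptimizer', a=a_value, b=b_value, c=c_value))
```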
Tricks not implemented by the optimizer should be implemented through the optimizer wrapper constructor (e.g., setting parameter-wise learning rates) or hooks. We list some common settings that could stabilize or accelerate the training. Feel free to create a PR or issue for more settings.
- Use gradient clip to stabilize training: Some models need gradient clip to clip the gradients to stabilize the training process. An example is as below:

  ```python
  optim_wrapper = dict(
      _delete_=True, clip_grad=dict(max_norm=35, norm_type=2))
  ```

  If your config inherits the base config which already sets the `optim_wrapper`, you might need `_delete_=True` to override the unnecessary settings. See the config documentation for more details.
- Use momentum schedule to accelerate model convergence: We support the momentum scheduler to modify the model's momentum according to the learning rate, which could make the model converge faster. The momentum scheduler is usually used together with the LR scheduler; for example, the following config is used in 3D detection to accelerate convergence. For more details, please refer to the implementation of CosineAnnealingLR and CosineAnnealingMomentum.

  ```python
  param_scheduler = [
      # learning rate scheduler
      # During the first 8 epochs, learning rate increases from 0 to lr * 10
      # during the next 12 epochs, learning rate decreases from lr * 10 to lr * 1e-4
      dict(
          type='CosineAnnealingLR',
          T_max=8,
          eta_min=lr * 10,
          begin=0,
          end=8,
          by_epoch=True,
          convert_to_iter_based=True),
      dict(
          type='CosineAnnealingLR',
          T_max=12,
          eta_min=lr * 1e-4,
          begin=8,
          end=20,
          by_epoch=True,
          convert_to_iter_based=True),
      # momentum scheduler
      # During the first 8 epochs, momentum increases from 0 to 0.85 / 0.95
      # during the next 12 epochs, momentum increases from 0.85 / 0.95 to 1
      dict(
          type='CosineAnnealingMomentum',
          T_max=8,
          eta_min=0.85 / 0.95,
          begin=0,
          end=8,
          by_epoch=True,
          convert_to_iter_based=True),
      dict(
          type='CosineAnnealingMomentum',
          T_max=12,
          eta_min=1,
          begin=8,
          end=20,
          by_epoch=True,
          convert_to_iter_based=True)
  ]
  ```
By default we use a step learning rate with the 1x schedule; this calls `MultiStepLR` in MMEngine.
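For reference, the default 1x step schedule expressed as a param scheduler typically looks like the sketch below; the warmup settings and milestones shown here are the common 12-epoch defaults and may differ in your base config:

```python
param_scheduler = [
    # linear warmup over the first 1000 iterations (assumed default)
    dict(type='LinearLR', start_factor=0.001, by_epoch=False, begin=0, end=1000),
    # decay the learning rate by 10x at epoch 8 and epoch 11
    dict(
        type='MultiStepLR',
        begin=0,
        end=12,
        by_epoch=True,
        milestones=[8, 11],
        gamma=0.1)
]
```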
We support many other learning rate schedules here, such as `CosineAnnealingLR` and `PolyLR`. Here are some examples:
- Poly schedule:

  ```python
  param_scheduler = [
      dict(
          type='PolyLR',
          power=0.9,
          eta_min=1e-4,
          begin=0,
          end=8,
          by_epoch=True)]
  ```
- CosineAnnealing schedule:

  ```python
  param_scheduler = [
      dict(
          type='CosineAnnealingLR',
          T_max=8,
          eta_min=lr * 1e-5,
          begin=0,
          end=8,
          by_epoch=True)]
  ```
By default, `EpochBasedTrainLoop` is used in `train_cfg` and validation is done after every train epoch, as follows:

```python
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=12, val_begin=1, val_interval=1)
```
Actually, both `IterBasedTrainLoop` and `EpochBasedTrainLoop` support dynamic intervals, see the following example:
```python
# Before the 365001th iteration, we do evaluation every 5000 iterations.
# After the 365000th iteration, we do evaluation every 368750 iterations,
# which means that we do evaluation at the end of training.
interval = 5000
max_iters = 368750
dynamic_intervals = [(max_iters // interval * interval + 1, max_iters)]
train_cfg = dict(
    type='IterBasedTrainLoop',
    max_iters=max_iters,
    val_interval=interval,
    dynamic_intervals=dynamic_intervals)
```
MMEngine provides many useful hooks, but there are some occasions when users might need to implement a new hook. MMDetection3D supports customized hooks in training based on MMEngine after v1.1.0rc0. Thus users can implement a hook directly in mmdet3d or their mmdet3d-based codebases and use the hook by only modifying the config in training. Here we give an example of creating a new hook in mmdet3d and using it in training.
```python
from typing import Optional, Union

from mmengine.hooks import Hook

from mmdet3d.registry import HOOKS

DATA_BATCH = Optional[Union[dict, tuple, list]]


@HOOKS.register_module()
class MyHook(Hook):

    def __init__(self, a, b):
        pass

    def before_run(self, runner) -> None:
        pass

    def after_run(self, runner) -> None:
        pass

    def before_train(self, runner) -> None:
        pass

    def after_train(self, runner) -> None:
        pass

    def before_train_epoch(self, runner) -> None:
        pass

    def after_train_epoch(self, runner) -> None:
        pass

    def before_train_iter(self,
                          runner,
                          batch_idx: int,
                          data_batch: DATA_BATCH = None) -> None:
        pass

    def after_train_iter(self,
                         runner,
                         batch_idx: int,
                         data_batch: DATA_BATCH = None,
                         outputs: Optional[dict] = None) -> None:
        pass
```
Depending on the functionality of the hook, users need to specify what the hook will do at each stage of the training in `before_run`, `after_run`, `before_train`, `after_train`, `before_train_epoch`, `after_train_epoch`, `before_train_iter`, and `after_train_iter`. There are more points where hooks can be inserted; refer to the base hook class for more details.
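A hook only needs to override the stages it cares about. As a purely illustrative sketch (the hook name and the NaN check are assumptions for this example, not an existing MMDetection3D hook), a hook that validates the loss every few iterations could look like:

```python
import torch
from mmengine.hooks import Hook

from mmdet3d.registry import HOOKS


@HOOKS.register_module()
class CheckLossFiniteHook(Hook):
    """Illustrative hook: assert the training loss stays finite."""

    def __init__(self, interval: int = 50):
        self.interval = interval

    def after_train_iter(self, runner, batch_idx, data_batch=None, outputs=None) -> None:
        # `outputs` holds the dict of losses returned by the model at this iteration.
        if self.every_n_train_iters(runner, self.interval) and outputs is not None:
            assert torch.isfinite(outputs['loss']), 'loss becomes infinite or NaN!'
```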
Then we need to make `MyHook` imported. Assuming the file is in `mmdet3d/engine/hooks/my_hook.py`, there are two ways to do that:
- Modify `mmdet3d/engine/hooks/__init__.py` to import it.

  The newly defined module should be imported in `mmdet3d/engine/hooks/__init__.py` so that the registry will find the new module and add it:

  ```python
  from .my_hook import MyHook
  ```
- Use `custom_imports` in the config to manually import it.

  ```python
  custom_imports = dict(imports=['mmdet3d.engine.hooks.my_hook'], allow_failed_imports=False)
  ```

Then register the hook in the config through `custom_hooks`:

```python
custom_hooks = [
    dict(type='MyHook', a=a_value, b=b_value)
]
```
You can also set the priority of the hook by adding the key `priority` with value `'NORMAL'` or `'HIGHEST'` as below:
```python
custom_hooks = [
    dict(type='MyHook', a=a_value, b=b_value, priority='NORMAL')
]
```
By default the hook's priority is set as `NORMAL` during registration.
If the hook is already implemented in MMDetection3D, you can directly modify the config to use the hook as below.

We implement a customized hook named DisableObjectSampleHook to disable `ObjectSample` augmentation during training after a specified epoch. We can set it in the config file if needed:

```python
custom_hooks = [dict(type='DisableObjectSampleHook', disable_after_epoch=15)]
```
There are some common hooks that are registered through `default_hooks`, they are:

- `IterTimerHook`: A hook that logs 'data_time' for loading data and 'time' for a model training step.
- `LoggerHook`: A hook that collects logs from different components of `Runner` and writes them to terminal, JSON file, tensorboard, wandb, etc.
- `ParamSchedulerHook`: A hook that updates some hyper-parameters in the optimizer, e.g., learning rate and momentum.
- `CheckpointHook`: A hook that saves checkpoints periodically.
- `DistSamplerSeedHook`: A hook that sets the seed for sampler and batch_sampler.
- `Det3DVisualizationHook`: A hook used to visualize validation and testing process prediction results.
`IterTimerHook`, `ParamSchedulerHook` and `DistSamplerSeedHook` are simple and usually do not need to be modified, so here we reveal what we can do with `LoggerHook`, `CheckpointHook` and `Det3DVisualizationHook`.
Besides saving checkpoints periodically, `CheckpointHook` provides other options such as `max_keep_ckpts`, `save_optimizer`, etc. Users can set `max_keep_ckpts` to only save a small number of checkpoints or decide whether to store the state dict of the optimizer by `save_optimizer`. More details of the arguments are here.
```python
default_hooks = dict(
    checkpoint=dict(
        type='CheckpointHook',
        interval=1,
        max_keep_ckpts=3,
        save_optimizer=True))
```
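`CheckpointHook` can also keep the checkpoint that performs best on a validation metric via `save_best`. A sketch; which metric `'auto'` resolves to depends on the evaluator configured for your dataset:

```python
default_hooks = dict(
    checkpoint=dict(
        type='CheckpointHook',
        interval=1,
        max_keep_ckpts=3,
        # 'auto' lets MMEngine pick the first metric reported by the evaluator
        save_best='auto'))
```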
The `LoggerHook` enables setting intervals. Detailed instructions can be found in the docstring.

```python
default_hooks = dict(logger=dict(type='LoggerHook', interval=50))
```
`Det3DVisualizationHook` uses `Det3DLocalVisualizer` to visualize prediction results, and `Det3DLocalVisualizer` currently supports different backends, e.g., `TensorboardVisBackend` and `WandbVisBackend` (see docstring for more details). Users can add multiple backends to do visualization as follows.
```python
default_hooks = dict(
    visualization=dict(type='Det3DVisualizationHook', draw=True))

vis_backends = [dict(type='LocalVisBackend'),
                dict(type='TensorboardVisBackend')]
visualizer = dict(
    type='Det3DLocalVisualizer', vis_backends=vis_backends, name='visualizer')
```
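If you also want to stream the visualizations to Weights & Biases, the `WandbVisBackend` mentioned above can be appended to the backend list in the same way; this sketch assumes wandb is installed and logged in:

```python
vis_backends = [dict(type='LocalVisBackend'),
                dict(type='TensorboardVisBackend'),
                dict(type='WandbVisBackend')]
visualizer = dict(
    type='Det3DLocalVisualizer', vis_backends=vis_backends, name='visualizer')
```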