mmdet 5: Customize Runtime Settings
We already support all the optimizers implemented by PyTorch, and the only modification needed is to change the optimizer field of config files. For example, if you want to use Adam (note that the performance could drop a lot), the modification could be as follows.

```python
optimizer = dict(type='Adam', lr=0.0003, weight_decay=0.0001)
```
To modify the learning rate of the model, the users only need to modify the lr in the config of the optimizer. The users can directly set arguments following the API doc of PyTorch.
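Because the config dict maps directly onto the optimizer's constructor, any keyword from the PyTorch API can be set the same way. For instance, torch.optim.Adam also accepts betas and eps; the values below are illustrative, not tuned:

```python
# Any keyword accepted by torch.optim.Adam can be passed through the config.
# The values below are illustrative defaults, not tuned settings.
optimizer = dict(
    type='Adam',
    lr=0.0003,
    betas=(0.9, 0.999),   # coefficients for the running averages of gradients
    eps=1e-08,            # term added to denominators for numerical stability
    weight_decay=0.0001)
```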
A customized optimizer could be defined as follows. Assume you want to add an optimizer named MyOptimizer, which has arguments a, b, and c. You need to create a new directory named mmdet/core/optimizer, and then implement the new optimizer in a file, e.g., in mmdet/core/optimizer/my_optimizer.py:
```python
from .registry import OPTIMIZERS
from torch.optim import Optimizer


@OPTIMIZERS.register_module()
class MyOptimizer(Optimizer):

    def __init__(self, a, b, c):
        # initialize the optimizer with its hyperparameters here
        ...
```
For the registry to find the module defined above, it should first be imported into the main namespace. There are two options to achieve this.
- Modify mmdet/core/optimizer/__init__.py to import it. The newly defined module should be imported in mmdet/core/optimizer/__init__.py so that the registry will find the new module and add it:

```python
from .my_optimizer import MyOptimizer
```
- Use custom_imports in the config to manually import it:

```python
custom_imports = dict(imports=['mmdet.core.optimizer.my_optimizer'], allow_failed_imports=False)
```
The module mmdet.core.optimizer.my_optimizer will then be imported at the beginning of the program, and the class MyOptimizer is automatically registered. Note that only the package containing the class MyOptimizer should be imported; mmdet.core.optimizer.my_optimizer.MyOptimizer cannot be imported directly. Actually, users can use a totally different file directory structure with this importing method, as long as the module root can be located in PYTHONPATH.
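The registration-on-import mechanism can be illustrated with a minimal stand-in registry. This is a sketch of the idea only, not MMCV's actual Registry implementation:

```python
# Minimal stand-in showing why the defining module must be imported: the
# @register_module() decorator runs at import time and fills the registry.
# This sketches the mechanism only, not MMCV's actual Registry class.
class Registry:
    def __init__(self):
        self._module_dict = {}

    def register_module(self):
        def _register(cls):
            self._module_dict[cls.__name__] = cls  # registered by class name
            return cls
        return _register

    def get(self, name):
        return self._module_dict[name]

OPTIMIZERS = Registry()

@OPTIMIZERS.register_module()  # executes when this module is imported
class MyOptimizer:
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

opt = OPTIMIZERS.get('MyOptimizer')(a=1, b=2, c=3)
```

Until the module containing MyOptimizer is imported, the decorator never runs, which is exactly why both options above boil down to forcing an import.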
Then you can use MyOptimizer in the optimizer field of config files. In the configs, the optimizers are defined by the field optimizer like the following:

```python
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
```
To use your own optimizer, the field can be changed to:

```python
optimizer = dict(type='MyOptimizer', a=a_value, b=b_value, c=c_value)
```
Some models may have parameter-specific settings for optimization, e.g., weight decay for BatchNorm layers. The users can do that fine-grained parameter tuning through a customized optimizer constructor.
```python
from mmcv.utils import build_from_cfg

from mmcv.runner.optimizer import OPTIMIZER_BUILDERS, OPTIMIZERS
from mmdet.utils import get_root_logger
from .my_optimizer import MyOptimizer


@OPTIMIZER_BUILDERS.register_module()
class MyOptimizerConstructor(object):

    def __init__(self, optimizer_cfg, paramwise_cfg=None):
        pass

    def __call__(self, model):
        # build and return the optimizer from the model's parameters here
        ...
        return my_optimizer
```
The default optimizer constructor is implemented here, which could also serve as a template for new optimizer constructors.
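As a sketch of what such a constructor might do, the snippet below assigns per-parameter weight decay, skipping it for normalization layers. The helper name, the 'bn' substring check, and the stand-in parameter class are illustrative assumptions; a real constructor would iterate model.named_parameters() and build a torch optimizer from the resulting groups:

```python
# Sketch of parameter-wise settings: give normalization layers zero weight
# decay. FakeParam and the 'bn' name check are illustrative stand-ins; a real
# constructor would iterate model.named_parameters() and build a torch
# optimizer from the resulting parameter groups.
class FakeParam:
    pass

def build_param_groups(named_params, base_lr, weight_decay):
    groups = []
    for name, param in named_params:
        wd = 0.0 if 'bn' in name else weight_decay  # skip decay for BatchNorm
        groups.append(dict(params=[param], lr=base_lr, weight_decay=wd))
    return groups

named = [('conv1.weight', FakeParam()), ('bn1.weight', FakeParam())]
groups = build_param_groups(named, base_lr=0.02, weight_decay=0.0001)
```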
Tricks not implemented by the optimizer should be implemented through the optimizer constructor (e.g., setting parameter-wise learning rates) or hooks. We list some common settings that could stabilize or accelerate the training. Feel free to create a PR or an issue for more settings.
- Use gradient clip to stabilize training: Some models need gradient clip to clip the gradients to stabilize the training process. An example is as below:

```python
optimizer_config = dict(
    _delete_=True, grad_clip=dict(max_norm=35, norm_type=2))
```

  If your config inherits the base config which already sets optimizer_config, you might need _delete_=True to override the unnecessary settings. See the config documentation for more details.
- Use momentum schedule to accelerate model convergence: We support a momentum scheduler that modifies the model's momentum according to the learning rate, which could make the model converge faster. The momentum scheduler is usually used together with the LR scheduler; for example, the following config is used in 3D detection to accelerate convergence. For more details, please refer to the implementation of CyclicLrUpdater and CyclicMomentumUpdater.

```python
lr_config = dict(
    policy='cyclic',
    target_ratio=(10, 1e-4),
    cyclic_times=1,
    step_ratio_up=0.4,
)
momentum_config = dict(
    policy='cyclic',
    target_ratio=(0.85 / 0.95, 1),
    cyclic_times=1,
    step_ratio_up=0.4,
)
```
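Conceptually, grad_clip=dict(max_norm=35, norm_type=2) from the gradient-clip trick above asks the runner to rescale gradients whenever their global norm exceeds max_norm. A simplified sketch of that behavior (in practice the runner delegates to torch.nn.utils.clip_grad_norm_):

```python
# Simplified sketch of clipping by global L2 norm, mirroring what
# grad_clip=dict(max_norm=35, norm_type=2) requests. In practice the runner
# delegates to torch.nn.utils.clip_grad_norm_; gradients are plain floats
# here for illustration.
def clip_grad_norm(grads, max_norm, norm_type=2):
    total_norm = sum(abs(g) ** norm_type for g in grads) ** (1.0 / norm_type)
    if total_norm > max_norm:
        scale = max_norm / (total_norm + 1e-6)  # rescale down to max_norm
        grads = [g * scale for g in grads]
    return grads, total_norm

clipped, norm = clip_grad_norm([30.0, 40.0], max_norm=35)  # norm was 50.0
```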
By default we use a step learning rate with the 1x schedule; this calls StepLRHook in MMCV. We support many other learning rate schedules, such as the CosineAnnealing and Poly schedules. Here are some examples:
- Poly schedule:

```python
lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False)
```
- CosineAnnealing schedule:

```python
lr_config = dict(
    policy='CosineAnnealing',
    warmup='linear',
    warmup_iters=1000,
    warmup_ratio=1.0 / 10,
    min_lr_ratio=1e-5)
```
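For intuition, the poly policy decays the learning rate from the base value toward min_lr following (1 - progress) ** power. A small sketch of that formula (a simplified re-implementation for illustration, not MMCV's updater hook):

```python
# Sketch of the poly policy: the learning rate decays from base_lr to min_lr
# following (1 - progress) ** power. A simplified re-implementation for
# illustration, not MMCV's hook.
def poly_lr(base_lr, cur_iter, max_iters, power=0.9, min_lr=1e-4):
    progress = cur_iter / max_iters
    return (base_lr - min_lr) * (1 - progress) ** power + min_lr

start = poly_lr(0.02, cur_iter=0, max_iters=1000)   # == base_lr
end = poly_lr(0.02, cur_iter=1000, max_iters=1000)  # == min_lr
```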
Workflow is a list of (phase, epochs) to specify the running order and epochs. By default it is set to be
```python
workflow = [('train', 1)]
```
which means running 1 epoch for training. Sometimes the user may want to check some metrics (e.g., loss, accuracy) of the model on the validation set. In such a case, we can set the workflow as

```python
[('train', 1), ('val', 1)]
```

so that 1 epoch for training and 1 epoch for validation will be run iteratively.
Note:

- The parameters of the model will not be updated during the val epoch.
- Keyword total_epochs in the config only controls the number of training epochs and will not affect the validation workflow.
- Workflows [('train', 1), ('val', 1)] and [('train', 1)] will not change the behavior of EvalHook, because EvalHook is called by after_train_epoch and the validation workflow only affects hooks that are called through after_val_epoch. Therefore, the only difference between [('train', 1), ('val', 1)] and [('train', 1)] is that the runner will calculate losses on the validation set after each training epoch.
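The workflow semantics above can be sketched as a small loop: each (phase, epochs) tuple is executed in order, and the list repeats until the requested number of training epochs is reached. The runner below is a toy stand-in, not MMCV's EpochBasedRunner:

```python
# Toy sketch of how a runner consumes the workflow list: each (phase, epochs)
# tuple runs in order, and the list repeats until the requested number of
# training epochs has been executed. Not MMCV's EpochBasedRunner.
def run(workflow, total_epochs):
    executed = []
    trained = 0
    while trained < total_epochs:
        for phase, epochs in workflow:
            for _ in range(epochs):
                executed.append(phase)
                if phase == 'train':
                    trained += 1
    return executed
```

With workflow = [('train', 1), ('val', 1)] and total_epochs = 2, this yields train, val, train, val; with [('train', 1)] it is simply two training epochs.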
There are some occasions when the users might need to implement a new hook. MMDetection supports customized hooks in training (#3395) since v2.3.0. Thus the users could implement a hook directly in mmdet or their mmdet-based codebases and use the hook by only modifying the config in training. Before v2.3.0, the users needed to modify the code to get the hook registered before training starts. Here we give an example of creating a new hook in mmdet and using it in training.
```python
from mmcv.runner import HOOKS, Hook


@HOOKS.register_module()
class MyHook(Hook):

    def __init__(self, a, b):
        pass

    def before_run(self, runner):
        pass

    def after_run(self, runner):
        pass

    def before_epoch(self, runner):
        pass

    def after_epoch(self, runner):
        pass

    def before_iter(self, runner):
        pass

    def after_iter(self, runner):
        pass
```
Depending on the functionality of the hook, the users need to specify what the hook will do at each stage of the training in before_run, after_run, before_epoch, after_epoch, before_iter, and after_iter.
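As an illustration, here is a hypothetical hook that checks the loss after every iteration. The Hook base class and the runner are tiny stand-ins so the snippet runs on its own; in mmdet you would subclass mmcv.runner.Hook, register it with @HOOKS.register_module(), and read runner.outputs as the runner populates it:

```python
# A hypothetical loss-checking hook. Hook and FakeRunner are minimal stand-ins
# so the example is self-contained; in mmdet you would subclass
# mmcv.runner.Hook and register the class with @HOOKS.register_module().
import math

class Hook:  # stand-in for mmcv.runner.Hook
    pass

class CheckLossHook(Hook):
    def __init__(self, fail_fast=True):
        self.fail_fast = fail_fast

    def after_iter(self, runner):
        # The runner is assumed to expose the last loss in runner.outputs.
        if math.isnan(runner.outputs['loss']) and self.fail_fast:
            raise RuntimeError('loss became NaN')

class FakeRunner:
    def __init__(self, loss):
        self.outputs = {'loss': loss}

CheckLossHook().after_iter(FakeRunner(0.5))  # a finite loss passes silently
```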
Then we need to make MyHook imported. Assuming the file is mmdet/core/utils/my_hook.py, there are two ways to do that:
- Modify mmdet/core/utils/__init__.py to import it. The newly defined module should be imported in mmdet/core/utils/__init__.py so that the registry will find the new module and add it:

```python
from .my_hook import MyHook
```
- Use custom_imports in the config to manually import it:

```python
custom_imports = dict(imports=['mmdet.core.utils.my_hook'], allow_failed_imports=False)
```
Then modify the config to use the hook:

```python
custom_hooks = [
    dict(type='MyHook', a=a_value, b=b_value)
]
```
You can also set the priority of the hook by adding the key priority, set to 'NORMAL' or 'HIGHEST', as below:

```python
custom_hooks = [
    dict(type='MyHook', a=a_value, b=b_value, priority='NORMAL')
]
```
By default the hook’s priority is set as
NORMAL during registration.
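Under the hood, named priorities map to integers and hooks run in ascending order of that value. The sketch below illustrates the ordering; the numeric values mirror MMCV's Priority enum as far as we can tell, so treat them as an assumption to verify against your MMCV version:

```python
# Sketch of priority ordering: names map to integers and hooks run in
# ascending order. The numbers mirror MMCV's Priority enum (an assumption
# worth verifying against your MMCV version).
PRIORITY = {'HIGHEST': 0, 'HIGH': 30, 'NORMAL': 50, 'LOW': 70, 'VERY_LOW': 90}

hooks = [('TextLoggerHook', 'VERY_LOW'),
         ('MyHook', 'NORMAL'),
         ('UrgentHook', 'HIGHEST')]
ordered = [name for name, prio in sorted(hooks, key=lambda h: PRIORITY[h[1]])]
```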
If the hook is already implemented in MMCV, you can directly modify the config to use the hook as below
We implement a customized hook named NumClassCheckHook to check whether the num_classes in the head matches the length of CLASSES in the dataset. We set it in default_runtime.py:

```python
custom_hooks = [dict(type='NumClassCheckHook')]
```
There are some common hooks that are not registered through custom_hooks:

- log_config
- checkpoint_config
- evaluation
- lr_config
- optimizer_config
- momentum_config
In those hooks, only the logger hook has the VERY_LOW priority; the others' priority is NORMAL. The above-mentioned tutorials already cover how to modify optimizer_config, momentum_config, and lr_config. Here we reveal what we can do with log_config, checkpoint_config, and evaluation.
The MMCV runner will use checkpoint_config to initialize the CheckpointHook.

```python
checkpoint_config = dict(interval=1)
```
The users could set max_keep_ckpts to save only a small number of checkpoints, or decide whether to store the state dict of the optimizer by save_optimizer. More details of the arguments are here.
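The effect of max_keep_ckpts can be sketched as a simple rotation: after each save, the oldest checkpoints are dropped so at most max_keep_ckpts remain. The snippet simulates files with a list of names rather than touching disk:

```python
# Sketch of checkpoint rotation under max_keep_ckpts: after each save, drop
# the oldest entries so at most max_keep_ckpts remain. Checkpoints are
# simulated as a list of file names; a real hook also removes files on disk.
def save_checkpoint(saved, epoch, max_keep_ckpts=3):
    saved.append(f'epoch_{epoch}.pth')
    while len(saved) > max_keep_ckpts:
        saved.pop(0)  # oldest checkpoint is discarded first
    return saved

saved = []
for epoch in range(1, 6):
    save_checkpoint(saved, epoch)
```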
log_config wraps multiple logger hooks and enables setting intervals. Now MMCV supports several logger hooks, such as TextLoggerHook and TensorboardLoggerHook. The detailed usage can be found in the doc.

```python
log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook'),
        dict(type='TensorboardLoggerHook')
    ])
```
The config of evaluation will be used to initialize the EvalHook. Except for the key interval, other arguments such as metric will be passed to dataset.evaluate().

```python
evaluation = dict(interval=1, metric='bbox')
```
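The interval key gates how often evaluation runs; a sketch of that gating (the modulo check illustrates the idea, not EvalHook's exact code):

```python
# Sketch of EvalHook's interval gating: with interval=n, evaluation runs
# after every n-th training epoch. The modulo check is illustrative; extra
# keys such as metric would be forwarded to the dataset's evaluate() call.
evaluation = dict(interval=2, metric='bbox')

def should_evaluate(epoch, interval):
    return (epoch + 1) % interval == 0  # epochs counted from 0

eval_epochs = [e for e in range(6) if should_evaluate(e, evaluation['interval'])]
```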