
detectron2 vs. mmdetection

柯瀚海
2023-12-01

detectron2

Overall structure

The top-level directory layout of detectron2 is as follows.
configs: a collection of example config files for detection, segmentation, and other models, such as Faster R-CNN and Cascade R-CNN.
datasets: dataset preparation, mainly the expected layout of each dataset and how to preprocess it.
demo: a quick way to try Detectron2, matching the Getting Started docs. Use this if you want to reproduce results from the Model Zoo.
detectron2: the main project code lives here.
dev: scripts used by developers.
docker: Docker files; nothing special to note.
docs: official documentation.
projects: three projects built on Detectron2: DensePose / TensorMask / TridentNet.
tests: unit tests.
tools: commonly used scripts for training, benchmarking, visualizing datasets, and so on.

Config

detectron2 uses fvcore.common.config to manage its hyperparameters. The defaults for every module, covering INPUT, Dataset, DataLoader, FPN, the anchor generator, and so on, can be found in detectron2/config/defaults.py.
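A minimal sketch of the typical workflow: load the defaults, merge a yaml file, then override individual fields in code (the model-zoo config chosen here is just an example).

from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()  # starts from the defaults in detectron2/config/defaults.py
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.SOLVER.BASE_LR = 0.001  # any hyperparameter can be overridden in code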

Registry mechanism

detectron2 builds each module through a Registry, so the final network can be assembled like building blocks, e.g. BACKBONE_REGISTRY = Registry("BACKBONE") and META_ARCH_REGISTRY = Registry("META_ARCH"). See detectron2/modeling/backbone/resnet.py for a concrete implementation:

@BACKBONE_REGISTRY.register()
def build_resnet_backbone(cfg, input_shape):
    """
    Create a ResNet instance from config.

    Returns:
        ResNet: a :class:`ResNet` instance.
    """
    # need registration of new blocks/stems?
    norm = cfg.MODEL.RESNETS.NORM
    stem = BasicStem(
        in_channels=input_shape.channels,
        out_channels=cfg.MODEL.RESNETS.STEM_OUT_CHANNELS,
        norm=norm,
    )

    # fmt: off
    freeze_at           = cfg.MODEL.BACKBONE.FREEZE_AT
    out_features        = cfg.MODEL.RESNETS.OUT_FEATURES
    depth               = cfg.MODEL.RESNETS.DEPTH
    num_groups          = cfg.MODEL.RESNETS.NUM_GROUPS
    width_per_group     = cfg.MODEL.RESNETS.WIDTH_PER_GROUP
    bottleneck_channels = num_groups * width_per_group
    in_channels         = cfg.MODEL.RESNETS.STEM_OUT_CHANNELS
    out_channels        = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS
    stride_in_1x1       = cfg.MODEL.RESNETS.STRIDE_IN_1X1
    res5_dilation       = cfg.MODEL.RESNETS.RES5_DILATION
    deform_on_per_stage = cfg.MODEL.RESNETS.DEFORM_ON_PER_STAGE
    deform_modulated    = cfg.MODEL.RESNETS.DEFORM_MODULATED
    deform_num_groups   = cfg.MODEL.RESNETS.DEFORM_NUM_GROUPS
    # fmt: on
    assert res5_dilation in {1, 2}, "res5_dilation cannot be {}.".format(res5_dilation)

    # ... (remaining stage construction omitted)
    return ResNet(stem, stages, out_features=out_features, freeze_at=freeze_at)

As shown, the @BACKBONE_REGISTRY.register() decorator registers the backbone-building function. If your backbone introduces new hyperparameters, add them to the config described in the previous section; they are simply the parameters your custom backbone needs at construction time.
META_ARCH_REGISTRY then ties BACKBONE_REGISTRY, PROPOSAL_GENERATOR_REGISTRY, and the other registries together into the final end-to-end model architecture.
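As a minimal sketch of the same registration pattern, a custom backbone only needs the decorator plus the Backbone interface; ToyBackbone below is an invented name for illustration.

import torch.nn as nn
from detectron2.modeling import BACKBONE_REGISTRY, Backbone, ShapeSpec

@BACKBONE_REGISTRY.register()
class ToyBackbone(Backbone):  # hypothetical example, not a real detectron2 backbone
    def __init__(self, cfg, input_shape):
        super().__init__()
        # a single conv standing in for a real feature extractor
        self.conv1 = nn.Conv2d(input_shape.channels, 64, kernel_size=7, stride=16, padding=3)

    def forward(self, image):
        return {"conv1": self.conv1(image)}

    def output_shape(self):
        return {"conv1": ShapeSpec(channels=64, stride=16)}

After registration, setting cfg.MODEL.BACKBONE.NAME = "ToyBackbone" makes the model builder pick it up.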

data

detectron2 supports the COCO dataset best; a VOC dataset can be converted to COCO format with a script before training. The dataset registration code is shown below. The two key objects are DatasetCatalog and MetadataCatalog: the former registers the dataset itself, while the latter records per-dataset metadata, such as which concrete class each category index maps to.

CLASS_NAMES = ["__background__", 'pedestrian',  'rider', 'car', 'bus',  'train', 'truck',
               'traffic_light', 'traffic_cone', 'stop_sign', 'void_dynamic']
DATASET_ROOT = './data/custom_data'
ANN_ROOT = os.path.join(DATASET_ROOT, 'annotations')
TRAIN_PATH = os.path.join(DATASET_ROOT, 'images')
VAL_PATH = os.path.join(DATASET_ROOT, 'images')
TRAIN_JSON = os.path.join(ANN_ROOT, 'train.json')
VAL_JSON = os.path.join(ANN_ROOT, 'val.json')
PREDEFINED_SPLITS_DATASET = {
    "custom_train": (TRAIN_PATH, TRAIN_JSON),
    "custom_val": (VAL_PATH, VAL_JSON),
}


def plain_register_dataset():
    # training set
    DatasetCatalog.register(
        "custom_train", lambda: load_coco_json(TRAIN_JSON, TRAIN_PATH))
    MetadataCatalog.get("custom_train").set(thing_classes=CLASS_NAMES,
                                            evaluator_type='coco',
                                            json_file=TRAIN_JSON,
                                            image_root=TRAIN_PATH)
    DatasetCatalog.register(
        "custom_val", lambda: load_coco_json(VAL_JSON, VAL_PATH))
    MetadataCatalog.get("custom_val").set(thing_classes=CLASS_NAMES,
                                          evaluator_type='coco',
                                          json_file=VAL_JSON,
                                          image_root=VAL_PATH)

plain_register_dataset()


detectron2's handling of data augmentation is not very convenient: the stock yaml configs expose almost no augmentation options, so augmentation usually has to be wired in through a custom DatasetMapper, as sketched below.
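A common workaround, sketched here under the assumption of a recent detectron2 version (which lets you override the mapper's augmentation list), is to build the train loader with explicit augmentations; the choices below are illustrative, not defaults.

import detectron2.data.transforms as T
from detectron2.data import DatasetMapper, build_detection_train_loader

def build_train_loader_with_aug(cfg):
    # pass an explicit augmentation list instead of relying on the yaml config
    mapper = DatasetMapper(cfg, is_train=True, augmentations=[
        T.ResizeShortestEdge(short_edge_length=(640, 800), max_size=1333),
        T.RandomFlip(),
    ])
    return build_detection_train_loader(cfg, mapper=mapper)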

trainer

The trainer inheritance chain is TrainerBase -> SimpleTrainer -> DefaultTrainer, which wires up the three essentials of deep-learning training: the loss, the model, and the optimizer.
It also defines hooks such as before_train, after_train, before_step, after_step, and run_step.
To define your own trainer for debugging, override these methods. Incidentally, this style has become mainstream: mmdetection, FastReID, and most recent open-source code use the same pattern. A custom trainer is shown below; note that it overrides test_with_TTA, build_train_loader, and other methods.

class Trainer(DefaultTrainer):
    """
    This is the same Trainer except that we rewrite the
    `build_train_loader`/`resume_or_load` method.
    """

    def resume_or_load(self, resume=True):
        if not isinstance(self.checkpointer, AdetCheckpointer):
            # support loading a few other backbones
            self.checkpointer = AdetCheckpointer(
                self.model,
                self.cfg.OUTPUT_DIR,
                optimizer=self.optimizer,
                scheduler=self.scheduler,
            )
        super().resume_or_load(resume=resume)

    def train_loop(self, start_iter: int, max_iter: int):
        """
        Args:
            start_iter, max_iter (int): See docs above
        """
        logger = logging.getLogger("adet.trainer")
        logger.info("Starting training from iteration {}".format(start_iter))

        self.iter = self.start_iter = start_iter
        self.max_iter = max_iter

        with EventStorage(start_iter) as self.storage:
            self.before_train()
            for self.iter in range(start_iter, max_iter):
                self.before_step()
                self.run_step()
                self.after_step()
            self.after_train()

    def train(self):
        """
        Run training.

        Returns:
            OrderedDict of results, if evaluation is enabled. Otherwise None.
        """
        self.train_loop(self.start_iter, self.max_iter)
        if hasattr(self, "_last_eval_results") and comm.is_main_process():
            verify_results(self.cfg, self._last_eval_results)
            return self._last_eval_results

    @classmethod
    def build_train_loader(cls, cfg):
        """
        Returns:
            iterable

        It calls :func:`detectron2.data.build_detection_train_loader` with a customized
        DatasetMapper, which adds categorical labels as a semantic mask.
        """
        mapper = DatasetMapperWithBasis(cfg, True)
        return build_detection_train_loader(cfg, mapper=mapper)

    @classmethod
    def build_evaluator(cls, cfg, dataset_name, output_folder=None):
        """
        Create evaluator(s) for a given dataset.
        This uses the special metadata "evaluator_type" associated with each builtin dataset.
        For your own dataset, you can simply create an evaluator manually in your
        script and do not have to worry about the hacky if-else logic here.
        """
        if output_folder is None:
            output_folder = os.path.join(cfg.OUTPUT_DIR, "inference")
        evaluator_list = []
        evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type
        if evaluator_type in ["sem_seg", "coco_panoptic_seg"]:
            evaluator_list.append(
                SemSegEvaluator(
                    dataset_name,
                    distributed=True,
                    num_classes=cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES,
                    ignore_label=cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE,
                    output_dir=output_folder,
                )
            )
        if evaluator_type in ["coco", "coco_panoptic_seg"]:
            evaluator_list.append(COCOEvaluator(
                dataset_name, cfg, True, output_folder))
        if evaluator_type == "coco_panoptic_seg":
            evaluator_list.append(COCOPanopticEvaluator(
                dataset_name, output_folder))
        if evaluator_type == "pascal_voc":
            return PascalVOCDetectionEvaluator(dataset_name)
        if evaluator_type == "lvis":
            return LVISEvaluator(dataset_name, cfg, True, output_folder)
        if len(evaluator_list) == 0:
            raise NotImplementedError(
                "no Evaluator for the dataset {} with the type {}".format(
                    dataset_name, evaluator_type
                )
            )
        if len(evaluator_list) == 1:
            return evaluator_list[0]
        return DatasetEvaluators(evaluator_list)

    @classmethod
    def test_with_TTA(cls, cfg, model):
        logger = logging.getLogger("adet.trainer")
        # In the end of training, run an evaluation with TTA
        # Only support some R-CNN models.
        logger.info("Running inference with test-time augmentation ...")
        model = GeneralizedRCNNWithTTA(cfg, model)
        evaluators = [
            cls.build_evaluator(
                cfg, name, output_folder=os.path.join(
                    cfg.OUTPUT_DIR, "inference_TTA")
            )
            for name in cfg.DATASETS.TEST
        ]
        res = cls.test(cfg, model, evaluators)
        res = OrderedDict({k + "_TTA": v for k, v in res.items()})
        return res
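Usage then follows the DefaultTrainer convention; a minimal sketch (the config path is illustrative):

from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file("configs/my_config.yaml")  # illustrative path
trainer = Trainer(cfg)
trainer.resume_or_load(resume=False)  # resume=True picks up the last checkpoint instead
trainer.train()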

mmdetection

mmdetection is a PyTorch-based deep-learning object-detection toolbox open-sourced by SenseTime (winner of the 2018 COCO object-detection challenge) and the Chinese University of Hong Kong. It is powerful, computationally efficient, configuration-driven, and fairly easy to train and test. The maintainers also keep an mmdetection-to-tensorrt repository for deployment, which is helpful for companies implementing their own TensorRT plugins.

Overall structure

The top-level directory layout of mmdetection is as follows.
configs: a collection of example config files for detection, segmentation, and other models, such as Faster R-CNN and Cascade R-CNN.
demo: a quick way to try MMDetection, matching the Getting Started docs. Use this if you want to reproduce results from the Model Zoo.
mmdet: the main project code lives here.
docker: Docker files; nothing special to note.
docs: official documentation.
tests: unit tests.
tools: commonly used scripts for training, benchmarking, visualizing datasets, and so on.
The overall layout closely mirrors detectron2's.

Configs

Unlike detectron2, which configures hyperparameters through fvcore.common.config, mmdetection builds models from config files under configs/_base_/ that describe the model, dataset, and schedules. These configs are quite rich, for example:

# model settings
model = dict(
    type='CascadeRCNN',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch',
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        num_outs=5),
    rpn_head=dict(
        type='RPNHead',
        in_channels=256,
        feat_channels=256,
        anchor_generator=dict(
            type='AnchorGenerator',
            scales=[8],
            ratios=[0.5, 1.0, 2.0],
            strides=[4, 8, 16, 32, 64]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[.0, .0, .0, .0],
            target_stds=[1.0, 1.0, 1.0, 1.0]),
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
        loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
    roi_head=dict(
        type='CascadeRoIHead',
        num_stages=3,
        stage_loss_weights=[1, 0.5, 0.25],
        bbox_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
            out_channels=256,
            featmap_strides=[4, 8, 16, 32]),
        bbox_head=[
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=80,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0., 0., 0., 0.],
                    target_stds=[0.1, 0.1, 0.2, 0.2]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
                               loss_weight=1.0)),
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=80,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0., 0., 0., 0.],
                    target_stds=[0.05, 0.05, 0.1, 0.1]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
                               loss_weight=1.0)),
            dict(
                type='Shared2FCBBoxHead',
                in_channels=256,
                fc_out_channels=1024,
                roi_feat_size=7,
                num_classes=80,
                bbox_coder=dict(
                    type='DeltaXYWHBBoxCoder',
                    target_means=[0., 0., 0., 0.],
                    target_stds=[0.033, 0.033, 0.067, 0.067]),
                reg_class_agnostic=True,
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))
        ],
        mask_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
            out_channels=256,
            featmap_strides=[4, 8, 16, 32]),
        mask_head=dict(
            type='FCNMaskHead',
            num_convs=4,
            in_channels=256,
            conv_out_channels=256,
            num_classes=80,
            loss_mask=dict(
                type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))),
    # model training and testing settings
    train_cfg=dict(
        rpn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.7,
                neg_iou_thr=0.3,
                min_pos_iou=0.3,
                match_low_quality=True,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=256,
                pos_fraction=0.5,
                neg_pos_ub=-1,
                add_gt_as_proposals=False),
            allowed_border=0,
            pos_weight=-1,
            debug=False),
        rpn_proposal=dict(
            nms_pre=2000,
            max_per_img=2000,
            nms=dict(type='nms', iou_threshold=0.7),
            min_bbox_size=0),
        rcnn=[
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.5,
                    neg_iou_thr=0.5,
                    min_pos_iou=0.5,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True),
                mask_size=28,
                pos_weight=-1,
                debug=False),
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.6,
                    neg_iou_thr=0.6,
                    min_pos_iou=0.6,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True),
                mask_size=28,
                pos_weight=-1,
                debug=False),
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.7,
                    neg_iou_thr=0.7,
                    min_pos_iou=0.7,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True),
                mask_size=28,
                pos_weight=-1,
                debug=False)
        ]),
    test_cfg=dict(
        rpn=dict(
            nms_pre=1000,
            max_per_img=1000,
            nms=dict(type='nms', iou_threshold=0.7),
            min_bbox_size=0),
        rcnn=dict(
            score_thr=0.05,
            nms=dict(type='nms', iou_threshold=0.5),
            max_per_img=100,
            mask_thr_binary=0.5)))

# dataset settings
dataset_type = 'CocoDataset'
data_root = '/data/zhangyong/dataset/fridge2/dst/'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', img_scale=(640, 640), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(640, 640),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_train2017.json',
        img_prefix=data_root + 'JPEGImages/',
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'JPEGImages/',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'JPEGImages/',
        pipeline=test_pipeline))
evaluation = dict(interval=1, metric='bbox')
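In practice a concrete config rarely spells all of this out. It inherits from the _base_ files and overrides only what differs; a sketch (the base file names follow the repository layout, the override is illustrative):

_base_ = [
    '../_base_/models/cascade_rcnn_r50_fpn.py',
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py',
]
# override a single field, e.g. swap in a ResNet-101 backbone
model = dict(
    backbone=dict(
        depth=101,
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet101')))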

Registry mechanism

mmdetection likewise assembles its modules through a Registry, e.g. @BACKBONES.register_module() and @DETECTORS.register_module(). See the implementations under mmdet/models/.

@BACKBONES.register_module()
class ResNeSt(ResNetV1d):
    """ResNeSt backbone.

    Args:
        groups (int): Number of groups of Bottleneck. Default: 1
        base_width (int): Base width of Bottleneck. Default: 4
        radix (int): Radix of SplitAttentionConv2d. Default: 2
        reduction_factor (int): Reduction factor of inter_channels in
            SplitAttentionConv2d. Default: 4.
        avg_down_stride (bool): Whether to use average pool for stride in
            Bottleneck. Default: True.
        kwargs (dict): Keyword arguments for ResNet.
    """

    arch_settings = {
        50: (Bottleneck, (3, 4, 6, 3)),
        101: (Bottleneck, (3, 4, 23, 3)),
        152: (Bottleneck, (3, 8, 36, 3)),
        200: (Bottleneck, (3, 24, 36, 3))
    }

    def __init__(self,
                 groups=1,
                 base_width=4,
                 radix=2,
                 reduction_factor=4,
                 avg_down_stride=True,
                 **kwargs):
        self.groups = groups
        self.base_width = base_width
        self.radix = radix
        self.reduction_factor = reduction_factor
        self.avg_down_stride = avg_down_stride
        super(ResNeSt, self).__init__(**kwargs)

    def make_res_layer(self, **kwargs):
        """Pack all blocks in a stage into a ``ResLayer``."""
        return ResLayer(
            groups=self.groups,
            base_width=self.base_width,
            base_channels=self.base_channels,
            radix=self.radix,
            reduction_factor=self.reduction_factor,
            avg_down_stride=self.avg_down_stride,
            **kwargs)

All of these hyperparameters map to corresponding entries in the .py config files under configs/.
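A hedged sketch of how those config dicts become modules (mmdet 2.x-style API; exact signatures vary between versions): each dict's type key selects a class registered in the corresponding registry, and the remaining keys become constructor arguments.

from mmcv import Config
from mmdet.models import build_detector

cfg = Config.fromfile('configs/cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco.py')
model = build_detector(cfg.model)  # resolves every 'type' through the registries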

data

mmdetection supports both COCO and VOC datasets well, and the low-level code shows that much of it inherits from torch's data-handling machinery: CocoDataset inherits from CustomDataset, and CustomDataset has the familiar __getitem__ function. You can therefore define your own dataset in the following steps.
1. Create a new .py file under ./mmdet/datasets/ to define your own Dataset, e.g. myDataset.py:

from .coco import CocoDataset
from .registry import DATASETS


@DATASETS.register_module
class MyDataset(CocoDataset):  # inherit CocoDataset's init and loading logic; only the class list needs defining
    CLASSES = ("pos",)  # this dataset contains a single class, named "pos"

2. Add your dataset to ./mmdet/datasets/__init__.py:

from .builder import build_dataset
from .cityscapes import CityscapesDataset
from .coco import CocoDataset
from .custom import CustomDataset
from .dataset_wrappers import ConcatDataset, RepeatDataset
from .loader import DistributedGroupSampler, GroupSampler, build_dataloader
from .registry import DATASETS
from .voc import VOCDataset
from .wider_face import WIDERFaceDataset
from .xml_style import XMLDataset
from .myDataset import MyDataset  # added

__all__ = [
    'CustomDataset', 'XMLDataset', 'CocoDataset', 'VOCDataset',
    'CityscapesDataset', 'GroupSampler', 'DistributedGroupSampler',
    'build_dataloader', 'ConcatDataset', 'RepeatDataset', 'WIDERFaceDataset',
    'DATASETS', 'build_dataset', 'MyDataset'  # added
]

3. Update the dataset-related entries in your config file:

# dataset settings
dataset_type = 'MyDataset'  # changed: must match the registered class name

mmdetection supports the albumentations library, as well as mosaic and mixup operations; a sketch of wiring albumentations into a pipeline follows.
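The sketch below uses the Albu wrapper transform, with field names following mmdetection's example configs (verify against your version; the brightness/contrast transform is an illustrative choice):

albu_train_transforms = [
    dict(type='RandomBrightnessContrast', brightness_limit=0.2, contrast_limit=0.2, p=0.5),
]
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(
        type='Albu',
        transforms=albu_train_transforms,
        bbox_params=dict(
            type='BboxParams',
            format='pascal_voc',
            label_fields=['gt_labels'],
            min_visibility=0.0,
            filter_lost_elements=True),
        keymap=dict(img='image', gt_bboxes='bboxes'),
        skip_img_without_anno=True),
    # ... Normalize / Pad / Collect as in the earlier pipeline
]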

trainer

Most of mmdetection's training code comes from the mmcv library. mmcv is a foundational library in two parts: framework-agnostic utility functions (IO/image/video operations and the like), and a set of training utilities written for PyTorch that greatly cut down the code users have to write while keeping the whole pipeline easy to customize.
The training process is composed of three main pieces: the runner, hooks, and the batch_processor.

The runner class

The base class BaseRunner sets up the usual members in its __init__: model, optimizer, _hooks, and so on, and implements register_hook_from_cfg and call_hook.
Its subclasses EpochBasedRunner and IterBasedRunner override the train and val methods; see mmcv/runner for details.
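A self-contained sketch of the runner pattern (mmcv 1.x API). ToyModel and the synthetic data are invented for illustration; the key point is that mmcv runners drive training through a train_step method rather than calling forward directly.

import logging
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from mmcv.runner import EpochBasedRunner

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def train_step(self, data_batch, optimizer, **kwargs):
        # the runner stores this dict as runner.outputs; OptimizerHook
        # then calls outputs['loss'].backward() and optimizer.step()
        x, y = data_batch
        loss = nn.functional.cross_entropy(self.fc(x), y)
        return dict(loss=loss, log_vars=dict(loss=loss.item()), num_samples=len(x))

model = ToyModel()
loader = DataLoader(TensorDataset(torch.randn(16, 4), torch.randint(0, 2, (16,))), batch_size=4)
runner = EpochBasedRunner(
    model,
    optimizer=torch.optim.SGD(model.parameters(), lr=0.01),
    work_dir='./work_dir',
    logger=logging.getLogger(__name__),
    max_epochs=2)
runner.register_training_hooks(
    lr_config=dict(policy='step', step=[1]),
    optimizer_config=dict(grad_clip=None),
    checkpoint_config=dict(interval=1),
    log_config=dict(interval=1, hooks=[dict(type='TextLoggerHook')]))
runner.run([loader], workflow=[('train', 1)])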

hook

The hook code also lives in mmcv; it includes the following subclasses.


__all__ = [
    'HOOKS', 'Hook', 'CheckpointHook', 'ClosureHook', 'LrUpdaterHook',
    'OptimizerHook', 'IterTimerHook', 'DistSamplerSeedHook', 'EmptyCacheHook',
    'LoggerHook', 'MlflowLoggerHook', 'PaviLoggerHook', 'TextLoggerHook',
    'TensorboardLoggerHook', 'WandbLoggerHook', 'MomentumUpdaterHook'
]

The base Hook class is defined below. Like detectron2, it defines the operations to run before and after each epoch and iteration.

HOOKS = Registry('hook')
 
class Hook(object):
 
    def before_run(self, runner):
        pass
 
    def after_run(self, runner):
        pass
 
    def before_epoch(self, runner):
        pass
 
    def after_epoch(self, runner):
        pass
 
    def before_iter(self, runner):
        pass
 
    def after_iter(self, runner):
        pass
 
    def before_train_epoch(self, runner):
        self.before_epoch(runner)
 
    def before_val_epoch(self, runner):
        self.before_epoch(runner)
 
    def after_train_epoch(self, runner):
        self.after_epoch(runner)
 
    def after_val_epoch(self, runner):
        self.after_epoch(runner)
 
    def before_train_iter(self, runner):
        self.before_iter(runner)
 
    def before_val_iter(self, runner):
        self.before_iter(runner)
 
    def after_train_iter(self, runner):
        self.after_iter(runner)
 
    def after_val_iter(self, runner):
        self.after_iter(runner)
 
    def every_n_epochs(self, runner, n):
        return (runner.epoch + 1) % n == 0 if n > 0 else False
 
    def every_n_inner_iters(self, runner, n):
        return (runner.inner_iter + 1) % n == 0 if n > 0 else False
 
    def every_n_iters(self, runner, n):
        return (runner.iter + 1) % n == 0 if n > 0 else False
 
    def end_of_epoch(self, runner):
        return runner.inner_iter + 1 == len(runner.data_loader)

The runner's call_hook function then decides which stage's hook methods to invoke.
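call_hook itself is essentially a one-line dispatch over the registered hooks (paraphrased from mmcv's BaseRunner):

def call_hook(self, fn_name):
    # fn_name is a stage name such as 'before_train_iter';
    # every registered hook's method of that name gets called in order
    for hook in self._hooks:
        getattr(hook, fn_name)(self)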

mmdet2trt

The dependencies are torch2trt_dynamic and amirstan_plugin:

git clone git@git.zhlh6.cn:grimoire/torch2trt_dynamic.git
cd torch2trt_dynamic
python setup.py develop
git clone --depth=1 git@git.zhlh6.cn:grimoire/amirstan_plugin.git
cd amirstan_plugin
git submodule update --init --progress --depth=1
mkdir build
cd build
cmake -DTENSORRT_DIR=${your_path_to_tensorrt} ..
make -j10

Install mmdetection-to-tensorrt:

export AMIRSTAN_LIBRARY_PATH=<amirstan_plugin_root>/build/lib
python setup.py develop

engine and inference

mmdet2trt/mmdet2trt.py generates the engine file.
tools/test.py can be used to check that the results are aligned with the PyTorch model.
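A hedged sketch of the conversion call, following the project README (argument names may differ across versions, and the paths are illustrative):

import torch
from mmdet2trt import mmdet2trt

cfg_path = 'configs/cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco.py'  # illustrative
weight_path = 'checkpoints/cascade_rcnn_r50_fpn_1x_coco.pth'       # illustrative
trt_model = mmdet2trt(cfg_path, weight_path, fp16_mode=True)
torch.save(trt_model.state_dict(), 'model_trt.pth')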

Summary

Because mmdetection delegates so much to the mmcv library, detectron2's code structure is considerably easier to follow.
On the other hand, mmdetection has the advantage when exporting to ONNX and TensorRT.
mmdetection also moves faster: after YOLOX was open-sourced, mmdetection reproduced it within a short time. Its data preprocessing is also better than detectron2's, with mosaic and mixup already integrated in the latest versions.
My personal take: focus on learning mmdetection, and know detectron2 well enough to use it. Follow-up study will center on mmdetection's advanced usage.
