AlignDet: Aligning Pre-training and Fine-tuning in Object Detection

ByteDance Inc; Center for Research in Computer Vision, University of Central Florida
ICCV 2023

There are data, model, and task discrepancies between pre-training and fine-tuning.
Addressing these discrepancies yields significant improvements across various settings on COCO.

Abstract

The paradigm of large-scale pre-training followed by downstream fine-tuning has been widely employed in various object detection algorithms. In this paper, we reveal discrepancies in data, model, and task between the pre-training and fine-tuning procedures in existing practices, which implicitly limit the detector's performance, generalization ability, and convergence speed. To this end, we propose AlignDet, a unified pre-training framework that can be adapted to various existing detectors to alleviate these discrepancies. AlignDet decouples the pre-training process into two stages, i.e., image-domain and box-domain pre-training. The image-domain pre-training optimizes the detection backbone to capture holistic visual abstraction, and the box-domain pre-training learns instance-level semantics and task-aware concepts to initialize the parts other than the backbone. By incorporating self-supervised pre-trained backbones, we can pre-train all modules of various detectors in an unsupervised paradigm. Extensive experiments demonstrate that AlignDet achieves significant improvements across diverse protocols, including detection algorithms, model backbones, data settings, and training schedules. For example, AlignDet improves FCOS by 5.3 mAP, RetinaNet by 2.1 mAP, Faster R-CNN by 3.3 mAP, and DETR by 2.3 mAP with fewer epochs.

Advantages of AlignDet in Terms of Data, Model, and Task


Comparison with other self-supervised pre-training methods in terms of data, models, and tasks.
AlignDet achieves more efficient, adequate, and detection-oriented pre-training.

Pipeline


AlignDet decouples pre-training into an image-domain stage and a box-domain stage.
This decoupled design enables simple, efficient, and adequate self-supervised pre-training.
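
To make the decoupled design concrete, below is a minimal PyTorch sketch of the two-stage idea: stage 1 pre-trains the backbone with image-level self-supervision, and stage 2 freezes that backbone and pre-trains the remaining detector modules with box-level objectives on unsupervised proposals. The module names (SimpleBackbone, SimpleHead), the dummy data, and the exact losses are illustrative placeholders and assumptions, not the official AlignDet implementation (which uses selective-search proposals and contrastive box-domain learning; see the paper and code for details).

# Minimal sketch of the decoupled pre-training idea in PyTorch.
# All names and losses below are illustrative stand-ins, not AlignDet's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleBackbone(nn.Module):
    """Stand-in for an image-domain pre-trained backbone (e.g., a ResNet)."""
    def __init__(self, dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.conv(x)


class SimpleHead(nn.Module):
    """Stand-in for the detector-specific modules outside the backbone."""
    def __init__(self, dim=64):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.Linear(dim, dim)   # instance/box embedding
        self.box_reg = nn.Linear(dim, 4)  # coordinate regression

    def forward(self, feats):
        z = self.pool(feats).flatten(1)
        return self.proj(z), self.box_reg(z)


def box_domain_pretrain(backbone, head, images, boxes, steps=10):
    """Stage 2: freeze the backbone and train only the remaining modules with
    self-supervised box-level objectives (view-consistency of embeddings plus
    regression to unsupervised proposal boxes)."""
    for p in backbone.parameters():
        p.requires_grad_(False)
    opt = torch.optim.AdamW(head.parameters(), lr=1e-4)

    for _ in range(steps):
        # Two "views" of the same images (placeholder augmentation).
        view1 = images + 0.05 * torch.randn_like(images)
        view2 = images
        with torch.no_grad():
            f1, f2 = backbone(view1), backbone(view2)
        z1, b1 = head(f1)
        z2, _ = head(f2)
        # Embedding consistency between views (a stand-in for the contrastive
        # box-domain objective) plus regression to the proposal coordinates.
        loss = F.mse_loss(F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)) \
             + F.l1_loss(b1, boxes)
        opt.zero_grad()
        loss.backward()
        opt.step()


if __name__ == "__main__":
    backbone = SimpleBackbone()   # stage 1 would pre-train this module
    head = SimpleHead()           # stage 2 pre-trains everything else
    images = torch.randn(4, 3, 64, 64)
    boxes = torch.rand(4, 4)      # e.g., unsupervised proposals (selective search)
    box_domain_pretrain(backbone, head, images, boxes)

In this sketch the backbone is frozen during the box-domain stage purely to isolate the two stages; the key point is that every detector module gets a pre-training signal before fine-tuning.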

Improvements Across Different Settings


AlignDet Outperforms Other Pre-training Methods


AlignDet Learns Good Classification and Regression Priors


BibTeX

@InProceedings{AlignDet,
    author    = {Li, Ming and Wu, Jie and Wang, Xionghui and Chen, Chen and Qin, Jie and Xiao, Xuefeng and Wang, Rui and Zheng, Min and Pan, Xin},
    title     = {AlignDet: Aligning Pre-training and Fine-tuning in Object Detection},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    year      = {2023},
}