DETRs with Collaborative Hybrid Assignments Training

by Zhuofan Zong, et al.

In this paper, we observe that assigning too few queries as positive samples in DETR's one-to-one set matching leads to sparse supervision of the encoder's output, which considerably hurts the discriminative feature learning of the encoder, and vice versa for the attention learning in the decoder. To alleviate this, we present a novel collaborative hybrid assignments training scheme, namely Co-DETR, to learn more efficient and effective DETR-based detectors from versatile label assignment manners. This new training scheme easily enhances the encoder's learning ability in end-to-end detectors by training multiple parallel auxiliary heads supervised by one-to-many label assignments such as ATSS, FCOS, and Faster R-CNN. In addition, we construct extra customized positive queries by extracting the positive coordinates from these auxiliary heads to improve the training efficiency of positive samples in the decoder. During inference, these auxiliary heads are discarded, so our method introduces no additional parameters or computational cost to the original detector while requiring no hand-crafted non-maximum suppression (NMS). We conduct extensive experiments to evaluate the effectiveness of the proposed approach on DETR variants, including DAB-DETR, Deformable-DETR, and H-Deformable-DETR. Specifically, we improve the basic Deformable-DETR by 5.8% AP, and the state-of-the-art H-Deformable-DETR can still be improved from its 57.9% AP on the MS COCO val set. Surprisingly, incorporated with the large-scale backbone MixMIM-g with 1 billion parameters, we achieve 64.5% mAP on COCO test-dev, achieving superior performance with much less extra data. Code will be available.
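The contrast the abstract draws between one-to-one set matching and one-to-many label assignment can be illustrated with a minimal, self-contained sketch. This is not Co-DETR's implementation: the greedy matcher (a stand-in for Hungarian matching), the function names, and the toy cost matrix are all illustrative assumptions. It only shows why one-to-many assignment produces more positive queries, and hence denser supervision, than one-to-one matching.

```python
# Illustrative sketch, NOT Co-DETR's code: contrasts one-to-one set matching
# (each ground-truth box gets exactly one query) with one-to-many assignment
# (each ground-truth box gets its k lowest-cost queries).

def one_to_one_assign(cost):
    """Greedy stand-in for Hungarian matching: one positive query per GT box."""
    used = set()
    matches = []
    for g, row in enumerate(cost):
        q = min((q for q in range(len(row)) if q not in used),
                key=lambda q: row[q])
        used.add(q)
        matches.append((g, q))
    return matches

def one_to_many_assign(cost, k=3):
    """Each GT box is assigned its k lowest-cost queries (denser supervision)."""
    matches = []
    for g, row in enumerate(cost):
        topk = sorted(range(len(row)), key=lambda q: row[q])[:k]
        matches.extend((g, q) for q in topk)
    return matches

# Toy matching-cost matrix: 2 ground-truth boxes x 6 queries (lower = better).
cost = [
    [0.1, 0.9, 0.2, 0.8, 0.3, 0.7],
    [0.6, 0.2, 0.5, 0.1, 0.4, 0.9],
]

o2o = one_to_one_assign(cost)   # 2 positive queries in total
o2m = one_to_many_assign(cost)  # 6 positive queries in total
print(len(o2o), len(o2m))       # prints: 2 6
```

In Co-DETR, the auxiliary heads trained with one-to-many assignments (ATSS, FCOS, Faster R-CNN style) play the role of the denser matcher during training only, so the deployed detector keeps the one-to-one, NMS-free inference path.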




