Group DETR: Fast Training Convergence with Decoupled One-to-Many Label Assignment
Detection Transformer (DETR) relies on one-to-one label assignment, i.e., assigning one ground-truth (gt) object to only one positive object query, for end-to-end object detection, and thus lacks the capability of exploiting multiple positive queries. We present a novel DETR training approach, named Group DETR, to support multiple positive queries. Specifically, we decouple the positives into multiple independent groups and keep only one positive per gt object in each group. We make simple modifications during training: (i) adopt K groups of object queries; (ii) conduct decoder self-attention on each group of object queries separately, with the same parameters shared across groups; (iii) perform one-to-one label assignment for each group, leading to K positive object queries for each gt object. At inference, we use only one group of object queries, requiring no modifications to either the architecture or the inference process. We validate the effectiveness of the proposed approach on DETR variants, including Conditional DETR, DAB-DETR, DN-DETR, and DINO.
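To make step (ii) concrete, the sketch below shows one way the group-wise decoder self-attention could be implemented in PyTorch during training; at inference it reduces to plain self-attention over a single group. The module and variable names (`GroupedSelfAttention`, `num_groups`) are ours for illustration, not from the paper's codebase, and an equivalent implementation could use a block-diagonal attention mask instead of reshaping.

```python
import torch
import torch.nn as nn

class GroupedSelfAttention(nn.Module):
    """Decoder self-attention applied independently within each query group.

    During training, the K * N object queries are split into K groups and
    self-attention runs within each group, with the same attention
    parameters shared across groups. At inference only one group is fed
    in, so the module behaves as standard self-attention.
    """

    def __init__(self, embed_dim: int, num_heads: int, num_groups: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.num_groups = num_groups

    def forward(self, queries: torch.Tensor) -> torch.Tensor:
        # queries: (batch, num_groups * queries_per_group, embed_dim) in training,
        #          (batch, queries_per_group, embed_dim) in inference.
        if self.training and self.num_groups > 1:
            b, total, d = queries.shape
            n = total // self.num_groups
            # Fold each group into the batch dimension so attention
            # never crosses group boundaries: (batch * K, n, d).
            q = queries.reshape(b * self.num_groups, n, d)
            out, _ = self.attn(q, q, q)
            return out.reshape(b, total, d)
        # Inference path: a single group, plain self-attention.
        out, _ = self.attn(queries, queries, queries)
        return out
```

Because attention never crosses group boundaries, one-to-one (Hungarian) matching can then be run independently on each group's predictions, yielding K positive queries per gt object during training at no extra inference cost.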