Parallel Vertex Diffusion for Unified Visual Grounding

03/13/2023
by Zesen Cheng, et al.

Unified visual grounding pursues a simple, generic technical route that leverages multi-task data with little task-specific design. The most advanced methods typically represent boxes and masks as vertex sequences, modeling referring detection and segmentation as autoregressive sequential vertex generation. However, generating high-dimensional vertex sequences sequentially is error-prone: the upstream of the sequence remains static and cannot be refined based on downstream vertex information, even when there is a significant location gap. Moreover, with a limited number of vertices, the poor fit to objects with complex contours restricts the performance upper bound. To resolve this dilemma, we propose a parallel vertex generation paradigm that achieves superior high-dimensional scalability with a diffusion model, simply by modifying the noise dimension. An intuitive materialization of our paradigm is Parallel Vertex Diffusion (PVD), which directly sets vertex coordinates as the generation target and uses a diffusion model for training and inference. However, this naive design has two flaws: (1) unnormalized coordinates cause high variance in the loss value; (2) the original training objective of PVD considers only point-level consistency and ignores geometry-level consistency. To address the first flaw, a Center Anchor Mechanism (CAM) is designed to convert coordinates into normalized offset values, stabilizing the training loss. For the second flaw, an Angle Summation Loss (ASL) is designed to constrain the geometric difference between predicted and ground-truth vertices for geometry-level consistency. Empirical results show that PVD achieves state-of-the-art performance in both referring detection and segmentation, and that our paradigm is more scalable and efficient than sequential vertex generation on high-dimensional data.
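The abstract names two concrete mechanisms: CAM, which re-expresses absolute vertex coordinates as normalized offsets from a center anchor so the diffusion regression targets stay bounded, and ASL, which scores geometric consistency between predicted vertices and the ground-truth contour. The sketch below illustrates one plausible reading of both in plain NumPy, using the classic angle-summation (winding) test for ASL. The function names (cam_encode, angle_summation_loss) and the exact penalty form are illustrative assumptions based on the abstract, not taken from the paper's code.

```python
import numpy as np

# Illustrative sketch only: the names and the exact loss form are
# assumptions inferred from the abstract, not the paper's released code.

def cam_encode(vertices, center, scale):
    """Center Anchor Mechanism (CAM), sketched: express absolute vertex
    coordinates as offsets from a center anchor, divided by a scale
    (e.g. image width/height). Bounded, normalized targets reduce the
    variance of the regression loss during diffusion training."""
    return (vertices - center) / scale

def cam_decode(offsets, center, scale):
    """Invert the CAM encoding back to absolute image coordinates."""
    return offsets * scale + center

def angle_summation(point, polygon):
    """Signed sum of the angles subtended at `point` by each polygon
    edge. The sum is ~2*pi when the point lies inside a counter-
    clockwise contour and ~0 when it lies outside (the classic
    angle-summation test)."""
    d = polygon - point               # vectors from point to vertices
    d_next = np.roll(d, -1, axis=0)   # vectors to the next vertices
    cross = d[:, 0] * d_next[:, 1] - d[:, 1] * d_next[:, 0]
    dot = (d * d_next).sum(axis=1)
    return np.arctan2(cross, dot).sum()

def angle_summation_loss(pred_vertices, gt_polygon):
    """Geometry-level penalty (assumed form): predicted vertices whose
    angle sum w.r.t. the counter-clockwise ground-truth contour drifts
    away from 2*pi, i.e. vertices falling off the object, are
    penalized."""
    gaps = [abs(angle_summation(p, gt_polygon) - 2.0 * np.pi)
            for p in pred_vertices]
    return float(np.mean(gaps))

if __name__ == "__main__":
    square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    print(angle_summation(np.array([0.5, 0.5]), square))  # ~= 2*pi
    print(angle_summation(np.array([2.0, 2.0]), square))  # ~= 0
    pred = np.array([[0.4, 0.5], [0.6, 0.5], [0.5, 0.7]])
    print(angle_summation_loss(pred, square))             # ~= 0 (inside)
```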
