AutoDepthNet: High Frame Rate Depth Map Reconstruction using Commodity Depth and RGB Cameras

05/24/2023
by Peyman Gholami et al.

Depth cameras have found applications in diverse fields, such as computer vision, artificial intelligence, and video gaming. However, the high latency and low frame rate of existing commodity depth cameras limit their applications. We propose a fast and accurate depth map reconstruction technique to reduce latency and increase the frame rate of depth cameras. Our approach uses only a commodity depth camera and a color camera in a hybrid camera setup; our prototype is implemented using a Kinect Azure depth camera at 30 fps and a high-speed RGB iPhone 11 Pro camera captured at 240 fps. The proposed network, AutoDepthNet, is an encoder-decoder model that captures frames from the high-speed RGB camera and combines them with previous depth frames to reconstruct a stream of high frame rate depth maps. On GPU, with a 480 x 270 output resolution, our system achieves an inference time of 8 ms, enabling real-time use at up to 200 fps with parallel processing. AutoDepthNet can estimate depth values with an average RMS error of 0.076, a 44.5% improvement over an optical flow-based comparison method. Our method can also improve depth map quality by estimating depth values for missing and invalidated pixels. The proposed method can be easily applied to existing depth cameras and facilitates the use of depth cameras in applications that require high-speed depth estimation. We also showcase the effectiveness of the framework in upsampling different sparse datasets, e.g., video object segmentation. As a demonstration of our method, we integrated our framework into existing body tracking systems and showed its robustness in such applications.
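The core of the hybrid setup is temporal alignment: each 240 fps RGB frame must be paired with the most recent 30 fps depth frame before both are fed to the network. The sketch below illustrates that matching step only; the function name, timestamps, and pairing logic are illustrative assumptions, not code from the paper.

```python
from bisect import bisect_right

def pair_frames(rgb_times, depth_times):
    """Match each high-speed RGB timestamp (ms) to the index of the most
    recent depth frame captured at or before it; None if no depth frame
    precedes that RGB frame yet. (Illustrative helper, not the paper's code.)"""
    pairs = []
    for t in rgb_times:
        i = bisect_right(depth_times, t) - 1
        pairs.append((t, i if i >= 0 else None))
    return pairs

# Illustrative timestamps matching the prototype's rates:
# depth at 30 fps (~33.3 ms apart), RGB at 240 fps (~4.17 ms apart).
depth_times = [i * (1000 / 30) for i in range(4)]
rgb_times = [i * (1000 / 240) for i in range(20)]
pairs = pair_frames(rgb_times, depth_times)
```

Each (RGB frame, previous depth frame) pair then forms one input to the encoder-decoder, which predicts the depth map at the RGB frame's timestamp.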


