Learning How To Robustly Estimate Camera Pose in Endoscopic Videos

by Michel Hayoz, et al.

Purpose: Surgical scene understanding plays a critical role in the technology stack of tomorrow's intervention-assisting systems for endoscopic surgery. Tracking the endoscope pose is a key component of this stack, but it remains challenging due to difficult illumination conditions, deforming tissues, and the breathing motion of organs.

Method: We propose a solution for stereo endoscopes that estimates depth and optical flow and minimizes two geometric losses for camera pose estimation. Most importantly, we introduce two learned, adaptive per-pixel weight maps that balance the loss contributions according to the input image content. To this end, we train a Deep Declarative Network that combines the expressiveness of deep learning with the robustness of a novel geometry-based optimization approach. We validate our approach on the publicly available SCARED dataset and introduce a new in-vivo dataset, StereoMIS, which covers a wider spectrum of typically observed surgical settings.

Results: Our method outperforms state-of-the-art methods on average and, more importantly, in difficult scenarios where tissue deformations and breathing motion are visible. We observe that the proposed weight maps attenuate the contribution of pixels in ambiguous image regions, such as deforming tissue.

Conclusion: We demonstrate the effectiveness of our solution for robustly estimating camera pose in challenging endoscopic surgical scenes. Our contributions can improve related tasks such as simultaneous localization and mapping (SLAM) and 3D reconstruction, thereby advancing surgical scene understanding in minimally invasive surgery.
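To give a flavor of the kind of per-pixel weighted geometric optimization described above: the sketch below is not the authors' Deep Declarative Network, but a simplified closed-form analogue. Given 3D points `P` from one frame (e.g. back-projected from stereo depth), their correspondences `Q` in the next frame (e.g. obtained via optical flow), and per-point weights `w` (playing the role of the learned weight maps), it solves the weighted rigid-alignment problem with a weighted Kabsch/SVD step. All names here are illustrative assumptions.

```python
import numpy as np

def weighted_rigid_pose(P, Q, w):
    """Estimate rotation R and translation t minimizing
    sum_i w_i * ||R @ P_i + t - Q_i||^2  (weighted Kabsch).

    P, Q: (N, 3) corresponding 3D points; w: (N,) non-negative weights.
    Low weights attenuate unreliable points, e.g. on deforming tissue.
    """
    w = w / w.sum()                       # normalize weights
    mu_p = (w[:, None] * P).sum(axis=0)   # weighted centroids
    mu_q = (w[:, None] * Q).sum(axis=0)
    X, Y = P - mu_p, Q - mu_q             # centered point sets
    H = (w[:, None] * X).T @ Y            # 3x3 weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # reflection correction keeps R a proper rotation (det = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t
```

In the paper's setting the weights are predicted by a network and the pose solver sits inside a declarative layer, so gradients flow through the optimization; this closed-form version only illustrates how per-point weights reshape the geometric objective.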


