Motion2Vec: Semi-Supervised Representation Learning from Surgical Videos

by Ajay Kumar Tanwani, et al.

Learning meaningful visual representations in an embedding space can facilitate generalization in downstream tasks such as action segmentation and imitation. In this paper, we learn a motion-centric representation of surgical video demonstrations by grouping them into action segments/sub-goals/options in a semi-supervised manner. We present Motion2Vec, an algorithm that learns a deep embedding feature space from video observations by minimizing a metric learning loss in a Siamese network: images from the same action segment are pulled together and pushed away from randomly sampled images of other segments, while respecting the temporal ordering of the images. After pre-training the Siamese network, the embeddings are iteratively segmented with a recurrent neural network for a given parametrization of the embedding space. We use only a small set of labeled video segments to semantically align the embedding space, and assign pseudo-labels to the remaining unlabeled data by inference on the learned model parameters. We demonstrate the use of this representation to imitate surgical suturing motions from publicly available videos of the JIGSAWS dataset. Results give 85.5% segmentation accuracy on average, suggesting an improvement over several state-of-the-art baselines, while kinematic pose imitation gives 0.94 centimeter error in position per observation on the test set. Videos, code and data are available at
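The metric learning objective described above can be sketched as a triplet-style margin loss: an anchor image is pulled toward a positive image from the same action segment and pushed away from a negative image sampled from a different segment. This is a minimal illustrative sketch in NumPy, not the paper's implementation; the function name, margin value, and squared-Euclidean distance are assumptions, and the full method additionally enforces temporal ordering and trains a Siamese network end to end.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    """Metric-learning loss on embedding vectors: pull the anchor toward a
    positive (same action segment) and push it away from a negative
    (randomly sampled from another segment), up to a fixed margin.

    Illustrative sketch only; distance metric and margin are assumptions.
    """
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to negative
    # Hinge: loss is zero once the negative is farther than the positive
    # by at least the margin.
    return max(0.0, d_pos - d_neg + margin)
```

In practice such a loss is averaged over mini-batches of triplets and backpropagated through the shared Siamese encoder, so that embeddings of the same surgical action segment cluster together.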

