Towards Generalizable Surgical Activity Recognition Using Spatial Temporal Graph Convolutional Networks

01/11/2020
by Duygu Sarikaya, et al.

Modeling and recognizing surgical activities poses an interesting research problem. Although a number of recent works have studied automatic recognition of surgical activities, the generalizability of these works across different tasks and different datasets remains a challenge. We introduce a modality that is robust to scene variation, based on spatial-temporal graph representations of surgical tool structures, for surgical activity recognition. To show the effectiveness of the proposed modality, we model and recognize surgical gestures. We construct spatial graphs connecting the joint pose estimations of surgical tools. Then, we connect each joint to the corresponding joint in consecutive frames, forming inter-frame edges that represent the trajectory of the joint over time. We then learn hierarchical temporal relationships between these joints over time using Spatial Temporal Graph Convolutional Networks (ST-GCN). Our experimental results show that learned spatial-temporal graph representations of surgical videos perform well in surgical gesture recognition even when used individually. We evaluate our model on the Suturing task of the JIGSAWS dataset, where the chance baseline for gesture recognition is 10%, and demonstrate 68% accuracy. These results indicate that our model learns meaningful hierarchical spatial-temporal graph representations. The learned representations can be used either individually, in cascades, or as a complementary modality in surgical activity recognition, and therefore provide a benchmark. To our knowledge, our paper is the first to use spatial-temporal graph representations based on pose estimations of surgical tools for surgical activity recognition.
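To make the pipeline concrete, here is a minimal PyTorch sketch of the modality the abstract describes; it is not the authors' implementation. The joint layout (four joints per tool across two tools), all layer sizes, and the helper names (normalized_adjacency, STGCNBlock, GestureSTGCN) are illustrative assumptions. Spatial (intra-frame) edges are encoded in a normalized adjacency matrix, while the inter-frame edges linking each joint to itself in consecutive frames are realized by a convolution along the time axis, following the standard ST-GCN formulation.

```python
# A minimal sketch of the spatial-temporal graph modality described above.
# The skeleton, layer sizes, and class names are illustrative assumptions.
import torch
import torch.nn as nn

# Hypothetical skeleton: 4 joints per tool, 2 tools -> 8 joints total.
# Spatial (intra-frame) edges connect joints within each tool.
EDGES = [(0, 1), (1, 2), (2, 3),        # left tool: jaw-jaw-wrist-shaft
         (4, 5), (5, 6), (6, 7)]        # right tool

def normalized_adjacency(num_joints, edges):
    """Symmetric adjacency with self-loops, row-normalized (D^-1 A)."""
    A = torch.eye(num_joints)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return A / A.sum(dim=1, keepdim=True)

class STGCNBlock(nn.Module):
    """One spatial graph convolution followed by a temporal convolution.

    The temporal convolution plays the role of the inter-frame edges:
    it mixes each joint's features across consecutive frames.
    """
    def __init__(self, in_ch, out_ch, A, t_kernel=9):
        super().__init__()
        self.register_buffer("A", A)                    # (V, V)
        self.spatial = nn.Conv2d(in_ch, out_ch, 1)      # per-joint features
        self.temporal = nn.Conv2d(out_ch, out_ch, (t_kernel, 1),
                                  padding=(t_kernel // 2, 0))
        self.relu = nn.ReLU()

    def forward(self, x):                               # x: (N, C, T, V)
        x = self.spatial(x)
        x = torch.einsum("nctv,vw->nctw", x, self.A)    # aggregate neighbors
        return self.relu(self.temporal(x))              # mix along time axis

class GestureSTGCN(nn.Module):
    """Stacked ST-GCN blocks + global pooling for gesture classification."""
    def __init__(self, num_classes=10, num_joints=8):
        super().__init__()
        A = normalized_adjacency(num_joints, EDGES)
        self.blocks = nn.Sequential(
            STGCNBlock(2, 64, A),    # input channels: (x, y) joint positions
            STGCNBlock(64, 128, A),
        )
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):                               # x: (N, 2, T, V)
        x = self.blocks(x)
        x = x.mean(dim=[2, 3])                          # pool over time, joints
        return self.head(x)

# Example: a batch of 4 clips, 100 frames, 8 tool joints with (x, y) poses.
clips = torch.randn(4, 2, 100, 8)
logits = GestureSTGCN()(clips)                          # (4, 10) gesture scores
```

In practice the (x, y) inputs would come from a surgical tool pose estimator rather than random tensors, and the number of classes matches the ten Suturing gestures of JIGSAWS, which is what makes the 10% chance baseline quoted above.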

