FixMyPose: Pose Correctional Captioning and Retrieval

04/04/2021
by Hyounghun Kim, et al.

Interest in physical therapy and individual exercises such as yoga/dance has increased alongside the well-being trend. However, such exercises are hard to follow without expert guidance (which is impossible to scale for personalized feedback to every trainee remotely). Thus, automated pose correction systems are required more than ever, and we introduce a new captioning dataset named FixMyPose to address this need. We collect descriptions of correcting a "current" pose to look like a "target" pose (in both English and Hindi). The collected descriptions have interesting linguistic properties such as egocentric relations to environment objects, analogous references, etc., requiring an understanding of spatial relations and commonsense knowledge about postures. Further, to avoid ML biases, we maintain a balance across characters with diverse demographics, who perform a variety of movements in several interior environments (e.g., homes, offices). From our dataset, we introduce the pose-correctional-captioning task and its reverse, the target-pose-retrieval task. In the correctional-captioning task, models must generate descriptions of how to move from the current pose image to the target pose image, whereas in the retrieval task, models should select the correct target pose given the initial pose and the correctional description. We present strong cross-attention baseline models (uni/multimodal, RL, multilingual) and show that our baselines are competitive with other models when evaluated on other image-difference datasets. We also propose new task-specific metrics (object-match, body-part-match, direction-match) and conduct human evaluation for more reliable assessment, demonstrating a large human-model performance gap that suggests promising room for future work. To verify the sim-to-real transfer of our FixMyPose dataset, we collect a set of real images and show promising performance on them.
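The task-specific metrics (object-match, body-part-match, direction-match) reward captions that mention the right environment objects, body parts, and movement directions. Below is a minimal sketch of the kind of keyword-overlap scoring such metrics could use; it is an illustrative assumption, not the paper's implementation, and the vocabularies and the helper keyword_match are hypothetical.

    import re

    # Illustrative vocabularies only; the paper's actual body-part, object,
    # and direction lists are not reproduced here.
    BODY_PARTS = {"arm", "leg", "knee", "elbow", "hand", "foot", "hip", "shoulder", "head", "torso"}
    DIRECTIONS = {"left", "right", "up", "down", "forward", "backward", "toward", "away"}

    def keyword_match(generated, reference, vocab):
        """Fraction of vocabulary terms used in the reference correctional
        caption that also appear in the generated caption (1.0 when the
        reference uses none)."""
        gen_terms = set(re.findall(r"[a-z]+", generated.lower())) & vocab
        ref_terms = set(re.findall(r"[a-z]+", reference.lower())) & vocab
        if not ref_terms:
            return 1.0
        return len(gen_terms & ref_terms) / len(ref_terms)

    generated = "Move your left arm up and bend your right knee forward."
    reference = "Lift your left arm and straighten your right leg."

    print("body-part-match:", keyword_match(generated, reference, BODY_PARTS))  # 0.5 (misses "leg")
    print("direction-match:", keyword_match(generated, reference, DIRECTIONS))  # 1.0

Unlike n-gram metrics such as BLEU, a check of this form directly penalizes a caption that moves the wrong limb or in the wrong direction, which is the failure mode that matters for pose correction.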

