The Use of Video Captioning for Fostering Physical Activity

by Soheyla Amirian, et al.

Video captioning is one of the most challenging problems in computer vision. It combines multiple deep learning models to perform object detection, action detection, and localization over a sequence of image frames, and it must account for the order of actions in a video to generate a meaningful description of the overall event. A reliable, accurate, real-time video captioning method has many applications; this paper focuses on one: video captioning for fostering and facilitating physical activity. In broad terms, the work can be considered assistive technology. Physical inactivity appears increasingly widespread in many nations, driven largely by the convenience that technology has brought to workplaces, and the resulting sedentary lifestyle is becoming a significant public health issue. Incorporating more physical movement into daily life is therefore essential, and tracking one's daily physical activities would provide a baseline for comparison with activity on subsequent days. With this in mind, this paper proposes a video captioning framework that describes the activities in a video and estimates a person's daily physical activity level. Such a framework could help people trace their daily movements and reduce the health risks of an inactive lifestyle. The work presented here is still in its infancy; this paper outlines the initial steps of the application, and our preliminary research suggests the project has considerable merit.
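To make the "estimate a person's daily physical activity level" step concrete, here is a minimal sketch of how per-segment caption labels could be aggregated into a daily score. The action labels, MET (metabolic equivalent) values, and intensity cutoffs below are illustrative assumptions, not values taken from the paper:

```python
# Hypothetical post-processing of video captions into a daily activity
# estimate. All labels, MET values, and thresholds are illustrative.

# Approximate MET scores per recognized action (assumed mapping).
MET_BY_ACTION = {
    "sitting": 1.3,
    "standing": 1.8,
    "walking": 3.5,
    "climbing stairs": 8.0,
    "running": 9.8,
}

def daily_activity_score(segments):
    """Duration-weighted MET total for a day's captioned video segments.

    `segments` is a list of (action_label, minutes) pairs, e.g. the
    dominant action extracted from each captioned clip. Unknown actions
    default to a resting MET of 1.0.
    """
    return sum(MET_BY_ACTION.get(action, 1.0) * minutes
               for action, minutes in segments)

def classify_day(score, total_minutes):
    """Label the day by its average MET intensity (illustrative cutoffs)."""
    avg_met = score / total_minutes if total_minutes else 0.0
    if avg_met < 1.5:
        return "sedentary"
    if avg_met < 3.0:
        return "lightly active"
    return "active"

if __name__ == "__main__":
    day = [("sitting", 420), ("walking", 30), ("standing", 60)]
    score = daily_activity_score(day)
    total = sum(minutes for _, minutes in day)
    print(round(score, 1), classify_day(score, total))  # prints: 759.0 sedentary
```

A day's score can then be compared against previous days, which is the baseline-tracking use case the abstract describes.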


Improved Actor Relation Graph based Group Activity Recognition

Video understanding is to recognize and classify different actions or ac...

Captioning Near-Future Activity Sequences

Most of the existing works on human activity analysis focus on recogniti...

In-Home Daily-Life Captioning Using Radio Signals

This paper aims to caption daily life –i.e., to create a textual descrip...

Human Action Sequence Classification

This paper classifies human action sequences from videos using a machine...

An Integrated Approach for Video Captioning and Applications

Physical computing infrastructure, data gathering, and algorithms have r...

Watch-n-Patch: Unsupervised Learning of Actions and Relations

There is a large variation in the activities that humans perform in thei...

TRECVID 2019: An Evaluation Campaign to Benchmark Video Activity Detection, Video Captioning and Matching, and Video Search Retrieval

The TREC Video Retrieval Evaluation (TRECVID) 2019 was a TREC-style vide...
