Using Deep Convolutional Networks for Gesture Recognition in American Sign Language

10/18/2017
by Vivek Bheda, et al.

In the realm of multimodal communication, sign language is, and continues to be, one of the most understudied areas. In line with recent advances in the field of deep learning, neural networks have far-reaching implications and applications for sign language interpretation. In this paper, we present a method for using deep convolutional networks to classify images of both the letters and digits in American Sign Language.
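The abstract does not specify the network architecture, so as a rough illustration of the kind of pipeline it describes, the sketch below implements the core operations of a small convolutional classifier in plain numpy: one convolution with a ReLU activation, max pooling, and a dense softmax head over 26 hypothetical letter classes. All layer sizes and the single random filter are assumptions for demonstration, not the paper's actual model.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most deep learning libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling, discarding any ragged border."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.random((28, 28))          # stand-in for one grayscale ASL letter image
kernel = rng.standard_normal((3, 3))  # one (untrained) 3x3 filter

feat = np.maximum(conv2d(image, kernel), 0.0)  # ReLU feature map: 28x28 -> 26x26
pooled = max_pool(feat)                        # 26x26 -> 13x13
W = rng.standard_normal((pooled.size, 26))     # dense head: 26 letter classes
probs = softmax(pooled.ravel() @ W)            # class probabilities, summing to 1

print(feat.shape, pooled.shape, probs.shape)   # (26, 26) (13, 13) (26,)
```

In a trained model the filter and dense weights would be learned by backpropagation over labeled ASL images, and a real network would stack several such conv/pool layers before the classifier head.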


