Multi-modal Text Recognition Networks: Interactive Enhancements between Visual and Semantic Features

11/30/2021
by   Byeonghu Na, et al.

Linguistic knowledge has brought great benefits to scene text recognition by providing semantics to refine character sequences. However, since linguistic knowledge has been applied individually to the output sequence, previous methods have not fully utilized the semantics to understand visual clues for text recognition. This paper introduces a novel method, called Multi-modAl Text Recognition Network (MATRN), that enables interactions between visual and semantic features for better recognition performance. Specifically, MATRN identifies visual and semantic feature pairs and encodes spatial information into the semantic features. Based on this spatial encoding, visual and semantic features are enhanced by referring to related features in the other modality. Furthermore, MATRN stimulates the combination of semantic features into visual features by hiding visual clues related to the character during training. Our experiments demonstrate that MATRN achieves state-of-the-art performance on seven benchmarks by large margins, while naive combinations of the two modalities show only marginal improvements. Further ablation studies confirm the effectiveness of the proposed components. Our implementation will be publicly available.
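The core idea the abstract describes, each modality enhancing itself by attending to related features in the other, can be illustrated with scaled dot-product cross-attention. The following is a minimal NumPy sketch of that general mechanism; the dimensions, the sinusoidal spatial encoding, and the random masking of visual positions are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attend(queries, keys_values):
    """One modality attends to the other via scaled dot-product attention."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)   # (n_q, n_kv) similarities
    return softmax(scores) @ keys_values            # weighted sum of the other modality

rng = np.random.default_rng(0)
V = rng.standard_normal((20, 16))  # toy visual features: 20 positions, dim 16
S = rng.standard_normal((8, 16))   # toy semantic (character) features: 8 positions

# Hypothetical spatial encoding injected into the semantic features,
# standing in for the paper's spatial encoding step.
pos = np.repeat(np.sin(np.arange(8, dtype=float) / 10.0)[:, None], 16, axis=1)
S_spatial = S + pos

# Bidirectional enhancement: each modality refers to related features
# in the other modality, with a residual connection.
V_enhanced = V + cross_modal_attend(V, S_spatial)
S_enhanced = S_spatial + cross_modal_attend(S_spatial, V)

# Training-time trick sketched from the abstract: hide some visual clues
# so the model must lean on semantic features (mask ratio is an assumption).
mask = rng.random(20) < 0.1
V_masked = np.where(mask[:, None], 0.0, V)
```

Each enhanced feature set keeps its own length (20 visual positions, 8 semantic positions) while mixing in information from the other modality, which is the interaction the abstract contrasts with applying a language model only to the output sequence.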

Related research:

- Visual-Semantic Transformer for Scene Text Recognition (12/02/2021): Modeling semantic information is helpful for scene text recognition. In ...
- From Two to One: A New Scene Text Recognizer with Visual Language Modeling Network (08/22/2021): In this paper, we abandon the dominant complex language model and rethin...
- CDistNet: Perceiving Multi-Domain Character Distance for Robust Text Recognition (11/22/2021): The attention-based encoder-decoder framework is becoming popular in sce...
- An Efficient End-to-End Transformer with Progressive Tri-modal Attention for Multi-modal Emotion Recognition (09/20/2022): Recent works on multi-modal emotion recognition move towards end-to-end ...
- M3PT: A Multi-Modal Model for POI Tagging (06/16/2023): POI tagging aims to annotate a point of interest (POI) with some informa...
- Recurrent neural network transducer for Japanese and Chinese offline handwritten text recognition (06/28/2021): In this paper, we propose an RNN-Transducer model for recognizing Japane...
- Towards the Unseen: Iterative Text Recognition by Distilling from Errors (07/26/2021): Visual text recognition is undoubtedly one of the most extensively resea...
