A New Corpus for Low-Resourced Sindhi Language with Word Embeddings

by   Wazir Ali, et al.

Representing words and phrases as dense vectors of real numbers that encode semantic and syntactic properties is a vital constituent of natural language processing (NLP). The success of neural network (NN) models in NLP largely relies on such dense word representations learned from large unlabeled corpora. Sindhi, a morphologically rich language spoken by a large population in Pakistan and India, lacks the corpora that serve as a test-bed for generating word embeddings and developing language-independent NLP systems. In this paper, a large corpus of more than 61 million words is developed for the low-resourced Sindhi language for training neural word embeddings. The corpus is acquired from multiple web resources using web-scrappy. Owing to the unavailability of open-source preprocessing tools for Sindhi, preprocessing such a large corpus is a challenging problem, especially the cleaning of noisy data extracted from web resources. Therefore, a preprocessing pipeline is employed for the filtration of noisy text. Afterwards, the cleaned vocabulary is utilized for training Sindhi word embeddings with the state-of-the-art GloVe, Skip-Gram (SG), and Continuous Bag of Words (CBoW) word2vec algorithms. The intrinsic evaluation approaches of cosine similarity matrix and WordSim-353 are employed to evaluate the generated Sindhi word embeddings. Moreover, we compare the proposed word embeddings with the recently released Sindhi fastText (SdfastText) word representations. Our intrinsic evaluation results demonstrate the high quality of our generated Sindhi word embeddings using SG, CBoW, and GloVe compared to SdfastText word representations.
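As a rough illustration of the cosine-similarity-based intrinsic evaluation described in the abstract, the sketch below (assuming plain NumPy; the vocabulary and random vectors are toy placeholders, not the paper's actual trained SG/CBoW/GloVe embeddings) builds a pairwise cosine similarity matrix over a small embedding matrix and retrieves nearest neighbours for a query word:

```python
import numpy as np

def cosine_similarity_matrix(vectors):
    """Pairwise cosine similarities for an (n_words, dim) embedding matrix."""
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    unit = vectors / norms          # row-normalize so dot product = cosine
    return unit @ unit.T

def nearest_neighbours(word, words, sim_matrix, k=3):
    """Top-k most similar words to `word`, excluding the word itself."""
    i = words.index(word)
    order = np.argsort(-sim_matrix[i])  # indices sorted by descending similarity
    return [words[j] for j in order if j != i][:k]

# Toy vocabulary with random 50-dimensional vectors standing in for
# trained word embeddings.
words = ["city", "town", "river", "mountain"]
rng = np.random.default_rng(0)
vectors = rng.standard_normal((len(words), 50))

sim = cosine_similarity_matrix(vectors)
print(nearest_neighbours("city", words, sim, k=2))
```

With real embeddings, the neighbour lists produced this way can be inspected (or compared against WordSim-353 human judgments) to gauge how well the vectors capture semantic relatedness.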

