Corpus Similarity Measures Remain Robust Across Diverse Languages
This paper experiments with frequency-based corpus similarity measures across 39 languages using a register prediction task. The goal is to quantify (i) the distance between different corpora from the same language and (ii) the homogeneity of individual corpora. Both of these goals are essential for measuring how well corpus-based linguistic analysis generalizes from one dataset to another. The problem is that previous work has focused on Indo-European languages, raising the question of whether these measures are able to provide robust generalizations across diverse languages. The register prediction task evaluates competing measures by asking how well they are able to distinguish between corpora representing different contexts of production. Each experiment compares three corpora from a single language, with the same three digital registers shared across all languages: social media, web pages, and Wikipedia. Results show that measures of corpus similarity retain their validity across different language families, writing systems, and types of morphology. Further, the measures remain robust when evaluated on out-of-domain corpora, when applied to low-resource languages, and when applied to different sets of registers. These findings are significant given our need to make generalizations across the rapidly increasing number of corpora available for analysis.
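To make the two goals concrete, the sketch below shows one common style of frequency-based corpus similarity measure: the Spearman rank correlation of word frequencies over the most frequent items shared by two corpora, with corpus homogeneity estimated by comparing random halves of a single corpus. This is a minimal illustration of the general family of measures, not the specific measures compared in the paper; the function names and the choice of a 500-word frequency profile are assumptions for the example.

```python
import random
from collections import Counter
from scipy.stats import spearmanr


def spearman_similarity(tokens_a, tokens_b, n=500):
    """Frequency-based similarity between two tokenized corpora:
    Spearman correlation of frequencies over the n most frequent
    items in the combined corpora (1.0 = identical profiles)."""
    freq_a, freq_b = Counter(tokens_a), Counter(tokens_b)
    vocab = [w for w, _ in (freq_a + freq_b).most_common(n)]
    a = [freq_a[w] for w in vocab]
    b = [freq_b[w] for w in vocab]
    rho, _ = spearmanr(a, b)
    return rho


def homogeneity(tokens, n=500, seed=0):
    """Rough homogeneity estimate: self-similarity of two random
    halves of the same corpus, using the measure above."""
    shuffled = tokens[:]
    random.Random(seed).shuffle(shuffled)
    half = len(shuffled) // 2
    return spearman_similarity(shuffled[:half], shuffled[half:], n=n)


# Toy usage with two tiny "corpora" from different registers
social = "lol that is so cool lol see you later ok".split()
wiki = "the city is located in the northern region of the country".split()
print(spearman_similarity(social, wiki, n=20))
print(homogeneity(wiki, n=20))
```

In a register prediction setup of the kind described above, such pairwise similarity scores would be compared across corpora from the same language (e.g., social media vs. Wikipedia) to test whether the measure separates registers more sharply than it separates samples drawn from a single register.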