
NLP-based Typo Correction Model for Croatian Language

International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2022

Abstract
Spelling correction plays an important role in complex NLP-based applications and pipelines. Many existing models and techniques are developed to support English, as it is the richest language in terms of resources available for training such models. Fortunately, a few of these methodologies can be adapted to other, low-resource languages. In this paper, we explore the power of the Neuspell Toolkit for training an original spelling correction model for the Croatian language. The toolkit itself comprises ten different models, but for the purposes of our work we leverage pre-trained transformer networks due to their experimentally proven spelling correction efficiency in English. The comparison is performed over different pre-trained Subword BERT architectures, including BERT Multilingual, DistilBERT, and XLM-RoBERTa, chosen for their subword representation support for Croatian. Furthermore, training is done as a sequence labeling task on a newly created parallel Croatian dataset in which noisy examples are synthetically generated and misspelled words are labeled with their correct versions. Finally, the model is tested in vivo as part of our originally developed speech-to-text model for the Croatian language.
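To make the dataset-construction step more concrete, below is a minimal, hypothetical Python sketch of the kind of synthetic noise injection the abstract describes: clean Croatian sentences are turned into (noisy, correct) token pairs, where each possibly misspelled token is labeled with its clean form for a sequence-labeling setup. The specific edit operations, noise rate, alphabet, and function names (corrupt_word, make_parallel_example) are illustrative assumptions, not the authors' exact procedure or the Neuspell Toolkit's API.

```python
# Hypothetical sketch: synthetic typo generation for a parallel
# (noisy -> correct) Croatian spelling-correction dataset.
# The edit operations and probabilities are assumptions for illustration.
import random

CROATIAN_ALPHABET = "abcčćdđefghijklmnoprsštuvzž"

def corrupt_word(word: str, rng: random.Random) -> str:
    """Apply one random character-level edit: substitute, delete, insert, or transpose."""
    if len(word) < 2:
        return word
    op = rng.choice(["substitute", "delete", "insert", "transpose"])
    if op == "transpose":
        i = rng.randrange(len(word) - 1)
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    i = rng.randrange(len(word))
    if op == "substitute":
        return word[:i] + rng.choice(CROATIAN_ALPHABET) + word[i + 1:]
    if op == "delete":
        return word[:i] + word[i + 1:]
    return word[:i] + rng.choice(CROATIAN_ALPHABET) + word[i:]  # insert

def make_parallel_example(sentence: str, noise_rate: float = 0.2, seed: int = 0):
    """Return (noisy_tokens, correct_tokens); the labels are simply the clean words."""
    rng = random.Random(seed)
    correct = sentence.split()
    noisy = [corrupt_word(w, rng) if rng.random() < noise_rate else w for w in correct]
    return noisy, correct

if __name__ == "__main__":
    noisy, correct = make_parallel_example("dobar dan kako ste danas", noise_rate=0.5)
    for n, c in zip(noisy, correct):
        print(f"{n}\t{c}")
```

Pairs produced this way can then be fed to a subword-tokenized transformer as a sequence labeling task, where the model predicts the correct word for each (possibly corrupted) input token.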
Key words
typo correction model, language, NLP-based