Javanese part-of-speech tagging using cross-lingual transfer learning

Gabriel Enrique, Ika Alfina, Evi Yulianti

Research output: Contribution to journal › Article › peer-review

Abstract

Large, publicly available datasets for part-of-speech (POS) tagging do not exist for every language. One such language is Javanese, a local language of Indonesia that is considered a low-resource language. This research examines the effectiveness of cross-lingual transfer learning for Javanese POS tagging by fine-tuning state-of-the-art transformer-based models (IndoBERT, mBERT, and XLM-RoBERTa) first on a higher-resource source language (Indonesian, English, Uyghur, Latin, or Hungarian), and then fine-tuning them again on Javanese as the target language. We found that models using cross-lingual transfer learning improve accuracy over models without it: by 14.3%–15.3% compared with long short-term memory (LSTM)-based models, and by 0.21%–3.95% compared with transformer-based models. Our results show that the most accurate Javanese POS tagger is XLM-RoBERTa fine-tuned in two stages (first on Indonesian as the source language, then on Javanese as the target language), achieving an accuracy of 87.65%.
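To illustrate the two-stage fine-tuning procedure the abstract describes, here is a minimal sketch using the Hugging Face Transformers library. The checkpoint name, the toy three-tag tag set, the example sentences, and the hyperparameters are illustrative assumptions, not details taken from the paper; the essential idea is that the weights fine-tuned on the source language (stage 1) are fine-tuned again on the target language (stage 2).

```python
# Sketch of two-stage cross-lingual transfer for POS tagging.
# Checkpoint, tag set, sentences, and hyperparameters are placeholders.
from datasets import Dataset
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

MODEL = "xlm-roberta-base"           # assumed checkpoint; the paper also uses mBERT, IndoBERT
TAGS = ["NOUN", "VERB", "PRON"]      # toy tag set; a real setup would use a full POS tag set

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForTokenClassification.from_pretrained(MODEL, num_labels=len(TAGS))

def encode(batch):
    # Align word-level POS tags with subword tokens; special tokens and
    # padding get label -100 so the loss ignores them.
    enc = tokenizer(batch["tokens"], is_split_into_words=True, truncation=True)
    enc["labels"] = [
        [-100 if w is None else tags[w] for w in enc.word_ids(batch_index=i)]
        for i, tags in enumerate(batch["tags"])
    ]
    return enc

# Toy source-language (Indonesian) and target-language (Javanese) datasets.
source = Dataset.from_dict({"tokens": [["Saya", "makan", "nasi"]],
                            "tags":   [[2, 1, 0]]}).map(encode, batched=True)
target = Dataset.from_dict({"tokens": [["Aku", "mangan", "sega"]],
                            "tags":   [[2, 1, 0]]}).map(encode, batched=True)

def finetune(model, dataset, output_dir):
    # One fine-tuning stage; the same routine is reused for both stages.
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=1,
                             per_device_train_batch_size=2,
                             learning_rate=2e-5, report_to="none")
    trainer = Trainer(model=model, args=args, train_dataset=dataset,
                      data_collator=DataCollatorForTokenClassification(tokenizer))
    trainer.train()
    return trainer.model

# Stage 1: fine-tune on the high-resource source language (Indonesian).
model = finetune(model, source, "out/stage1-id")
# Stage 2: continue fine-tuning the same weights on the target language (Javanese).
model = finetune(model, target, "out/stage2-jv")
```

Swapping the checkpoint name would apply the same routine to mBERT or IndoBERT; the single-stage baselines the abstract compares against would skip stage 1 and fine-tune directly on Javanese.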

Original language: English
Pages (from-to): 3498-3509
Number of pages: 12
Journal: IAES International Journal of Artificial Intelligence
Volume: 13
Issue number: 3
DOIs
Publication status: Published - Sept 2024

Keywords

  • Cross-lingual transfer learning
  • Deep learning
  • Low-resource language
  • Part-of-speech tagging
  • Transformer

