TY - JOUR
T1 - Javanese part-of-speech tagging using cross-lingual transfer learning
AU - Enrique, Gabriel
AU - Alfina, Ika
AU - Yulianti, Evi
N1 - Publisher Copyright:
© 2024, Institute of Advanced Engineering and Science. All rights reserved.
PY - 2024/9
Y1 - 2024/9
N2 - Large publicly available datasets for part-of-speech (POS) tagging do not exist for all languages. One such language is Javanese, a local language of Indonesia that is considered a low-resource language. This research examines the effectiveness of cross-lingual transfer learning for Javanese POS tagging by fine-tuning state-of-the-art transformer-based models (IndoBERT, mBERT, and XLM-RoBERTa) on various higher-resource source languages (Indonesian, English, Uyghur, Latin, and Hungarian), and then fine-tuning them again on Javanese as the target language. We found that models using cross-lingual transfer learning improve accuracy over models without it by 14.3%–15.3% relative to long short-term memory (LSTM)-based models, and by 0.21%–3.95% relative to transformer-based models. Our results show that the most accurate Javanese POS tagger is XLM-RoBERTa fine-tuned in two stages (first on Indonesian as the source language, then on Javanese as the target language), achieving an accuracy of 87.65%.
AB - Large publicly available datasets for part-of-speech (POS) tagging do not exist for all languages. One such language is Javanese, a local language of Indonesia that is considered a low-resource language. This research examines the effectiveness of cross-lingual transfer learning for Javanese POS tagging by fine-tuning state-of-the-art transformer-based models (IndoBERT, mBERT, and XLM-RoBERTa) on various higher-resource source languages (Indonesian, English, Uyghur, Latin, and Hungarian), and then fine-tuning them again on Javanese as the target language. We found that models using cross-lingual transfer learning improve accuracy over models without it by 14.3%–15.3% relative to long short-term memory (LSTM)-based models, and by 0.21%–3.95% relative to transformer-based models. Our results show that the most accurate Javanese POS tagger is XLM-RoBERTa fine-tuned in two stages (first on Indonesian as the source language, then on Javanese as the target language), achieving an accuracy of 87.65%.
KW - Cross-lingual transfer learning
KW - Deep learning
KW - Low-resource language
KW - Part-of-speech tagging
KW - Transformer
UR - http://www.scopus.com/inward/record.url?scp=85200057183&partnerID=8YFLogxK
U2 - 10.11591/ijai.v13.i3.pp3498-3509
DO - 10.11591/ijai.v13.i3.pp3498-3509
M3 - Article
AN - SCOPUS:85200057183
SN - 2089-4872
VL - 13
SP - 3498
EP - 3509
JO - IAES International Journal of Artificial Intelligence
JF - IAES International Journal of Artificial Intelligence
IS - 3
ER -