Controllable Abstractive Summarization Using Multilingual Pretrained Language Model

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Citation (Scopus)

Abstract

By leveraging a multilingual language model, we show that CTRLSum [1], an abstractive summarization approach that can be controlled by keywords, improves over baseline summarization systems in four languages: English, Indonesian, Spanish, and French, by 1.57 points in average ROUGE-1, with the Indonesian model achieving state-of-the-art results. We further provide a novel analysis of the importance of the keywords fed to CTRLSum, which (1) shows hypothetical upper-bound results that outperform the state of the art in all four languages by a large margin and (2) suggests a natural direction for future work: improving CTRLSum by improving the keyword prediction model.
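As context for the abstract above, CTRLSum-style control works by concatenating the control keywords with the source document and feeding the result to an ordinary sequence-to-sequence summarizer. A minimal sketch of that input construction is shown below; the separator tokens (`" | "` between keywords, `" => "` before the document) are illustrative assumptions and must match whatever format the model was actually fine-tuned on.

```python
def build_controlled_input(keywords, document, kw_sep=" | ", doc_sep=" => "):
    """Prepend control keywords to a source document, CTRLSum-style.

    The separator strings here are assumptions for illustration; a real
    system must use the exact tokens seen during fine-tuning.
    """
    return kw_sep.join(keywords) + doc_sep + document


# Example: condition the summarizer on two keywords.
controlled = build_controlled_input(
    ["ROUGE", "multilingual"],
    "We evaluate abstractive summarization in four languages ...",
)
print(controlled)
# ROUGE | multilingual => We evaluate abstractive summarization in four languages ...
```

The oracle ("hypothetical upper-bound") setting the abstract refers to corresponds to filling `keywords` with terms extracted from the reference summary instead of predicted ones, which isolates the quality of the keyword prediction model from the summarizer itself.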

Original language: English
Title of host publication: 2022 10th International Conference on Information and Communication Technology, ICoICT 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 228-233
Number of pages: 6
ISBN (Electronic): 9781665481656
DOIs
Publication status: Published - 2022
Event: 10th International Conference on Information and Communication Technology, ICoICT 2022 - Virtual, Online, Indonesia
Duration: 2 Aug 2022 to 3 Aug 2022

Publication series

Name: 2022 10th International Conference on Information and Communication Technology, ICoICT 2022

Conference

Conference: 10th International Conference on Information and Communication Technology, ICoICT 2022
Country/Territory: Indonesia
City: Virtual, Online
Period: 2/08/22 to 3/08/22

Keywords

  • controllable abstractive summarization
  • keyword
  • multilingual
