Multiscale image aggregation for dental radiograph segmentation

Martin Leonard Tangel, Chastine Fatichah, Muhammad Rahmat Widyanto, Fangyan Dong, Kaoru Hirota

Research output: Contribution to journal · Article · peer-review

3 Citations (Scopus)

Abstract

Multiscale Image Aggregation (MIA) is proposed for dental radiograph segmentation, where a grayscale image segmentation method using neighborhood pixel evaluation and fuzzy inference is applied to the original image and three scaled-down versions of it. The average segmentation result obtained with the proposed method is more accurate than that obtained with the Otsu method, and it is robust against the inconsistent contrast, uneven exposure, and pixel noise of radiographs. An experiment is performed on 122 dental radiographs, covering periapical and bitewing radiographs from the Faculty of Dentistry, University of Indonesia, which represent the real radiographs used in dentistry and forensics; an average segmentation accuracy of 77.7% is obtained by comparing each automatic segmentation result with the corresponding manual segmentation result as a reference. This proposal is a crucial part of our automatic dental-based identification system, which is under development. Since manual dental-based identification is widely used for personal identification, an accurate automatic dental-based identification system can assist forensic experts in identifying large numbers of victims, making the identification of victims of disasters such as the Indian Ocean Tsunami and the Tohoku Earthquake manageable.
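The abstract does not give the details of the fuzzy-inference segmentation step, so the following is only an illustrative sketch of the multiscale-aggregation idea: segment the image at several scales, upsample each binary mask back to the original resolution, and aggregate by majority vote. As a placeholder for the paper's neighborhood-pixel/fuzzy-inference segmenter, Otsu thresholding (the paper's baseline) is used at each scale; the scale factors `(1, 2, 4, 8)` and the assumption that image dimensions are divisible by the largest factor are choices made here, not taken from the paper.

```python
import numpy as np

def otsu_threshold(img):
    """Classic Otsu threshold via exhaustive between-class variance search."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = img.size
    sum_all = float(np.dot(np.arange(256), hist))
    sum_b, w_b, best_t, best_var = 0.0, 0, 0, 0.0
    for t in range(256):
        w_b += hist[t]                    # background weight
        if w_b == 0:
            continue
        w_f = total - w_b                 # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                 # background mean
        m_f = (sum_all - sum_b) / w_f     # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def downscale(img, factor):
    """Block-average downsampling (assumes dims divisible by factor)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def multiscale_segment(img, factors=(1, 2, 4, 8)):
    """Segment at each scale, upsample the masks, aggregate by majority vote."""
    masks = []
    for f in factors:
        small = downscale(img, f) if f > 1 else img.astype(float)
        mask = (small > otsu_threshold(small)).astype(float)
        # nearest-neighbor upsampling back to full resolution
        masks.append(np.kron(mask, np.ones((f, f))))
    return np.mean(masks, axis=0) >= 0.5
```

Averaging masks from coarser scales suppresses isolated noisy pixels (which vanish after block averaging), which is one plausible reading of the robustness-to-noise claim above.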

Original language: English
Pages (from-to): 388-396
Number of pages: 9
Journal: Journal of Advanced Computational Intelligence and Intelligent Informatics
Volume: 16
Issue number: 3
Publication status: Published - 1 May 2012

Keywords

  • Dental radiograph
  • Fuzzy inference
  • Image segmentation
  • Personal identification

