Adaptive Update in Deep Learning Algorithms for LiDAR Data Semantic Segmentation

Nur Hamid, Ari Wibisono, Ahmad Gamal, Ronni Ardhianto, Wisnu Jatmiko

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Citations (Scopus)

Abstract

LiDAR data is widely replacing 2-dimensional data for geographic data representation because of the richness of the information it carries. One LiDAR data processing task is semantic segmentation, for which deep learning models have been developed. These algorithms express the distance between points with a Euclidean representation, yet the random distribution of LiDAR points makes this representation a poor fit. This study therefore proposes a non-Euclidean distance representation that is adaptively updated using the covariance of the points. On the authors' dataset, the proposed method reaches an accuracy of 75.55%, outperforming the PointNet baseline at 65.08% and Dynamic Graph CNN at 72.56%. The improvement arises from multiplication by the inverse covariance of the point cloud, which increases each point's similarity to its class.
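
The abstract does not give the exact form of the adaptive update, but weighting pairwise point differences by the inverse covariance of the cloud is the classic Mahalanobis construction, so the sketch below illustrates that idea under this assumption. The function name mahalanobis_pairwise and the regularizer eps are illustrative, not taken from the paper.

```python
import numpy as np

def mahalanobis_pairwise(points, eps=1e-6):
    """Pairwise non-Euclidean (Mahalanobis-style) distances for a point cloud.

    points: (N, 3) array of LiDAR point coordinates.
    Scaling differences by the inverse covariance adapts the metric to the
    cloud's spread instead of assuming the isotropic geometry of Euclidean
    distance; this is an assumed reading of the paper's covariance update.
    """
    cov = np.cov(points, rowvar=False)              # (3, 3) covariance of the cloud
    cov_inv = np.linalg.inv(cov + eps * np.eye(3))  # regularize, then invert
    diff = points[:, None, :] - points[None, :, :]  # (N, N, 3) pairwise differences
    # d(i, j)^2 = (x_i - x_j)^T Sigma^{-1} (x_i - x_j)
    d2 = np.einsum('ijk,kl,ijl->ij', diff, cov_inv, diff)
    return np.sqrt(np.maximum(d2, 0.0))

# Example: pairwise distances within a small synthetic cloud
cloud = np.random.default_rng(0).normal(size=(5, 3))
print(mahalanobis_pairwise(cloud))
```

In a DGCNN-style pipeline, such a metric could replace the Euclidean one used to build the k-nearest-neighbor graph, which is where the abstract locates the improvement.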

Original language: English
Title of host publication: 2020 IEEE Region 10 Symposium, TENSYMP 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1038-1041
Number of pages: 4
ISBN (Electronic): 9781728173665
DOIs
Publication status: Published - 5 Jun 2020
Event: 2020 IEEE Region 10 Symposium, TENSYMP 2020 - Virtual, Dhaka, Bangladesh
Duration: 5 Jun 2020 - 7 Jun 2020

Publication series

Name: 2020 IEEE Region 10 Symposium, TENSYMP 2020

Conference

Conference: 2020 IEEE Region 10 Symposium, TENSYMP 2020
Country/Territory: Bangladesh
City: Virtual, Dhaka
Period: 5/06/20 - 7/06/20

Keywords

  • deep learning
  • graph convolutional network
  • land cover semantic segmentation
  • LiDAR data
  • non-Euclidean
