TY - JOUR
T1 - A Comparison of CNN-based Image Feature Extractors for Weld Defects Classification
AU - Purnomo, Tito Wahyu
AU - Ramadhany, Harun Al Rasyid
AU - Jati, Hapsara Hadi Carita
AU - Handoko, Djati
PY - 2024
Y1 - 2024
AB - Classification of the types of weld defects is one of the stages of evaluating radiographic images, which is an essential step in controlling the quality of welded joints in materials. By automating weld defect classification with deep learning and CNN architectures, it is possible to overcome the limitations of visual or manual evaluation of radiographic images. Good accuracy in weld defect classification models requires sufficient datasets. In practice, however, the radiographic image datasets accessible to the public are limited and imbalanced between classes. Consequently, simple image cropping and augmentation techniques are applied during the data preparation stage. To construct a weld defect classification model, we propose a transfer learning approach that employs pre-trained CNN architectures as feature extractors, namely DenseNet201, InceptionV3, MobileNetV2, NASNetMobile, ResNet50V2, VGG16, VGG19, and Xception, each linked to a simple classifier based on a multilayer perceptron. The test results indicate that the three best classification models were obtained with the DenseNet201 feature extractor, which achieved a test accuracy of 100%, followed by ResNet50V2 and InceptionV3 with an accuracy of 99.17%. These outcomes surpass state-of-the-art classification models handling at most six defect classes. The research findings may assist radiography experts in evaluating radiographic images more accurately and efficiently.
KW - CNN
KW - Transfer Learning
KW - Feature Extraction
KW - Radiographic Image
KW - Weld Defects
UR - https://jurnal.uns.ac.id/ijap/article/view/72509
DO - 10.13057/ijap.v14i1.72509
M3 - Article
SN - 2089-0133
VL - 14
SP - 190
EP - 201
JO - Indonesian Journal of Applied Physics
JF - Indonesian Journal of Applied Physics
IS - 1
ER -