TY - JOUR
T1 - Face Spoofing Detection using Inception-v3 on RGB Modal and Depth Modal
AU - Murni, Aniati
PY - 2023/3/1
Y1 - 2023/3/1
N2 - Face spoofing can cause inaccurate face verification results in a face recognition system. Deep learning has been widely used to solve face spoofing problems. In face spoofing detection, it is not necessary to use all of the network's layers to represent the difference between real and spoof features. This study detects face spoofing by cutting the Inception-v3 network and utilizing RGB, depth, and fusion approaches. The results show that face spoofing detection performs well with the RGB and fusion models. Both models outperform the depth model because the RGB modality can represent the difference between real and spoof features and dominates the fusion model. The RGB model achieves accuracy, precision, recall, F1-score, and AUC values of 98.78%, 99.22%, 99.31%, 99.27%, and 0.9997, respectively, while the fusion model achieves 98.5%, 99.31%, 98.88%, 99.09%, and 0.9995. Our proposed method, which cuts the Inception-v3 network at the mixed6 layer, outperforms the previous study with accuracy of up to 100% on the MSU MFSD benchmark dataset.
AB - Face spoofing can cause inaccurate face verification results in a face recognition system. Deep learning has been widely used to solve face spoofing problems. In face spoofing detection, it is not necessary to use all of the network's layers to represent the difference between real and spoof features. This study detects face spoofing by cutting the Inception-v3 network and utilizing RGB, depth, and fusion approaches. The results show that face spoofing detection performs well with the RGB and fusion models. Both models outperform the depth model because the RGB modality can represent the difference between real and spoof features and dominates the fusion model. The RGB model achieves accuracy, precision, recall, F1-score, and AUC values of 98.78%, 99.22%, 99.31%, 99.27%, and 0.9997, respectively, while the fusion model achieves 98.5%, 99.31%, 98.88%, 99.09%, and 0.9995. Our proposed method, which cuts the Inception-v3 network at the mixed6 layer, outperforms the previous study with accuracy of up to 100% on the MSU MFSD benchmark dataset.
UR - https://jiki.cs.ui.ac.id/index.php/jiki/article/view/1100
DO - 10.21609/jiki.v16i1.1100
M3 - Article
SN - 2502-9275
VL - 16
SP - 47
EP - 57
JO - Jurnal Ilmu Komputer dan Informasi
JF - Jurnal Ilmu Komputer dan Informasi
IS - 1
ER -