TY - JOUR
T1 - Deep Neural Network for Structured Data - A Case Study of Mortality Rate Prediction Caused by Air Quality
AU - Maharani, Dian
AU - Murfi, Hendri
N1 - Publisher Copyright:
© Published under licence by IOP Publishing Ltd.
PY - 2019/5/17
Y1 - 2019/5/17
N2 - The mortality rate is one of the important aspects in determining insurance premiums. The mortality rate is influenced by several factors, such as air quality. Therefore, we consider a Deep Neural Network (DNN) model for predicting the air quality-based mortality rate. In this paper, we examine two DNN architectures. The first architecture consists of five layers: an input layer, a hidden layer, two hidden dropout layers, and an output layer. The second architecture consists of four layers: an input layer, a hidden layer, a hidden dropout layer, and an output layer. We optimize the dropout rates and activation functions to obtain the optimal accuracies. Our simulations show that the first DNN architecture produces slightly better performance. This architecture uses ReLU as the activation function and applies a 40% dropout rate to both dropout hidden layers. It also gives slightly better accuracy than a standard one-hidden-layer neural network.
AB - The mortality rate is one of the important aspects in determining insurance premiums. The mortality rate is influenced by several factors, such as air quality. Therefore, we consider a Deep Neural Network (DNN) model for predicting the air quality-based mortality rate. In this paper, we examine two DNN architectures. The first architecture consists of five layers: an input layer, a hidden layer, two hidden dropout layers, and an output layer. The second architecture consists of four layers: an input layer, a hidden layer, a hidden dropout layer, and an output layer. We optimize the dropout rates and activation functions to obtain the optimal accuracies. Our simulations show that the first DNN architecture produces slightly better performance. This architecture uses ReLU as the activation function and applies a 40% dropout rate to both dropout hidden layers. It also gives slightly better accuracy than a standard one-hidden-layer neural network.
UR - http://www.scopus.com/inward/record.url?scp=85066296727&partnerID=8YFLogxK
U2 - 10.1088/1742-6596/1192/1/012010
DO - 10.1088/1742-6596/1192/1/012010
M3 - Conference article
AN - SCOPUS:85066296727
SN - 1742-6588
VL - 1192
JO - Journal of Physics: Conference Series
JF - Journal of Physics: Conference Series
IS - 1
M1 - 012010
T2 - 2nd International Conference on Data and Information Science, ICoDIS 2018
Y2 - 15 November 2018 through 16 November 2018
ER -
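
Note on the abstract above: the better-performing architecture is only described at a high level (input layer, a hidden layer, two hidden dropout layers, an output layer, ReLU activation, 40% dropout rate). The following is a minimal Python/Keras sketch of one possible realization, not the authors' implementation. The layer widths, input dimension, optimizer, loss, and the reading of "hidden dropout layer" as a dense hidden layer followed by dropout are assumptions made for illustration.

# Minimal sketch (an assumption, not the authors' code) of the first DNN
# architecture from the abstract: input layer, a hidden layer, two hidden
# layers with 40% dropout, and an output layer, using ReLU activations.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 8   # hypothetical number of air-quality predictors

model = keras.Sequential([
    keras.Input(shape=(n_features,)),          # input layer
    layers.Dense(64, activation="relu"),       # hidden layer (width assumed)
    layers.Dense(64, activation="relu"),       # first "hidden dropout layer" ...
    layers.Dropout(0.4),                       # ... with a 40% dropout rate
    layers.Dense(64, activation="relu"),       # second "hidden dropout layer" ...
    layers.Dropout(0.4),                       # ... with a 40% dropout rate
    layers.Dense(1),                           # output layer: predicted mortality rate
])

model.compile(optimizer="adam", loss="mse")

# Toy data, only to show the training call; real inputs would be
# air-quality features paired with observed mortality rates.
X = np.random.rand(128, n_features).astype("float32")
y = np.random.rand(128, 1).astype("float32")
model.fit(X, y, epochs=10, batch_size=16, verbose=0)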