TY - GEN
T1 - Differentially private optimization algorithms for deep neural networks
AU - Gylberth, Roan
AU - Adnan, Risman
AU - Yazid, Setiadi
AU - Basaruddin, T.
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/7/2
Y1 - 2017/7/2
N2 - Deep neural network based models have shown excellent ability in solving complex learning tasks in computer vision, speech recognition, and natural language processing. Deep neural networks learn data representations by solving a specific learning task from the input data. Several optimization algorithms, such as SGD, Momentum, Nesterov, RMSProp, and Adam, are commonly used to minimize the loss function of deep neural network models. However, a trained model may leak information about its training data. To mitigate this leakage, differentially private optimization algorithms can be used to train the neural network model. In this paper, differentially private Momentum, Nesterov, RMSProp, and Adam algorithms were developed and used to train deep neural network models such as DNNs and CNNs. It was shown that these differentially private optimization algorithms can perform better than differentially private SGD, yielding higher model accuracy and faster convergence.
AB - Deep neural network based models have shown excellent ability in solving complex learning tasks in computer vision, speech recognition, and natural language processing. Deep neural networks learn data representations by solving a specific learning task from the input data. Several optimization algorithms, such as SGD, Momentum, Nesterov, RMSProp, and Adam, are commonly used to minimize the loss function of deep neural network models. However, a trained model may leak information about its training data. To mitigate this leakage, differentially private optimization algorithms can be used to train the neural network model. In this paper, differentially private Momentum, Nesterov, RMSProp, and Adam algorithms were developed and used to train deep neural network models such as DNNs and CNNs. It was shown that these differentially private optimization algorithms can perform better than differentially private SGD, yielding higher model accuracy and faster convergence.
UR - http://www.scopus.com/inward/record.url?scp=85051116199&partnerID=8YFLogxK
U2 - 10.1109/ICACSIS.2017.8355063
DO - 10.1109/ICACSIS.2017.8355063
M3 - Conference contribution
AN - SCOPUS:85051116199
T3 - 2017 International Conference on Advanced Computer Science and Information Systems, ICACSIS 2017
SP - 387
EP - 393
BT - 2017 International Conference on Advanced Computer Science and Information Systems, ICACSIS 2017
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 9th International Conference on Advanced Computer Science and Information Systems, ICACSIS 2017
Y2 - 28 October 2017 through 29 October 2017
ER -