Differentially private optimization algorithms for deep neural networks

Roan Gylberth, Risman Adnan, Setiadi Yazid, T. Basaruddin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

12 Citations (Scopus)

Abstract

Deep neural network based models have shown excellent ability in solving complex learning tasks in computer vision, speech recognition, and natural language processing. A deep neural network learns data representations by solving a specific learning task from the input data. Optimization algorithms such as SGD, Momentum, Nesterov, RMSProp, and Adam are commonly used to minimize the loss function of a deep neural network model. At some point, the model may leak information about its training data. To mitigate this leakage, differentially private optimization algorithms can be used to train the neural network model. In this paper, differentially private Momentum, Nesterov, RMSProp, and Adam algorithms were developed and used to train deep neural network models such as DNNs and CNNs. It was shown that these differentially private optimization algorithms can outperform differentially private SGD, yielding higher model accuracy and faster convergence.
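The paper's exact algorithms are not reproduced in this abstract, but differentially private optimizers of this family typically build on the same gradient-perturbation mechanism as DP-SGD: clip each per-example gradient to a fixed L2 norm, average, and add calibrated Gaussian noise before the parameter update. A minimal sketch of one such step is below; the function name `dp_sgd_step`, the toy regression data, and the `clip_norm`/`noise_mult` values are illustrative assumptions, and privacy accounting (tracking ε, δ across steps) is omitted.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_mult=1.1, rng=None):
    """One differentially private gradient step (illustrative sketch):
    clip each per-example gradient to L2 norm `clip_norm`, average,
    add Gaussian noise scaled by `noise_mult * clip_norm`, then descend.
    The same noised gradient could instead feed a Momentum, RMSProp,
    or Adam update rule."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Per-example clipping bounds each example's influence on the update.
    clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm)
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping norm and batch size.
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return params - lr * (avg + noise)

# Toy usage: privately fit w in y = 2x with squared loss.
rng = np.random.default_rng(42)
X = rng.normal(size=32)
y = 2.0 * X
w = np.zeros(1)
for _ in range(300):
    # Per-example gradient of (w*x - y)^2 with respect to w.
    grads = [np.array([2.0 * (w[0] * x - t) * x]) for x, t in zip(X, y)]
    w = dp_sgd_step(w, grads, rng=rng)
```

Because each example's gradient is clipped before averaging, no single training point can dominate the update, which is what makes the added noise sufficient for a differential privacy guarantee; the learned `w` still lands close to the true value 2 on this toy problem.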

Original language: English
Title of host publication: 2017 International Conference on Advanced Computer Science and Information Systems, ICACSIS 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 387-393
Number of pages: 7
ISBN (Electronic): 9781538631720
DOIs
Publication status: Published - 2 Jul 2017
Event: 9th International Conference on Advanced Computer Science and Information Systems, ICACSIS 2017 - Jakarta, Indonesia
Duration: 28 Oct 2017 - 29 Oct 2017

Publication series

Name: 2017 International Conference on Advanced Computer Science and Information Systems, ICACSIS 2017
Volume: 2018-January

Conference

Conference: 9th International Conference on Advanced Computer Science and Information Systems, ICACSIS 2017
Country/Territory: Indonesia
City: Jakarta
Period: 28/10/17 - 29/10/17
