Reducing Adversarial Vulnerability through Adaptive Training Batch Size

Research output: Contribution to journal › Article › peer-review

Abstract

Neural networks generalize well to the data distribution, to the extent that they can even fit randomly labeled data, yet they are also extremely sensitive to adversarial examples. Batch Normalization (BatchNorm), a very common component of deep learning architectures, has been found to increase adversarial vulnerability. Fixup Initialization (Fixup Init) has been shown to be an alternative to BatchNorm that considerably strengthens networks against adversarial examples. This robustness can be improved further by training with a smaller batch size, but at the cost of a significant increase in training time (up to ten times longer when reducing the batch size from the default 128 to 8 for ResNet-56). In this paper, we propose a workaround to this problem: start training with a small batch size and gradually increase it during training. We empirically show that our proposal still improves the adversarial robustness (by up to 5.73%) of ResNet-56 with Fixup Init and the default batch size of 128, while keeping the training time considerably shorter (only 4 times longer instead of 10 times).
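The core idea described in the abstract is to begin training with a small batch size and enlarge it as training progresses. Below is a minimal sketch of such a schedule; the specific epochs, batch sizes, model, and data are illustrative placeholders and not the paper's exact setup.

```python
# Minimal sketch of an adaptive training batch size schedule.
# The schedule, model, and dataset here are assumptions for illustration;
# the paper uses ResNet-56 with Fixup Init on real image data.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins for the real dataset and model.
data = TensorDataset(torch.randn(1024, 3, 32, 32), torch.randint(0, 10, (1024,)))
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# Hypothetical schedule: start small (8) and grow toward the default 128.
batch_schedule = {0: 8, 30: 32, 60: 64, 90: 128}

for epoch in range(100):
    # Rebuild the DataLoader whenever the schedule increases the batch size.
    if epoch in batch_schedule:
        loader = DataLoader(data, batch_size=batch_schedule[epoch], shuffle=True)
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```

Starting with small batches aims to retain their robustness benefit early in training, while the later, larger batches recover most of the training-time advantage of the default batch size.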
Original language: English
Journal: Jurnal Ilmu Komputer dan Informasi
Volume: 14
Issue number: 1
DOIs
Publication status: Published - 28 Feb 2021
