TY - GEN
T1 - Variational Contrastive Log Ratio Upper Bound of Mutual Information for Training Generative Models
AU - Sinaga, Marshal Arijona
AU - Alhamidi, Machmud Roby
AU - Rachmadi, Muhammad Febrian
AU - Jatmiko, Wisnu
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - Theoretically, a generative adversarial network (GAN) minimizes the Jensen-Shannon divergence between the real data distribution and the generated data distribution. This divergence is another form of the mutual information between a mixture distribution and a binary distribution, which implies that we can build a similar generative model by optimizing mutual information instead. This research proposes a variational contrastive log-ratio upper bound (vCLUB) mutual information estimator on the mixture distribution, along with an optimization algorithm to train two generative models: the vCLUB-sampling generative network (vCLUB-sampling GN) and the vCLUB-non-sampling generative network (vCLUB-non-sampling GN). The results show that vCLUB-sampling GN outperforms both GAN and vCLUB-non-sampling GN on the MNIST dataset and achieves competitive results with GAN on the CIFAR-10 dataset, whereas GAN outperforms vCLUB-non-sampling GN on both datasets.
AB - Theoretically, a generative adversarial network (GAN) minimizes the Jensen-Shannon divergence between the real data distribution and the generated data distribution. This divergence is another form of the mutual information between a mixture distribution and a binary distribution, which implies that we can build a similar generative model by optimizing mutual information instead. This research proposes a variational contrastive log-ratio upper bound (vCLUB) mutual information estimator on the mixture distribution, along with an optimization algorithm to train two generative models: the vCLUB-sampling generative network (vCLUB-sampling GN) and the vCLUB-non-sampling generative network (vCLUB-non-sampling GN). The results show that vCLUB-sampling GN outperforms both GAN and vCLUB-non-sampling GN on the MNIST dataset and achieves competitive results with GAN on the CIFAR-10 dataset, whereas GAN outperforms vCLUB-non-sampling GN on both datasets.
KW - generative model
KW - mutual information
KW - neural network
KW - variational upper bound minimization
UR - http://www.scopus.com/inward/record.url?scp=85124358084&partnerID=8YFLogxK
U2 - 10.1109/IWBIS53353.2021.9631869
DO - 10.1109/IWBIS53353.2021.9631869
M3 - Conference contribution
AN - SCOPUS:85124358084
T3 - Proceedings - IWBIS 2021: 6th International Workshop on Big Data and Information Security
SP - 9
EP - 16
BT - Proceedings - IWBIS 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 6th International Workshop on Big Data and Information Security, IWBIS 2021
Y2 - 23 October 2021 through 26 October 2021
ER -