Recent developments in supervised autoencoder research offer promising solutions to the single-sample face recognition problem. In this work, a Kullback-Leibler divergence (KLD) approach is proposed to obtain the sparsity-constraint penalty for the deep autoencoder learning process. The approach is evaluated on two datasets, the cropped Extended Yale B and LFWcrop. For comparison, Log and εL1 are also employed as sparsity regularizers. Experimental results confirm that KLD outperforms Log and εL1 in image classification on both datasets.
- Number of pages: 4
- Journal: Journal of Theoretical and Applied Information Technology
- Publication status: Published, 1 May 2016
- Deep autoencoder
- Kullback-Leibler Divergence
- Single sample face recognition
- Sparsity regularizer
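The KLD sparsity penalty named in the abstract is commonly defined as the KL divergence between a target average activation ρ and the observed average activation of each hidden unit. The sketch below illustrates that standard formulation in NumPy; the target value `rho` and the clipping constant `eps` are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def kl_sparsity_penalty(activations, rho=0.05, eps=1e-8):
    """KL-divergence sparsity penalty for an autoencoder hidden layer.

    activations: (n_samples, n_hidden) array of sigmoid hidden activations.
    rho: assumed target average activation (hyperparameter, illustrative).
    eps: small constant to keep the logarithms finite.
    """
    # Average activation of each hidden unit over the batch.
    rho_hat = np.clip(activations.mean(axis=0), eps, 1 - eps)
    # Sum of KL(rho || rho_hat) over all hidden units.
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

# The penalty vanishes when the average activation matches the target
# and grows as the hidden units become less sparse than intended.
sparse_acts = np.full((10, 4), 0.05)
dense_acts = np.full((10, 4), 0.5)
print(kl_sparsity_penalty(sparse_acts))  # → 0.0
print(kl_sparsity_penalty(dense_acts) > 0)  # → True
```

In training, this penalty would be added (with a weight) to the reconstruction loss, pushing most hidden units toward near-zero activation on any given input.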