Cross-domain image-to-image translation provides a mechanism for capturing the special characteristics of one image collection and transferring them to another collection with a different representation. Recent research on generative learning has produced powerful image-to-image translation methods in the supervised setting, where paired training datasets are available. However, collecting paired training data is difficult, expensive, and requires manual authoring. We present an evaluation study of recent unsupervised Generative Adversarial Network (GAN) models that learn to translate a facial image from a source domain X to a target domain Y without a paired, labeled training dataset. Each GAN model is trained on the same facial image dataset with comparable hyperparameters, and we report comparison results using the same GAN evaluation metrics.