The Sinkhorn divergence, a smooth and symmetric normalization of entropy-regularized optimal transport (EOT), is a promising tool for Generative Adversarial Networks (GANs). However, understanding the dynamics of gradient algorithms for Sinkhorn-based GANs remains a major challenge. In this work, we consider the GAN minimax optimization problem with the Sinkhorn divergence, for which the smoothness and convexity properties of the objective function are critical to convergence and stability. We prove that GANs with a convex-concave Sinkhorn divergence objective converge to a local Nash equilibrium under the first-order simultaneous stochastic gradient descent-ascent (SimSGDA) algorithm, subject to certain approximations. We further present a nonasymptotic analysis of the convergence rate of SimSGDA based on the structural similarity index measure (SSIM). Our experiments, conducted on tiny colored-image datasets (Cats and CelebA) with advanced neural architectures (DCGAN and ResNet), suggest a convergence rate proportional to the inverse of the number of SGDA iterations. We thereby demonstrate that SSIM is a potential tool for empirically measuring the convergence rate of the SimSGDA algorithm.
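For reference, the Sinkhorn divergence mentioned above is, in the standard formulation from the Sinkhorn-divergence literature, the debiased, symmetric normalization of entropy-regularized optimal transport:

```latex
% Standard debiased, symmetric form of entropy-regularized OT,
% where OT_eps denotes the entropy-regularized transport cost
% with regularization parameter eps > 0.
\[
  S_\varepsilon(\alpha, \beta)
    = \mathrm{OT}_\varepsilon(\alpha, \beta)
    - \tfrac{1}{2}\,\mathrm{OT}_\varepsilon(\alpha, \alpha)
    - \tfrac{1}{2}\,\mathrm{OT}_\varepsilon(\beta, \beta).
\]
```

The two self-transport terms remove the entropic bias of $\mathrm{OT}_\varepsilon$, which is what makes $S_\varepsilon$ smooth, symmetric, and nonnegative, and hence suitable as a GAN objective.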
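To make the algorithmic setting concrete, the following is a minimal sketch of one SimSGDA step, not the paper's exact implementation. It assumes a generator `gen` with parameters to be descended, a discriminator `disc` with parameters to be ascended, step sizes `eta_g` and `eta_d`, and a hypothetical stochastic estimator `sinkhorn_loss` of the Sinkhorn divergence between real and generated features.

```python
import torch

def simsgda_step(gen, disc, sinkhorn_loss, real_batch, noise,
                 eta_g=1e-4, eta_d=1e-4):
    """One simultaneous SGDA step on the minimax Sinkhorn objective.

    Both players' gradients are evaluated at the *same* iterate before
    either set of parameters moves, which is what distinguishes
    simultaneous from alternating gradient descent-ascent.
    `sinkhorn_loss` is a hypothetical stochastic Sinkhorn estimator.
    """
    fake_batch = gen(noise)
    loss = sinkhorn_loss(disc(real_batch), disc(fake_batch))

    # Gradients for both players at the current point.
    g_grads = torch.autograd.grad(loss, list(gen.parameters()),
                                  retain_graph=True)
    d_grads = torch.autograd.grad(loss, list(disc.parameters()))

    with torch.no_grad():
        for p, g in zip(gen.parameters(), g_grads):
            p -= eta_g * g   # generator: descent on the loss
        for p, g in zip(disc.parameters(), d_grads):
            p += eta_d * g   # discriminator: ascent on the loss
    return loss.item()
```

Under the convex-concave assumption on the objective, iterating this simultaneous update is the scheme whose convergence to a local Nash equilibrium the analysis addresses.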