Cancer remains one of the diseases with the highest mortality rates in the world. Histopathological images are one type of image that can be used to detect the presence of cancer in the human body. Deep learning, as the state of the art, has been applied by researchers to analyze cancer images. One deep learning architecture is the Residual Network (ResNet). This architecture is characterized by shortcut connections that add a layer's input to its output, which increases memory and processor demands during the training process. In this work, we propose parallelizing the ResNet model across three GTX-1080 Graphics Processing Units (GPUs) to carry out training. The performance of the three GPUs is measured by GPU processor and memory utilization and by the speedup achieved during training. The advantage of parallelization with multiple GPUs is that it overcomes the out-of-memory errors that larger batch sizes cause on a single GPU. This study uses batch sizes of 8, 16, 24, and 32 as research scenarios. The results show that processor and memory utilization is more efficient for larger batch sizes. On average, processor utilization for GPUs 1, 2, and 3 is 66%, 61.5%, and 81.5%, respectively, while GPU memory utilization is 44%, 40%, and 48.2%.
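The abstract does not state the parallelization scheme in detail; in typical data-parallel training, each global batch is sharded into near-equal slices, one per GPU, so each device holds only its slice of activations. This is why a batch that overflows a single GPU's memory can fit across three. A minimal sketch of that sharding, using a hypothetical `shard_batch` helper and the paper's batch-size scenarios:

```python
def shard_batch(batch, n_devices):
    """Split a global batch into near-equal shards, one per device,
    mimicking how data-parallel training scatters inputs to GPUs."""
    base, extra = divmod(len(batch), n_devices)
    shards, start = [], 0
    for d in range(n_devices):
        # The first `extra` devices take one extra sample each.
        size = base + (1 if d < extra else 0)
        shards.append(batch[start:start + size])
        start += size
    return shards

# The paper's scenarios: global batch sizes 8, 16, 24, 32 on 3 GPUs.
for bs in (8, 16, 24, 32):
    shards = shard_batch(list(range(bs)), 3)
    print(bs, [len(s) for s in shards])
```

Each GPU then computes gradients on its shard, and the gradients are averaged before the weight update, so the effective batch size remains the global one.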