In recent years, applications of topological data analysis in machine learning have grown rapidly and now cover a wide variety of problems. This trend can be explained by the nature of topological signatures, which offer a compact description of the shape of data. One of the most widely used methods of topological data analysis is persistent homology. Some studies have suggested that persistent homology can extract features that are hardly noticed by other methods. Thus, incorporating persistent homology into machine learning frameworks can provide additional information to the model and potentially produce better results. Deep learning, on the other hand, has shown a remarkable ability to learn features from data automatically. However, it is hard to know what kinds of features it learns and whether other useful features are overlooked by the model. This paper reviews applications of persistent homology in machine learning problems. More specifically, we focus on how topological features are exploited alongside features generated by neural network layers. We systematically analyze the methods used to combine these features and examine why topological information can be helpful in such problems. The purpose of this study is to summarize evidence about the role of topological features as complementary information to a deep learning model, and to suggest what kinds of problems could benefit from them. We find that problems involving data with meaningful spatial relationships, where tiny fluctuations should not affect the outcome, can take advantage of persistent homology.
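As a minimal illustration of the kind of feature combination discussed above, the sketch below computes 0-dimensional persistent homology of a point cloud and concatenates simple barcode statistics with features from a network layer. It relies on the standard fact that, for a Vietoris-Rips filtration, the finite H0 bars are born at 0 and die at the edge lengths of the Euclidean minimum spanning tree (here via Prim's algorithm). This is a self-contained sketch under simplifying assumptions, not any specific method from the reviewed papers; the function names (`h0_persistence_deaths`, `topo_summary`) and the random "deep features" are hypothetical placeholders.

```python
import numpy as np

def h0_persistence_deaths(points):
    """Death times of the finite 0-dimensional persistence bars of a
    Vietoris-Rips filtration: equal to the edge lengths of the Euclidean
    minimum spanning tree, computed with Prim's algorithm."""
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = dist[0].copy()          # cheapest edge from the tree to each vertex
    deaths = []
    for _ in range(n - 1):
        best[in_tree] = np.inf     # never re-select vertices already in the tree
        j = int(np.argmin(best))
        deaths.append(best[j])     # MST edge length = death of one H0 class
        in_tree[j] = True
        best = np.minimum(best, dist[j])
    return np.sort(np.array(deaths))

def topo_summary(deaths):
    # Crude vectorization of the barcode: max, mean, and total persistence.
    return np.array([deaths.max(), deaths.mean(), deaths.sum()])

rng = np.random.default_rng(0)
# Two well-separated clusters: one large H0 death (the gap between clusters).
cloud = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
                   rng.normal(5.0, 0.1, (20, 2))])
topo = topo_summary(h0_persistence_deaths(cloud))
deep = rng.normal(size=8)                 # stand-in for a network-layer feature vector
combined = np.concatenate([deep, topo])   # simple feature-level fusion
```

In practice the reviewed works use richer vectorizations (persistence images, landscapes, learned layers) and libraries such as GUDHI or Ripser rather than hand-rolled MST code, but the fusion step is often this simple concatenation of topological and learned features.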