Naïve Bayes Classifier (NBC) is one of the most popular machine learning methods and has been applied in various fields, including text classification, medical diagnosis, and systems performance management. Claims about this classifier's strong performance have been made in several studies. The main goal of this study is to assess the consistency of the classifier's performance by applying it to five medical datasets and comparing the results with those of another popular classification method, the Decision Tree (DT). The results provide empirical evidence that NBC performs consistently, outperforms DT on four of the five datasets used, and is robust to the presence of missing values. This consistent performance may be attributable to the nature and background of the datasets: datasets with the right attribute variables tend to produce a high true positive rate and accuracy score.
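The study's comparison protocol can be illustrated with a short sketch. The abstract does not name the tooling or the five medical datasets, so the following is an assumption-laden example using scikit-learn's `GaussianNB` and `DecisionTreeClassifier` on a public medical dataset (the Wisconsin breast cancer data bundled with scikit-learn, which is not necessarily one of the study's datasets), scored by 5-fold cross-validated accuracy:

```python
# Hypothetical sketch of the NBC-vs-DT comparison; the paper's actual
# datasets, preprocessing, and evaluation setup are not specified here.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# One medical dataset as a stand-in for the study's five.
X, y = load_breast_cancer(return_X_y=True)

# Mean 5-fold cross-validated accuracy for each classifier.
nb_acc = cross_val_score(GaussianNB(), X, y, cv=5).mean()
dt_acc = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5).mean()

print(f"NBC mean accuracy: {nb_acc:.3f}")
print(f"DT mean accuracy:  {dt_acc:.3f}")
```

Repeating this loop over each dataset, and adding true-positive-rate scoring, would reproduce the kind of per-dataset comparison the abstract describes.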