SVM is widely regarded as a leading approach to the challenging problem of imbalanced data learning. Here, we conduct an empirical classification analysis of new UCI datasets with different imbalance ratios, sizes, and complexities. The experiments compare the classification results of SVM with those of two other popular classifiers, Naive Bayes and the C4.5 decision tree, to explore their respective pros and cons. To make the comparison more comprehensive and better characterize the learning performance of each classifier, we employ four performance metrics in total: Sensitivity, Specificity, G-means, and time-based efficiency. For each benchmark dataset, we perform an empirical search for the best learning model by repeatedly training the three classifiers under different parameter settings and performance measurements. This paper reports the most significant results, i.e., the highest performance achieved by each classifier on each dataset. In summary, SVM outperforms the other two classifiers in terms of Sensitivity (or Specificity) on all datasets, and is more accurate in terms of G-means when classifying large datasets.
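The G-means metric referenced above is the geometric mean of Sensitivity and Specificity, which is why it is a common summary measure for imbalanced classification. A minimal sketch of how these three metrics are computed from binary predictions (the labels and predictions below are hypothetical, for illustration only):

```python
import math

def imbalance_metrics(y_true, y_pred):
    """Compute Sensitivity, Specificity, and G-means for binary labels
    (1 = minority/positive class, 0 = majority/negative class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)  # true-positive rate on the minority class
    specificity = tn / (tn + fp)  # true-negative rate on the majority class
    g_means = math.sqrt(sensitivity * specificity)  # geometric mean of both rates
    return sensitivity, specificity, g_means

# Hypothetical imbalanced sample: 4 positives, 6 negatives
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
sens, spec, g = imbalance_metrics(y_true, y_pred)
```

Because G-means multiplies the two class-wise rates, a classifier that ignores the minority class scores near zero even if its overall accuracy is high, which overall accuracy alone would hide.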