Most biomedical datasets, including those from 'omics, population studies and surveys, are rectangular in shape and have few missing data. Their sample sizes have grown significantly in recent years, and rigorous analyses of these large datasets demand considerably more efficient and more accurate algorithms. Machine learning (ML) algorithms, including random forests (RF), decision trees (DT), artificial neural networks (ANN) and support vector machines (SVM), have been used to classify outcomes in biomedical datasets. However, their performance and efficiency in classifying multi-category outcomes in rectangular data are poorly understood. We therefore aimed to compare these metrics among the four ML algorithms.

As an example, we created a large rectangular dataset from the female breast cancers in the Surveillance, Epidemiology, and End Results-18 (SEER-18) database that were diagnosed in 2004 and followed up until December 2016. The outcome was the 6-category cause of death: alive, non-breast cancer, breast cancer, cardiovascular disease, infection and other cause. We included 58 dichotomized features from ~53,000 patients. All analyses were performed using MATLAB (version 2018a) with the 10-fold cross validation approach.

The accuracy in classifying the 6-category cause of death with DT, RF, ANN and SVM was 72.68%, 72.66%, 70.01% and 71.85%, respectively. Based on the information entropy and information gain of feature values, we optimized dimension reduction (i.e. reduced the number of features in the models). We found that 22 or more features were required to maintain similar accuracy, while the running time decreased from 440 s for 58 features to 90 s for 22 features in RF, from 70 s to 40 s in ANN, and from 440 s to 80 s in SVM. In summary, we show that RF, DT, ANN and SVM had similar accuracy for classifying multi-category outcomes in this large rectangular dataset.
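The comparison described above (the four classifiers evaluated with 10-fold cross validation on a rectangular dataset of dichotomized features and a multi-category outcome) can be sketched as follows. This is a minimal illustration, not the authors' MATLAB pipeline: it uses scikit-learn rather than MATLAB, and a synthetic stand-in for the SEER-18 data (which is not reproduced here); the sample size, feature count and model hyperparameters are assumptions for demonstration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the SEER-18 rectangular dataset:
# 58 dichotomized (0/1) features and a 6-category outcome.
X, y = make_classification(n_samples=1200, n_features=58, n_informative=20,
                           n_classes=6, n_clusters_per_class=1, random_state=0)
X = (X > 0).astype(int)  # dichotomize the features

# The four algorithm families compared in the study; the specific
# hyperparameters here are illustrative assumptions.
models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "ANN": MLPClassifier(hidden_layer_sizes=(50,), max_iter=300, random_state=0),
    "SVM": SVC(kernel="rbf", random_state=0),
}

# Mean accuracy over 10-fold cross validation for each model.
accuracies = {name: cross_val_score(m, X, y, cv=10, scoring="accuracy").mean()
              for name, m in models.items()}
for name, acc in accuracies.items():
    print(f"{name}: {acc:.4f}")
```

On real data, the same loop would also record per-model running time to reproduce the efficiency comparison reported above.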
Dimension reduction based on information gain significantly increases a model's efficiency while maintaining classification accuracy.
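The information-gain criterion underlying this dimension reduction can be sketched as below: for each dichotomized feature, the gain is the outcome entropy H(y) minus the entropy of y remaining after splitting on that feature; features are then ranked by gain and the lowest-ranked ones dropped. This is a generic NumPy illustration of the criterion, not the authors' implementation, and the toy arrays are invented for demonstration.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (in bits) of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(x, y):
    """Information gain of a discrete feature x with respect to outcome y:
    H(y) minus the weighted entropy of y within each value of x."""
    gain = entropy(y)
    for v in np.unique(x):
        mask = x == v
        gain -= mask.mean() * entropy(y[mask])
    return gain

# Toy example: a feature aligned with part of the outcome has high gain;
# ranking features by this score drives the dimension reduction.
y = np.array([0, 0, 1, 1, 2, 2])
informative = np.array([0, 0, 0, 0, 1, 1])  # separates class 2 perfectly
print(information_gain(informative, y))
```

In the study's setting, this ranking showed that 22 of the 58 features sufficed to keep accuracy essentially unchanged while cutting running time severalfold.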