Research methods play a pivotal role in machine learning because the accuracy and reliability of results depend on the methods used. The main aims of this paper were to explore current research methods in machine learning, identify emerging themes, and examine the implications of those themes for machine learning research. To achieve this, the researchers analyzed a total of 100 articles published in IEEE journals since 2019. The study revealed that machine learning research relies on quantitative methods, with experimental research design as the de facto approach. It also showed that researchers now commonly apply more than one algorithm to a single problem, and that optimal feature selection has emerged as a key technique for improving the performance of machine learning algorithms. The confusion matrix and its derivatives remain the main means of evaluating algorithm performance, although researchers now also consider the processing time an algorithm takes to execute. The Python programming language, together with its libraries, is the most widely used tool for creating, training, and testing models. The algorithms most often applied to classification and prediction problems are Naïve Bayes, Support Vector Machine, Random Forest, Artificial Neural Networks, and Decision Tree. The recurring themes identified in this study are likely to open new frontiers in machine learning research.
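To make the evaluation practice the abstract describes concrete, the sketch below computes a binary confusion matrix and its common derivatives (accuracy, precision, recall, F1) by hand in plain Python. This is an illustration only, not code from the paper; the labels are made-up demonstration data, and in practice researchers typically obtain the same quantities from library functions such as those in scikit-learn.

```python
# Illustrative sketch (not from the paper): a binary confusion matrix
# and its derivatives, computed from made-up demonstration labels.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]  # model predictions

# Count the four cells of the confusion matrix.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

# Derivatives commonly reported in the surveyed literature.
accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"TP={tp} FP={fp} TN={tn} FN={fn}")
print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
```

Processing time, the other evaluation criterion the study highlights, can be measured alongside these metrics by timing training and inference (e.g. with Python's `time.perf_counter`).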