Machine learning has become an attractive topic for researchers and industrial firms in the area of computational intelligence because of its proven effectiveness and performance in solving real-world problems. However, challenges such as precise search, intelligent discovery, and intelligent learning remain to be addressed. One of the most important challenges is the unstable performance of machine learning models during online learning and operation. Online learning is the ability of a machine learning model to update its knowledge, without retraining the whole model, when new data become available. To address this challenge, we evaluate and analyze four widely used online machine learning models: Online Sequential Extreme Learning Machine (OSELM), Feature Adaptive OSELM (FA-OSELM), Knowledge Preserving OSELM (KP-OSELM), and Infinite Term Memory OSELM (ITM-OSELM). Specifically, we build a framework that serves as a testbed for the models and configure various evaluation scenarios covering different topological and mathematical factors of the models. Furthermore, we generate time series with different characteristics to be learned. The results demonstrate the clear impact of the tested parameters and scenarios on the models. In terms of accuracy, KP-OSELM and ITM-OSELM are superior to OSELM and FA-OSELM. In terms of time efficiency with respect to the percentage decrease in active features, ITM-OSELM is superior to KP-OSELM.
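To make the notion of online (sequential) learning concrete, the sketch below illustrates an OSELM-style update in Python: the random hidden layer stays fixed, and only the output weights are revised chunk by chunk via recursive least squares, so no full retraining is needed when new data arrive. This is a minimal illustration under assumed settings (sigmoid activation, the chosen hidden-layer size, and synthetic data), not the configuration evaluated in this study.

```python
# Minimal sketch of an OSELM-style sequential update (recursive least squares
# over a fixed random hidden layer). Sizes, activation, and data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden, n_outputs = 4, 20, 1

# Random input weights and biases are drawn once and never retrained.
W = rng.standard_normal((n_features, n_hidden))
b = rng.standard_normal(n_hidden)

def hidden(X):
    """Hidden-layer activations H for a batch of inputs X (sigmoid)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# --- Initialization phase: least-squares fit on an initial data chunk ---
X0 = rng.standard_normal((50, n_features))
T0 = rng.standard_normal((50, n_outputs))
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))  # small ridge for stability
beta = P @ H0.T @ T0

# --- Sequential phase: update beta for each new chunk, no retraining ---
def partial_fit(X_new, T_new):
    """Recursive least-squares update of the output weights beta."""
    global P, beta
    H = hidden(X_new)
    K = np.linalg.inv(np.eye(H.shape[0]) + H @ P @ H.T)
    P = P - P @ H.T @ K @ H @ P
    beta = beta + P @ H.T @ (T_new - H @ beta)

# New observations arrive over time; only beta and P are updated.
for _ in range(5):
    X_new = rng.standard_normal((10, n_features))
    T_new = rng.standard_normal((10, n_outputs))
    partial_fit(X_new, T_new)

predictions = hidden(X_new) @ beta
```

The FA-OSELM, KP-OSELM, and ITM-OSELM variants compared in this work extend this basic scheme to handle changing or temporarily inactive input features while preserving previously acquired knowledge.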