• Incremental change to current knowledge (e.g., [107,158]) as opposed to the outright dropping of previous knowledge (e.g., [68,167]);
• Adapting the weights associated with learners versus no weighting: weight adaptation represents an intermediate level of refinement in which the models comprising the ensemble remain unchanged but their relative contribution to the vote is modified [2,90,152]. Conversely, weightless frameworks emphasize plasticity and tend to drop weaker ensemble members immediately (e.g., [81,167]);
• Classifier weight adaptation versus data-instance-based weight adaptation: the weighting of votes from an ensemble is generally a function of either classifier performance [54,65,152] or of the data from which a member of the ensemble was constructed (e.g., [20,150]);
• Identification of the ensemble member for replacement: various heuristics have been proposed for targeting the ensemble member to replace when the performance of the ensemble as a whole is deemed to be poor, e.g., replace the oldest member [167] or the member with the least 'contribution' [107,171]; a minimal sketch combining several of these mechanisms follows the list.
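To make the distinctions above concrete, the sketch below combines performance-based weight adaptation, incremental refinement of existing members, and two replacement heuristics (oldest member versus lowest-weight member). It is an illustrative assumption rather than any of the cited algorithms: the class name `WeightedOnlineEnsemble`, the penalty factor `beta`, the `policy` flag, the `new_member_factory` callback, and the base-learner interface (`predict` / `partial_fit`) are all hypothetical.

```python
from collections import Counter


class WeightedOnlineEnsemble:
    """Minimal sketch: performance-weighted voting plus member replacement.

    Assumed, illustrative design; not the exact schemes of the cited papers.
    Base learners are assumed to expose predict(x) and partial_fit(x, y).
    """

    def __init__(self, members, beta=0.8, policy="oldest"):
        self.members = list(members)              # base learners, oldest first
        self.weights = [1.0] * len(self.members)  # one vote weight per member
        self.beta = beta                          # multiplicative penalty for a wrong vote
        self.policy = policy                      # "oldest" or "least_contribution"

    def predict(self, x):
        # Weighted vote: each member adds its current weight to its predicted class.
        votes = Counter()
        for w, m in zip(self.weights, self.members):
            votes[m.predict(x)] += w
        return votes.most_common(1)[0][0]

    def update(self, x, y, new_member_factory):
        ensemble_wrong = self.predict(x) != y

        # Classifier-performance-based weight adaptation: the models themselves
        # stay unchanged, only their contribution to the vote is reduced.
        for i, m in enumerate(self.members):
            if m.predict(x) != y:
                self.weights[i] *= self.beta

        # Incremental change to current knowledge rather than dropping it:
        # every member is refined on the new instance.
        for m in self.members:
            m.partial_fit(x, y)

        # Replacement heuristic, triggered when the ensemble as a whole errs:
        # drop the oldest member or the one contributing least (lowest weight).
        if ensemble_wrong:
            if self.policy == "oldest":
                idx = 0
            else:
                idx = min(range(len(self.weights)), key=self.weights.__getitem__)
            self.members.pop(idx)
            self.weights.pop(idx)
            self.members.append(new_member_factory())  # fresh learner gets full weight
            self.weights.append(1.0)
```

Switching `policy` between "oldest" and "least_contribution" corresponds to the two replacement heuristics in the final item, while setting `beta = 1.0` and never replacing members would recover a plain, weightless majority vote.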