Ensemble learning is a popular classification approach in which many simple individual learners contribute to a final prediction. Constructing an ensemble of learners has often been shown to improve prediction accuracy over that of a single learner. Bagging and boosting are the most common ensemble methods, each with distinct advantages. While boosting methods are typically highly tunable, with numerous parameters, this kind of flexibility has to date been missing from general bagging ensembles. In this paper, we propose a new tunable weighted bagged ensemble methodology, resulting in a highly flexible classification method. We explore the impact that tunable weighting has on the votes of each learner in an ensemble and compare the results with pure bagging and the best-known bagged ensemble method, the random forest.
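To make the idea of tunable weighted voting in a bagged ensemble concrete, the following minimal Python sketch weights each bootstrap tree's vote by its out-of-bag accuracy raised to a hypothetical tuning exponent `alpha`; this is one plausible weighting scheme for illustration only, not the paper's actual algorithm. Plain bagging corresponds to equal weights (`alpha = 0`).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Illustrative sketch: 'alpha' and the out-of-bag weighting rule are
# hypothetical choices, not the weighting scheme proposed in the paper.

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

n_learners = 25
alpha = 2.0  # tuning knob: how sharply a learner's accuracy scales its vote
learners, weights = [], []

for b in range(n_learners):
    # Bootstrap sample, exactly as in ordinary bagging.
    idx = rng.integers(0, len(X_tr), len(X_tr))
    tree = DecisionTreeClassifier(max_depth=3, random_state=b)
    tree.fit(X_tr[idx], y_tr[idx])
    # Weight the learner by its out-of-bag accuracy raised to alpha.
    oob = np.setdiff1d(np.arange(len(X_tr)), idx)
    acc = tree.score(X_tr[oob], y_tr[oob]) if len(oob) else 0.5
    learners.append(tree)
    weights.append(acc ** alpha)

weights = np.array(weights) / np.sum(weights)

# Weighted majority vote: each learner's one-hot class vote is scaled by
# its weight before the per-class totals are compared.
votes = np.stack([w * np.eye(2)[clf.predict(X_te)]
                  for clf, w in zip(learners, weights)])
y_hat = votes.sum(axis=0).argmax(axis=1)
print("weighted-bagging accuracy:", np.mean(y_hat == y_te))
```

Because `alpha` controls how strongly accurate learners dominate the vote, sweeping it from 0 (pure bagging) upward gives a simple picture of the flexibility that tunable weighting adds to a bagged ensemble.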