Instance selection is a pre-processing method for data mining and machine learning. The fundamental issue is that earlier methods still struggle to produce high-quality samples for classifier training. In this work, the system updates the specification of several measures originally designed for meta-learning-based instance selection and compares them with previously used measures in an experimental study involving three sets of measures, 59 databases, 16 instance selection techniques, two classifiers, and eight regression learners acting as meta-learners. The findings suggest that our measures outperform those commonly employed by researchers who have approached instance selection from a meta-learning perspective, and that, compared with the papers used as benchmarks, they provide the learners with more relevant information.

Based on the information that c-measures capture, the algorithm groups them into three categories: overlap of feature values, class separability, and the density, topology, and geometry of manifolds. The approach used in this work to construct the meta-data, together with the performance assessment, establishes new criteria for addressing meta-learning in IS. The system also proposes an alternative to the conventional strategy, in which a trained meta-classifier selects the single best candidate technique: here, regression meta-learners are trained to predict the performance of each candidate method. Such an approach offers additional flexibility for handling variable weight allocations to the objectives in multi-objective problems such as IS, as illustrated by the sketch below.
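The following is a minimal sketch of the regression-based recommendation strategy described above, not the authors' actual implementation. It assumes scikit-learn regressors stand in for the eight regression meta-learners, that per-dataset meta-features (e.g., c-measures) have already been computed, and that the method names in `CANDIDATE_IS_METHODS`, the helper functions, and the objective weights are illustrative placeholders.

```python
# Sketch: one regression meta-learner per IS method and per objective predicts
# that method's performance on a new dataset from its meta-features (c-measures);
# a weighted aggregate of the predicted objectives then picks the recommended method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical subset of candidate instance selection techniques.
CANDIDATE_IS_METHODS = ["CNN", "ENN", "DROP3", "ICF"]


def train_meta_learners(meta_features, accuracy, reduction):
    """Fit one regressor per IS method and per objective.

    meta_features: array (n_datasets, n_measures) of c-measures per training dataset
    accuracy, reduction: dicts mapping method name -> array (n_datasets,) of observed scores
    """
    models = {}
    for method in CANDIDATE_IS_METHODS:
        acc_model = RandomForestRegressor(n_estimators=200, random_state=0)
        red_model = RandomForestRegressor(n_estimators=200, random_state=0)
        acc_model.fit(meta_features, accuracy[method])
        red_model.fit(meta_features, reduction[method])
        models[method] = (acc_model, red_model)
    return models


def recommend(models, new_meta_features, w_accuracy=0.7, w_reduction=0.3):
    """Score every candidate method on a new dataset and return the best one.

    The weights on the objectives can be changed per application, which is the
    flexibility a regression-based formulation offers over a single meta-classifier
    that outputs only the name of the top method.
    """
    x = np.asarray(new_meta_features).reshape(1, -1)
    scores = {}
    for method, (acc_model, red_model) in models.items():
        pred_acc = acc_model.predict(x)[0]
        pred_red = red_model.predict(x)[0]
        scores[method] = w_accuracy * pred_acc + w_reduction * pred_red
    best = max(scores, key=scores.get)
    return best, scores
```

Because each method receives its own predicted accuracy and reduction rate, the same trained meta-learners can serve different preference profiles simply by changing the weights at recommendation time.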