With the rapid growth of real-world data, models become increasingly expensive to train and prone to poor generalization, making the selection of the right feature set a significant issue. This study proposes ImprovedRFECV, an enhanced feature selection algorithm based on recursive feature elimination with cross-validation (RFECV). The algorithm first increases the robustness of the optimal feature subset by randomly sampling the data multiple times, building a model on each sample, and comparing their scores. Simultaneously, L1 and L2 regularization scores are introduced to evaluate the value of each feature more thoroughly, reducing the influence of interfering features and further improving the algorithm's accuracy and stability. Additionally, a multi-model ensemble learning framework is utilized to improve generalization ability and effectively prevent overfitting. Finally, a both-end expansion removal strategy is adopted to address strong collinearity among features while enhancing the flexibility of the algorithm. The experimental results show that, compared with the RFECV algorithm, ImprovedRFECV selects a smaller optimal feature subset on average and performs better on that subset across five datasets from five different domains, demonstrating the algorithm's high robustness and generalization ability.
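Since the method is only sketched at a high level here, the following Python snippet gives a minimal, illustrative sketch of three of the stated ingredients: repeated random subsampling, L1- and L2-regularized models used as a small ensemble, and vote aggregation over the features each run selects. It is not the authors' implementation; the subsample fraction, number of rounds, voting threshold, and choice of scikit-learn's `RFECV` as the per-run selector are all assumptions, and the both-end expansion removal strategy is not modeled.

```python
# A minimal sketch, assuming scikit-learn's RFECV and illustrative settings;
# not the ImprovedRFECV implementation described in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                           random_state=0)

# Small ensemble: one L1-regularized and one L2-regularized linear model,
# standing in for the L1/L2 regularization scores mentioned in the abstract.
estimators = [
    LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
    LogisticRegression(penalty="l2", solver="liblinear", C=1.0),
]

rng = np.random.default_rng(0)
n_rounds, frac = 10, 0.8          # assumed subsampling settings
votes = np.zeros(X.shape[1])

for _ in range(n_rounds):
    # Random subsample of the data for this round.
    idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
    for est in estimators:
        # Cross-validated recursive feature elimination on the subsample.
        selector = RFECV(est, step=1, cv=StratifiedKFold(5),
                         scoring="accuracy")
        selector.fit(X[idx], y[idx])
        votes += selector.support_   # accumulate per-feature votes

# Keep features selected in at least half of all (round, model) runs;
# the 50% threshold is an illustrative choice.
selected = np.flatnonzero(votes >= 0.5 * n_rounds * len(estimators))
print("selected features:", selected)
```

Aggregating selections across subsamples and regularized estimators in this way tends to stabilize the final subset, which is the robustness effect the abstract attributes to its sampling and ensemble components.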